Understanding piped commands in Unix/Linux





I have two simple programs: A and B. A would run first, then B gets the “stdout” of A and uses it as its “stdin”. Assume I am using a GNU/Linux operating system and the simplest possible way to do this would be:



./A | ./B


If I had to describe this command, I would say that it is a command that takes input (i.e., reads) from a producer (A) and writes to a consumer (B). Is that a correct description? Am I missing anything?










Tags: pipe, terminology

asked Apr 21 at 11:59 by nihulus, edited Apr 22 at 17:15 by G-Man






  • Related: In what order do piped commands run?

    – G-Man
    Apr 21 at 21:35

  • It's not a command; it's a kernel object created by the bash process, which is used as the stdout of process A and the stdin of process B. The two processes are started at nearly the same time.

    – 炸鱼薯条德里克
    Apr 22 at 0:45






  • @炸鱼 You're correct: for the kernel, a pipeline is an object in the pipefs filesystem, but as far as the shell itself is concerned, technically that's a pipeline command.

    – Sergiy Kolodyazhnyy
    Apr 22 at 1:30






2 Answers

The only thing about your question that stands out as wrong is that you say




A would run first, then B gets the stdout of A




In fact, both programs would be started at pretty much the same time. If there's no input for B when it tries to read, it will block until there is input to read. Likewise, if there's nobody reading the output from A, its writes will block until its output is read (some of it will be buffered by the pipe).



The only thing synchronising the processes that take part in a pipeline is the I/O, i.e. the reading and writing. If no writing or reading happens, the two processes run totally independently of each other. If one ignores the reading or writing of the other, the ignored process will block and, when the other process terminates, will eventually either be killed by a SIGPIPE signal (if it was writing) or get an end-of-file condition on its standard input stream (if it was reading).
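A small sketch of that behaviour (assuming bash or another POSIX shell, plus coreutils): both sides of the pipeline start at essentially the same time, and the left-hand side is terminated by SIGPIPE as soon as the right-hand side exits.

    # 'yes' on its own would run forever; 'head' exits after three lines,
    # after which the next write by 'yes' fails with SIGPIPE and the whole
    # pipeline finishes. Both timestamps are printed immediately.
    { date '+left  side started %T' >&2; yes; } | { date '+right side started %T' >&2; head -n 3; }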



The idiomatic way to describe A | B is that it's a pipeline containing two programs. The output produced on standard output from the first program is available to be read on the standard input by the second ("[the output of] A is piped into B"). The shell does the required plumbing to allow this to happen.
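To make the plumbing concrete, here is a rough hand-rolled equivalent using a named pipe, along the lines of the mkfifo variant mentioned in the comments below (a sketch only: the shell actually uses an anonymous pipe plus fork() and dup2(), not a file on disk, and /tmp/p is just an assumed scratch path):

    mkfifo /tmp/p       # create a named pipe
    ./B < /tmp/p &      # start the reader in the background, stdin from the pipe
    ./A > /tmp/p        # start the writer, stdout into the pipe
    wait                # wait for the background reader to finish
    rm /tmp/p           # clean up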



If you want to use the words "consumer" and "producer", I suppose that's ok too.



The fact that these are programs written in C is not relevant. The fact that this is Linux, macOS, OpenBSD or AIX is not relevant.






answered Apr 21 at 12:26 by Kusalananda, edited Apr 21 at 22:03






  • Writing to a temporary file was used in DOS, as that didn't support multiple processes.

    – CSM
    Apr 21 at 17:56

  • @AlexVong Note though that your example with a temporary file is not exactly equivalent. A program may choose to seek through the contents of a file, but data coming off a pipe is not seekable. A better example would be to use mkfifo to create a named pipe, then start B in the background reading from the pipe, and then A writing to it. This is nit-picking though, as the effect would be the same.

    – Kusalananda
    Apr 21 at 17:58

  • @AlexVong The simplifications made in that article divorce it from real pipelines; the parallel execution is truly semantic, not an optimisation. It's a reasonable lies-to-children explanation of monadic evaluation or composition for someone who's seen shell pipelines, but it's not valid in the other direction. Kusalananda's fifo version is closer, but the error propagation parts of the model are genuinely important and not replicable. (all of which I say as someone who is very on the "shell pipelines are just function composition" train)

    – Michael Homer
    Apr 21 at 21:41

  • @AlexVong No, that's completely off track. That isn't able to explain even something simple like yes | sed 10q

    – Uncle Billy
    Apr 21 at 22:29

  • @UncleBilly I agree with your example. This shows that parallel execution is really required, as also noted by Michael. Otherwise, we'll get non-termination.

    – Alex Vong
    Apr 21 at 23:53






The term usually used in documentation is "pipeline", which consists of one or more commands; see the POSIX definition. So technically speaking, that's two commands you have there, two subprocesses for the shell (either fork()+exec()'ed external commands or subshells).
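A quick way to see those subprocesses (a sketch assuming bash, where $BASHPID names the current process while $$ keeps the parent shell's PID even inside a subshell):

    echo "parent shell pid: $$"
    seq 3 | ( echo "right-hand side runs as pid $BASHPID (parent shell is $$)"; cat )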



As for the producer-consumer part, the pipeline can be described by that pattern, since:

  • Producer and consumer share a fixed-size buffer; at least on Linux and macOS, the pipe buffer has a fixed size.

  • Producer and consumer are loosely coupled; commands in a pipeline don't know of each other's existence (unless they actively check the /proc/<pid>/fd directory).

  • Producers write to stdout and consumers read from stdin just as they would if run on their own, i.e., they can exist without each other.

The difference I see here is that, unlike producer-consumer implementations in other languages, shell commands use buffering: they write to stdout once their buffer is filled. The producer-consumer pattern itself doesn't require that; it only says to wait when the queue is full, or to discard data (which is something a pipeline never does).
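A rough illustration of the fixed-size buffer and the blocking it causes (a sketch assuming bash or another POSIX shell plus coreutils; the commonly cited 64 KiB Linux pipe buffer is an assumption, not guaranteed):

    # dd tries to push 256 KiB into the pipe while the reader is still
    # sleeping. It is allowed to run ahead by roughly one pipe buffer,
    # then blocks until the reader wakes up and drains the pipe, so the
    # second timestamp arrives about three seconds after the first.
    {
        date '+writer started  %T' >&2
        dd if=/dev/zero bs=1k count=256 2>/dev/null
        date '+writer finished %T' >&2
    } | { sleep 3; wc -c; }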






answered Apr 21 at 22:38 by Sergiy Kolodyazhnyy