Conditional expectation of number of trials











Consider $n$ independent trials, each of which results in one of the outcomes $\{1, \ldots, k\}$ with respective probabilities $p_1, p_2, \ldots, p_k$, where these probabilities sum to $1$. Let $N_i$ denote the number of trials that result in outcome $i$, for $i = 1, \ldots, k$. For $i \neq j$, find $\mathbb{E}[N_i \mid N_j > 0]$.



I tried to write it as a double sum over $i$ and $j$, expanding the conditional probability as $\mathbb{P}(N_i = a \mid N_j = b) = \dfrac{\mathbb{P}(N_i = a \cap N_j = b)}{\mathbb{P}(N_j = b)}$, but nothing came of it. How should I proceed?
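Before hunting for a closed form, it can help to know what number you are aiming for. The sketch below (my addition, not part of the question; the concrete values $n = 10$, $p = (0.2, 0.3, 0.5)$, with outcomes $1$ and $2$ playing the roles of $i$ and $j$, are arbitrary) estimates $\mathbb{E}[N_i \mid N_j > 0]$ directly from its definition by rejection sampling.

```python
import random

def estimate_conditional_mean(n, p, i, j, samples=200_000, seed=0):
    """Monte Carlo estimate of E[N_i | N_j > 0] for n independent trials.

    p lists the outcome probabilities; i and j are 0-based outcome
    indices. Runs where N_j = 0 are discarded (rejection sampling).
    """
    rng = random.Random(seed)
    total, kept = 0, 0
    for _ in range(samples):
        counts = [0] * len(p)
        for _ in range(n):
            # Draw one trial by inverting the CDF of p.
            u, acc = rng.random(), 0.0
            for outcome, prob in enumerate(p):
                acc += prob
                if u < acc:
                    counts[outcome] += 1
                    break
        if counts[j] > 0:          # condition on the event {N_j > 0}
            total += counts[i]
            kept += 1
    return total / kept

est = estimate_conditional_mean(n=10, p=[0.2, 0.3, 0.5], i=0, j=1)
print(est)
```

With these parameters the estimate comes out slightly below the unconditional mean $np_i = 2$, which makes sense: learning that at least one trial produced outcome $j$ leaves, on average, fewer trials for outcome $i$.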










      probability conditional-expectation conditional-probability






      asked Nov 15 at 18:58









      liz























2 Answers






Hint: use the Law of Total Expectation:
$$\mathsf E(N_i)=\mathsf E(N_i\mid N_j{=}0)~\mathsf P(N_j{=}0)+\mathsf E(N_i\mid N_j{>}0)~\mathsf P(N_j{>}0)$$

$$\therefore \mathsf E(N_i\mid N_j{>}0)=\dfrac{\mathsf E(N_i)-\mathsf E(N_i\mid N_j{=}0)~\mathsf P(N_j{=}0)}{\mathsf P(N_j{>}0)}$$

The terms in this fraction may be evaluated by noticing that $N_i\sim\mathsf{Binom}(n,p_i)$, $N_j\sim\mathsf{Binom}(n,p_j)$, and $(N_i\mid N_j{=}0)\sim\mathsf{Binom}(n,\tfrac{p_i}{1-p_j})$.

[Given that none of the trials results in outcome $j$, the conditional probability that a particular trial results in outcome $i$ is $p_i/(1-p_j)$.]






                  edited Nov 16 at 2:42

























                  answered Nov 15 at 23:09









                  Graham Kemp
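The hint's three ingredients are easy to assemble and check numerically. The sketch below (my addition; the values $n = 10$, $p_i = 0.2$, $p_j = 0.3$ are arbitrary) plugs them into the displayed fraction and cross-checks the result against an exact enumeration of the joint trinomial distribution of $(N_i, N_j)$.

```python
from math import comb

n, p_i, p_j = 10, 0.2, 0.3   # arbitrary example values

# Assemble the hint's formula from its three binomial ingredients.
e_ni = n * p_i                       # E(N_i), since N_i ~ Binom(n, p_i)
p_zero = (1 - p_j) ** n              # P(N_j = 0), since N_j ~ Binom(n, p_j)
e_given_zero = n * p_i / (1 - p_j)   # E(N_i | N_j = 0), Binom(n, p_i/(1-p_j))
hint_value = (e_ni - e_given_zero * p_zero) / (1 - p_zero)

# Cross-check by exact enumeration of the trinomial joint pmf of (N_i, N_j):
# P(N_i=a, N_j=b) = C(n,a) C(n-a,b) p_i^a p_j^b (1-p_i-p_j)^(n-a-b).
num = den = 0.0
for a in range(n + 1):
    for b in range(n + 1 - a):
        pmf = (comb(n, a) * comb(n - a, b)
               * p_i**a * p_j**b * (1 - p_i - p_j) ** (n - a - b))
        if b > 0:                    # condition on {N_j > 0}
            num += a * pmf
            den += pmf
brute_value = num / den

print(hint_value, brute_value)
```

The two computations agree to floating-point precision, which is a reassuring check that the decomposition and the three binomial facts fit together.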

$\newcommand{\E}{\mathbb{E}}\newcommand{\P}{\mathbb{P}}$Using Bayes' rule we can write
$$
\P(N_i = k \mid N_j > 0) = \frac{\P(N_j > 0 \mid N_i = k)\,\P(N_i = k)}{\P(N_j > 0)} = \frac{\binom{n}{k} p_i^k (1-p_i)^{n-k}}{1 - (1-p_j)^n}\,\P(N_j > 0 \mid N_i = k).
$$

But when $N_i = k$, there are $n-k$ remaining independent trials, each of which results in outcome $j$ with conditional probability $p_j/(1-p_i)$. As a result
$$
\P(N_j > 0 \mid N_i = k) = 1 - \Bigl(1 - \tfrac{p_j}{1-p_i}\Bigr)^{n-k},
$$

giving
$$
\P(N_i = k \mid N_j > 0) = \frac{\binom{n}{k}\, p_i^k \bigl[(1-p_i)^{n-k} - (1-p_i-p_j)^{n-k}\bigr]}{1 - (1-p_j)^n}.
$$

Thus
$$
\E(N_i \mid N_j > 0) = \frac{1}{1 - (1-p_j)^n} \sum_{k=1}^n k \binom{n}{k} p_i^k \bigl[(1-p_i)^{n-k} - (1-p_i-p_j)^{n-k}\bigr] = \frac{n p_i \bigl[1 - (1-p_j)^{n-1}\bigr]}{1 - (1-p_j)^n},
$$
where the last step uses $\sum_k k\binom{n}{k} p^k q^{n-k} = np\,(p+q)^{n-1}$.






                          answered Nov 15 at 22:56









                          cdipaolo
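The conditional pmf above can be checked numerically. The sketch below (my addition; the values $n = 10$, $p_i = 0.2$, $p_j = 0.3$ are arbitrary) tabulates $\mathbb{P}(N_i = k \mid N_j > 0)$ from the Bayes'-rule factors, taking $p_j/(1-p_i)$ as each remaining trial's conditional chance of outcome $j$, and confirms that the pmf sums to $1$ and reproduces the closed-form mean $np_i\bigl[1-(1-p_j)^{n-1}\bigr]/\bigl[1-(1-p_j)^n\bigr]$.

```python
from math import comb

n, p_i, p_j = 10, 0.2, 0.3   # arbitrary example values

def cond_pmf(k):
    """P(N_i = k | N_j > 0), assembled from the Bayes'-rule factors."""
    prior = comb(n, k) * p_i**k * (1 - p_i) ** (n - k)    # P(N_i = k)
    # Each of the remaining n-k trials is outcome j w.p. p_j / (1 - p_i).
    hit_j = 1 - (1 - p_j / (1 - p_i)) ** (n - k)          # P(N_j > 0 | N_i = k)
    return prior * hit_j / (1 - (1 - p_j) ** n)           # divide by P(N_j > 0)

pmf = [cond_pmf(k) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))
closed_form = n * p_i * (1 - (1 - p_j) ** (n - 1)) / (1 - (1 - p_j) ** n)
print(sum(pmf), mean, closed_form)
```

Note that `cond_pmf(n)` is $0$, as it should be: if all $n$ trials produce outcome $i$, then $N_j = 0$, so $N_i = n$ is impossible given $N_j > 0$.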
