Probability of finding specific sequences


























I am running a series of trials. Each trial has the following possible outcomes:

Event A - 10%

Event B - 20%

Event C - 70%



However, success is not determined by a single trial, but by a series of trials that produce specific sequences.



An example sequence for 20 trials might be:

ACCCBCCACCBCBCCCCBBC



Success is determined by the following two criteria:

Event A

Event B followed by either Event B or Event A (with or without intervening Event Cs)



In the above example sequence, success occurs 4 times: first by the sequence A, then by the sequence BCCA, then by the sequence BCB, and finally by the sequence BB.
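The counting rule above can be expressed as a small state machine; the Python sketch below (the function name is my own) reproduces the hand count:

```python
def count_successes(seq):
    """Count successes in a sequence of A/B/C events: a lone A is a
    success, and a B completed later by A or B (Cs in between are
    ignored) is a success. After each success the state resets."""
    successes = 0
    pending_b = False              # waiting for a B to be completed?
    for event in seq:
        if event == "C":
            continue               # C never changes the state
        if pending_b:              # A or B completes the pending B
            successes += 1
            pending_b = False
        elif event == "A":         # lone A is an immediate success
            successes += 1
        else:                      # event == "B": start waiting
            pending_b = True
    return successes

print(count_successes("ACCCBCCACCBCBCCCCBBC"))  # 4, matching the hand count
```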



How does one determine the probability of success given a sequence produced by n trials? That is, for a large n, what is the expected number of successes?



EDIT: It has been suggested that I assume a Markov process and proceed accordingly. As I have zero knowledge of Markov chains, I've done a few hours of research and put together the following. (Most sites I encountered were so far over my head that I had no chance of understanding them.)



The first step in establishing a Markov chain is to define the probabilities of state transitions. Of course, before state transitions can be defined, one first has to identify the states.



For my problem, I've concluded there are three states.

N is the neutral state. This is the initial state.

B is the state in which Event B has happened but has not yet terminated in success.

S is the success state.



The following is a table of state transition probabilities:

N -> N = 0.7

N -> B = 0.2

N -> S = 0.1

B -> N = 0.0

B -> B = 0.7

B -> S = 0.3

S -> N = 0.7

S -> B = 0.2

S -> S = 0.1



This yields a transition matrix:
$M = \begin{bmatrix}0.7 & 0.2 & 0.1\\0.0 & 0.7 & 0.3\\0.7 & 0.2 & 0.1\end{bmatrix}$



The next step is to produce an initial state matrix. Here I start in the N state.
$x_0 = \begin{bmatrix}1\\0\\0\end{bmatrix}$



Multiplying them together yields:
$Mx_0 = \begin{bmatrix}0.7 & 0.2 & 0.1\\0.0 & 0.7 & 0.3\\0.7 & 0.2 & 0.1\end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix} = \begin{bmatrix}0.7\\0\\0.7\end{bmatrix} = x_1$



What does this tell me? Am I on the right path at all? How do I continue?
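Independently of the matrix algebra, the expected number of successes can be sanity-checked by direct Monte Carlo simulation of the trials. The sketch below (names are my own) draws outcomes with the given probabilities and counts successes with the state logic described above:

```python
import random

random.seed(0)

def simulate(n):
    """Run n trials with P(A)=0.1, P(B)=0.2, P(C)=0.7 and count successes."""
    successes = 0
    pending_b = False
    for _ in range(n):
        r = random.random()
        event = "A" if r < 0.1 else ("B" if r < 0.3 else "C")
        if event == "C":
            continue
        if pending_b:              # A or B completes a pending B
            successes += 1
            pending_b = False
        elif event == "A":
            successes += 1
        else:                      # B: start waiting
            pending_b = True
    return successes

runs, n = 2000, 100
avg = sum(simulate(n) for _ in range(runs)) / runs
print(avg)   # close to 0.18 * 100 = 18 for sequences this long
```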






































      probability statistics






      edited Nov 30 '18 at 20:05







      Kadara Bilk

















      asked Nov 30 '18 at 17:51









      Kadara Bilk


          2 Answers

          I think you are on the right track, though I suspect the probabilities after each step should add up to $1$, perhaps with $$x_0 M = \begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix}0.7 & 0.2 & 0.1\\0.0 & 0.7 & 0.3\\0.7 & 0.2 & 0.1\end{bmatrix} = \begin{bmatrix}0.7 & 0.2 & 0.1\end{bmatrix} = x_1$$ and $$x_0 M^2 = x_1 M = \begin{bmatrix}0.7 & 0.2 & 0.1\end{bmatrix}\begin{bmatrix}0.7 & 0.2 & 0.1\\0.0 & 0.7 & 0.3\\0.7 & 0.2 & 0.1\end{bmatrix} = \begin{bmatrix}0.56 & 0.3 & 0.14\end{bmatrix} = x_2$$



          The steady-state distribution for your Markov chain is $\begin{bmatrix}0.42 & 0.4 & 0.18\end{bmatrix}$, so the expected number of successes from a large number $n$ of trials will not be far from $0.18\,n$.



          But you do not start in the steady state. In fact $\begin{bmatrix}0.7 & 0.2 & 0.1\end{bmatrix} - \begin{bmatrix}0.42 & 0.4 & 0.18\end{bmatrix} = \begin{bmatrix}0.28 & -0.2 & -0.08\end{bmatrix}$ and $\begin{bmatrix}0.56 & 0.3 & 0.14\end{bmatrix} - \begin{bmatrix}0.42 & 0.4 & 0.18\end{bmatrix} = \begin{bmatrix}0.14 & -0.1 & -0.04\end{bmatrix}$, with the difference from the steady state halving at each step.



          So the expected number of successes after $n$ trials is $$(0.18 - 0.08) + (0.18 - 0.04) + (0.18 - 0.02) + cdots + (0.18 - 0.08 times 2^{-(n-1)})$$ which is $$0.18n - 0.16 left(1- 2^{-n}right)$$
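This closed form is easy to check numerically (a quick sketch, assuming NumPy): iterate $x_{k+1} = x_k M$ and accumulate the probability of being in state S at each step.

```python
import numpy as np

# Transition matrix from the question (rows sum to 1): states N, B, S
M = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.7, 0.3],
              [0.7, 0.2, 0.1]])

x = np.array([1.0, 0.0, 0.0])   # start in state N
expected = 0.0
n = 30
for _ in range(n):
    x = x @ M                   # row vector times matrix
    expected += x[2]            # probability of being in S at this step

closed_form = 0.18 * n - 0.16 * (1 - 2.0 ** -n)
print(expected, closed_form)    # the two agree, both about 5.24
```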






          answered Nov 30 '18 at 20:44 by Henry
          • How did you achieve the steady state distribution? Also, thank you for showing that I needed to transpose the x matrix. – Kadara Bilk, Nov 30 '18 at 20:54










          • @KadaraBilk: The transpose of $M$ has three eigenvalues: $1, 0.5, 0$. The steady-state distribution is essentially the eigenvector corresponding to $1$, rescaled so its terms add up to $1$. But an easier empirical approach is to look at what happens to $x_0M^n$ as $n$ increases. – Henry, Nov 30 '18 at 21:25
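The eigenvector computation described in this comment can be reproduced with NumPy; the following is only an illustrative sketch:

```python
import numpy as np

# Transition matrix from the question: states ordered N, B, S
M = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.7, 0.3],
              [0.7, 0.2, 0.1]])

# A left eigenvector of M is a right eigenvector of M.T
vals, vecs = np.linalg.eig(M.T)
k = np.argmin(np.abs(vals - 1.0))   # index of the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                  # rescale so its terms add up to 1

print(np.real(vals))  # eigenvalues 1, 0.5, 0 (in some order)
print(pi)             # steady state, approximately [0.42 0.4  0.18]
```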





















          If you consider the occurrences to be i.i.d., then you can model the process as a Markov chain and do the corresponding calculations.






          answered Nov 30 '18 at 17:53 by Thomas Lang
          • I've added to the question based on information from your answer. I know nothing about probability theory, so I'm not sure I'm going anywhere. – Kadara Bilk, Nov 30 '18 at 20:06










