Finding the variance on the number of steps required to reach an absorbing state in a MC using the...














I'm trying to implement a Markov chain class in Python to test some theoretical results against actual iterations over the matrix itself. What I've implemented so far seems to work and produces the expected results, but I'm at a loss on how to calculate the variance of the number of steps needed to reach an absorbing state.



I'm pulling the formula used for this calculation from this article on Wikipedia, namely:



$$
(2N-I_t)t-t_{sq}
$$



where:



$$
\begin{aligned}
N &: \text{fundamental matrix}\\
I_t &: \text{identity matrix}\\
t &: \text{expected number of steps } (N\mathbf{1})\\
t_{sq} &: \text{Hadamard product of } t \text{ with itself}
\end{aligned}
$$
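
Spelling these out (as I read the same article), with $Q$ the transient-to-transient block of the transition matrix, the two ingredients are

$$
N = (I - Q)^{-1}, \qquad t = N\mathbf{1}.
$$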



The code I use to compute this is:



# self._N  : fundamental matrix N
# self._es : expected number of steps t = N·1
# self._It : identity matrix of the same size as N (numpy imported as np)
t = self._es
twoN = np.multiply(2, self._N)                   # 2N
sub = np.subtract(twoN, self._It)                # 2N - I
dot = np.dot(sub, t)                             # (2N - I)t
variance = np.subtract(dot, np.multiply(t, t))   # (2N - I)t - t∘t
std = [np.sqrt(el) for el in variance]           # elementwise standard deviation


The matrix I'm using as an example is the following (HTH coin toss sequence / substring generation):



$$
\begin{bmatrix}
1/2 & 1/2 & 0 & 0\\
0 & 1/2 & 1/2 & 0\\
1/2 & 0 & 0 & 1/2\\
0 & 0 & 0 & 1
\end{bmatrix}
$$



It follows that the Fundamental matrix is:



$$
\begin{bmatrix}
4 & 4 & 2\\
2 & 4 & 2\\
2 & 2 & 2
\end{bmatrix}
$$



The expected numbers of steps to reach the absorbing state, one per starting state, are then:



$$
\begin{bmatrix}
10\\
8\\
6
\end{bmatrix}
$$
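
Each entry of $t$ is just the corresponding row sum of $N$, which checks out:

$$
t = N\mathbf{1} =
\begin{bmatrix}
4+4+2\\
2+4+2\\
2+2+2
\end{bmatrix}
=
\begin{bmatrix}
10\\
8\\
6
\end{bmatrix}.
$$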



Thus, following my code, twoN, sub, dot and the variance would be:



$$
\text{twoN}:
\begin{bmatrix}
8 & 8 & 4\\
4 & 8 & 4\\
4 & 4 & 4
\end{bmatrix}
$$

$$
\text{sub}:
\begin{bmatrix}
7 & 8 & 4\\
4 & 7 & 4\\
4 & 4 & 3
\end{bmatrix}
$$

$$
\text{dot}:
\begin{bmatrix}
158\\
120\\
90
\end{bmatrix}
$$

$$
\text{variance}:
\begin{bmatrix}
58\\
56\\
54
\end{bmatrix}
$$

$$
\text{std}:
\begin{bmatrix}
\sim 7.6\\
\sim 7.5\\
\sim 7.3
\end{bmatrix}
$$
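
For a self-contained check, a minimal standalone sketch like the following (not my actual class; everything is recomputed from the transition matrix with plain NumPy) reproduces these numbers:

import numpy as np

# Full transition matrix of the HTH example; the absorbing state is last.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
Q = P[:3, :3]                          # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix N = (I - Q)^-1
t = N @ np.ones(3)                     # expected steps t = N·1   -> [10, 8, 6]
var = (2 * N - np.eye(3)) @ t - t * t  # (2N - I)t - t∘t          -> [58, 56, 54]
std = np.sqrt(var)                     # -> [~7.62, ~7.48, ~7.35]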



However, these numbers seem way off to me, and I'm not sure whether I'm messing up the computation somewhere or whether the formula I'm using is wrong (or uses a notation I don't understand).



For reference, if I run the MC 10,000 times I get mean values for the number of steps to reach the absorbing state that are very close to the predicted ones, with the largest deviation I've seen being around 0.15 from the prediction for a given starting state.
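
To check the spread and not just the mean, a simulation sketch along these lines could be used (the matrix and the 10,000 runs mirror the example above; the helper name and seed are just for illustration):

import numpy as np

rng = np.random.default_rng(0)
# Same HTH transition matrix as above; state 3 is absorbing.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

def hitting_time(start):
    # Walk the chain until the absorbing state is reached, counting steps.
    state, steps = start, 0
    while state != 3:
        state = rng.choice(4, p=P[state])
        steps += 1
    return steps

runs = np.array([hitting_time(0) for _ in range(10_000)])
print(runs.mean())       # close to the predicted 10
print(runs.std(ddof=1))  # if the formula is right, close to sqrt(58) ≈ 7.6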



Any help would be greatly appreciated!



P.S.: I have an additional doubt that I'd rather not open another question for, so if anyone is willing to answer it here I'd be really grateful.



Can we say that an MC is absorbing if an absorbing state is reachable from at least one other state? Or does it need to be reachable from all other states (in one or more steps)?



My initial understanding was that being reachable from any one state would be enough, but today I was told that's not the case and that the definition is stronger (namely, an absorbing state must be reachable from all other states that are not absorbing themselves). Thinking about it, I came up with this matrix, which seems to show my first intuition was wrong, but I'd still like confirmation:



$$
\begin{bmatrix}
* & * & 0 & 0\\
* & * & 0 & 0\\
0 & 0 & 1/2 & 1/2\\
0 & 0 & 0 & 1
\end{bmatrix}
$$
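
To make the check concrete, here is a small sketch of the stronger test (the starred entries are filled in with 1/2 purely for illustration, and the helper name is made up): it asks whether every state can reach some absorbing state, which fails for this matrix.

import numpy as np

def is_absorbing_chain(P):
    # A chain is absorbing (stronger definition) if it has at least one
    # absorbing state and every state can reach some absorbing state.
    n = len(P)
    absorbing = [i for i in range(n) if P[i, i] == 1.0]
    can_reach = set(absorbing)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in can_reach and any(P[i, j] > 0 for j in can_reach):
                can_reach.add(i)
                changed = True
    return bool(absorbing) and len(can_reach) == n

# Starred block filled with 1/2: states 0 and 1 form a closed class and
# never reach the absorbing state 3, so the chain is not absorbing.
P_bad = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.5, 0.5],
                  [0.0, 0.0, 0.0, 1.0]])
print(is_absorbing_chain(P_bad))  # False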









