How to calculate probability when the odds change over time
Sorry for the awkward wording, and apologies if this has been asked before. I'm not familiar with the vocabulary, so I don't really know how to phrase the question or what to search for.



I can best explain what I'm looking for with an example.



Let's say you have a bag of marbles, 4 red and 1 blue. Whenever you pull a red marble, you replace it with a blue marble and return it to the bag, so the odds of pulling a blue marble on the next turn increase. Whenever you pull a blue marble, you return all of the original red marbles to the bag and remove all but 1 blue marble.



On turn 1 there's a 20% chance to pull the blue marble.
If you pulled a red on turn 1, then there's a 40% chance to pull a blue on turn 2; but if you pulled the blue on turn 1, then there's again a 20% chance on turn 2.



So at first you have a 20% chance; if you fail, you have a 40% chance; if you fail again, a 60% chance; if you fail again, an 80% chance; and if you fail that, you are guaranteed to get a blue on the 5th turn. Every time you pull a blue, it resets back to 20%.



I wrote a program to simulate 1000 turns in a row, ran it multiple times, and got results ranging from 375 to 420 blues per 1000 turns.
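For reference, the kind of simulation described above can be sketched in a few lines of Python (this is a reconstruction from the stated rules, not the asker's original program):

```python
import random

def simulate(turns, seed=0):
    """Count blue pulls: the blue chance starts at 20%, rises by 20
    points after every red, and resets to 20% after every blue."""
    rng = random.Random(seed)
    p_blue = 0.2
    blues = 0
    for _ in range(turns):
        if rng.random() < p_blue:
            blues += 1
            p_blue = 0.2   # blue pulled: bag resets to 4 red, 1 blue
        else:
            p_blue += 0.2  # red swapped for a blue: chance rises

    return blues

print(simulate(1_000_000) / 1_000_000)  # lands near 0.398
```

With a million turns instead of a thousand, the run-to-run spread shrinks well below the 37.5%-42% range quoted above.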



So I believe the answer is somewhere between 37.5% and 42%, but is there a formula that calculates how likely you are to pull a blue without knowing what happened on previous turns?


































  • Learn about Markov chains. You can treat the composition of your marble bag as the state, and specify transition probabilities between the possible state values. Then analyze the Markov chain using existing theorems.
    – jnez71
    Nov 12 at 22:24










  • You probably need to add your definition of "win" to the question.
    – Phil H
    Nov 12 at 23:11










  • OK, so I have this Markov chain [[0.2,0.8,0,0,0],[0.4,0,0.6,0,0],[0.6,0,0,0.4,0],[0.8,0,0,0,0.2],[1,0,0,0,0]], but I'm not sure which existing theorems you're talking about for figuring out how often it returns to the 1st state.
    – Nick
    Nov 15 at 17:07
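jnez71's suggestion can be carried out numerically on the matrix from the comment above: iterate a distribution through the transition matrix until it stops changing (power iteration). This sketch interprets state i as "the bag currently holds i blue marbles", so state 1 is the just-reset state:

```python
# Transition matrix from the comment: row i gives the move probabilities
# out of state i (bag holds i blue marbles).
P = [[0.2, 0.8, 0.0, 0.0, 0.0],
     [0.4, 0.0, 0.6, 0.0, 0.0],
     [0.6, 0.0, 0.0, 0.4, 0.0],
     [0.8, 0.0, 0.0, 0.0, 0.2],
     [1.0, 0.0, 0.0, 0.0, 0.0]]

pi = [1.0, 0.0, 0.0, 0.0, 0.0]  # any starting distribution works
for _ in range(1000):           # pi <- pi P until it converges
    pi = [sum(pi[i] * P[i][j] for i in range(5)) for j in range(5)]

# Every blue pull moves the chain into state 1 on the next turn, so the
# stationary weight of state 1 equals the per-turn blue probability.
print(round(pi[0], 4))  # 0.3983
```

The relevant theorem is that an irreducible, aperiodic finite Markov chain has a unique stationary distribution, and the long-run fraction of time spent in each state converges to it.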















probability probability-distributions conditional-probability






edited Nov 13 at 14:47

























asked Nov 12 at 22:10









Nick

1 Answer






























I found a somewhat similar question and was able to adapt its answer to my problem.



$$\sum\limits_{n=1}^{5}\bigg(n\cdot 0.2n\cdot\prod\limits_{j=0}^{n-1}(1-0.2j)\bigg)$$



This gives me ~39.8%, which lines up with my simulated results and is very close to the center of the range I thought it was in (39.75%, only off by ~0.05%!).



The sum computes the expected number of attempts per blue: each term is n times the probability that the first blue comes on attempt n.




  1. 1st attempt (1) * the probability of winning this attempt (0.2 * 1) * the probability of reaching this attempt (100%) = 1 * 0.2 * 1 * 1 = 0.2

  2. 2nd attempt (2) * the probability of winning this attempt (0.2 * 2) * the probability of reaching this attempt (100% * 80%) = 2 * 0.2 * 2 * 0.8 = 0.64

  3. 3rd attempt (3) * the probability of winning this attempt (0.2 * 3) * the probability of reaching this attempt (100% * 80% * 60%) = 3 * 0.2 * 3 * 0.48 = 0.864

  4. 4th attempt (4) * the probability of winning this attempt (0.2 * 4) * the probability of reaching this attempt (100% * 80% * 60% * 40%) = 4 * 0.2 * 4 * 0.192 = 0.6144

  5. 5th attempt (5) * the probability of winning this attempt (0.2 * 5) * the probability of reaching this attempt (100% * 80% * 60% * 40% * 20%) = 5 * 0.2 * 5 * 0.0384 = 0.192


Add them all up: 0.2 + 0.64 + 0.864 + 0.6144 + 0.192 = 2.5104 attempts per "win" on average, or 1 / 2.5104 ≈ 39.8% chance per attempt.
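The same arithmetic as a short script (a sketch of the calculation above, using the 0.2-per-fail increment from the question):

```python
expected_attempts = 0.0
reach = 1.0                          # probability of reaching attempt n
for n in range(1, 6):
    p_win = 0.2 * n                  # blue chance on attempt n
    expected_attempts += n * p_win * reach
    reach *= 1 - p_win               # must fail attempt n to reach n+1

print(expected_attempts)       # ≈ 2.5104 attempts per blue
print(1 / expected_attempts)   # ≈ 0.3983, i.e. ~39.8% per attempt
```

Note that the `reach` factors (1, 0.8, 0.48, 0.192, 0.0384) are exactly the products listed in items 1-5 above.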



























        answered Nov 15 at 19:50









        Nick
