How to calculate probability when the odds change over time
Sorry for the awkward wording, or for asking a question that may have been answered before; I'm not familiar with the vocabulary, so I don't really know how to phrase the question or what to search for.
I can best explain what I'm looking for with an example.
Let's say you have a bag of 5 marbles, 4 red and 1 blue. Whenever you pull a red marble, you replace it with a blue marble and return it to the bag, so the odds of pulling a blue marble on the next turn increase. Whenever you pull a blue marble, you return all of the original red marbles to the bag and remove all but 1 blue marble.
On turn 1 there's a 20% chance to pull the blue marble.
If you pulled a red on turn 1, then there is a 40% chance to pull a blue on turn 2; but if you did pull the blue on turn 1, then there's a 20% chance to pull the blue on turn 2.
So at first you have a 20% chance; if you fail, you have a 40% chance; if you fail again, a 60% chance; if you fail again, an 80% chance; and if you fail that, you are guaranteed to get a blue on the 5th turn. Every time you pull a blue, it resets back to 20%.
I wrote a program to simulate 1000 turns in a row and ran it multiple times; I get results ranging from 375 to 420 blues per 1000 turns.
So I believe the answer is somewhere between 37.5% and 42%, but is there some sort of formula that can be used to calculate how likely you are to pull a blue without knowing what happened on previous turns?
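The asker's simulation program isn't shown; a minimal sketch of such a simulation (Python assumed) could look like this:

```python
import random

def simulate(turns=1000, seed=None):
    """Simulate drawing from the bag: the bag holds 5 marbles, and the
    number of blues grows by 1 on each red draw, resetting to 1 on a blue."""
    rng = random.Random(seed)
    blues_in_bag = 1
    blues_drawn = 0
    for _ in range(turns):
        if rng.random() < blues_in_bag / 5:  # drew a blue
            blues_drawn += 1
            blues_in_bag = 1                 # reset the bag
        else:                                # drew a red: swap in a blue
            blues_in_bag += 1
    return blues_drawn

print(simulate(1000))  # typically lands near 398, consistent with 375-420
```

Over many turns the blue frequency settles around the long-run rate the question is asking for.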
probability probability-distributions conditional-probability
Learn about Markov chains. You can treat the composition of your marble bag as the state, and specify transition probabilities between the possible state values. Then analyze the Markov chain using existing theorems.
– jnez71
Nov 12 at 22:24
You probably need to add your definition of "win" to the question.
– Phil H
Nov 12 at 23:11
Ok, so I have this Markov chain:
[[0.2,0.8,0,0,0],[0.4,0,0.6,0,0],[0.6,0,0,0.4,0],[0.8,0,0,0,0.2],[1,0,0,0,0]]
but I'm not sure what existing theorems you're talking about to figure out how often it goes to the 1st state.
– Nick
Nov 15 at 17:07
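One standard result the comment alludes to is the stationary distribution: solving πP = π gives the long-run fraction of time spent in each state, and since every blue draw is exactly a transition into state 1, π₁ is the long-run blue frequency. A sketch using the matrix from the comment (NumPy assumed):

```python
import numpy as np

# Transition matrix from the comment: state i = i blue marbles in the bag.
P = np.array([
    [0.2, 0.8, 0.0, 0.0, 0.0],
    [0.4, 0.0, 0.6, 0.0, 0.0],
    [0.6, 0.0, 0.0, 0.4, 0.0],
    [0.8, 0.0, 0.0, 0.0, 0.2],
    [1.0, 0.0, 0.0, 0.0, 0.0],
])

# Solve pi @ P = pi together with sum(pi) = 1 as a least-squares system.
A = np.vstack([P.T - np.eye(5), np.ones(5)])
b = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Every blue draw is a transition into state 1, so pi[0] is the blue rate.
print(round(pi[0], 4))  # ~0.3983
```

This agrees with the simulation range (37.5%-42%) and with the closed-form answer below.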
edited Nov 13 at 14:47
asked Nov 12 at 22:10
Nick
1 Answer
I found a similar question and was able to adapt its answer to my problem.
$$\sum\limits_{n=1}^{5}\bigg(n\cdot0.2n\cdot\prod\limits_{j=0}^{n-1}(1-0.2j)\bigg)$$
This works out to ~39.8%, which lines up with my simulated result, and is very close to the center of the range I thought it was in (39.75%, off by only 0.05%!).
The sum weights each attempt by its number, its win probability, and the probability of reaching it:
- 1st attempt (1) * the probability of winning this attempt (0.2 * 1) * the probability of being on this attempt (100%) = 1 * 0.2 * 1 * 1 = 0.2
- 2nd attempt (2) * the probability of winning this attempt (0.2 * 2) * the probability of being on this attempt (100% * 80%) = 2 * 0.2 * 2 * 0.8 = 0.64
- 3rd attempt (3) * the probability of winning this attempt (0.2 * 3) * the probability of being on this attempt (100% * 80% * 60%) = 3 * 0.2 * 3 * 0.48 = 0.864
- 4th attempt (4) * the probability of winning this attempt (0.2 * 4) * the probability of being on this attempt (100% * 80% * 60% * 40%) = 4 * 0.2 * 4 * 0.192 = 0.6144
- 5th attempt (5) * the probability of winning this attempt (0.2 * 5) * the probability of being on this attempt (100% * 80% * 60% * 40% * 20%) = 5 * 0.2 * 5 * 0.0384 = 0.192
Adding these up: 0.2 + 0.64 + 0.864 + 0.6144 + 0.192 = 2.5104 attempts per "win", so 1 / 2.5104 ≈ 39.8% chance per attempt.
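The arithmetic above can be checked numerically with a short sketch (Python assumed):

```python
# Expected number of attempts per "win": sum over attempts n = 1..5 of
#   n * P(win on attempt n) = n * (0.2 n) * prod_{j=0}^{n-1} (1 - 0.2 j)
expected_attempts = 0.0
for n in range(1, 6):
    p_reach = 1.0
    for j in range(n):          # prod_{j=0}^{n-1} (1 - 0.2 j)
        p_reach *= 1 - 0.2 * j
    expected_attempts += n * (0.2 * n) * p_reach

print(round(expected_attempts, 4))      # 2.5104 attempts per win
print(round(1 / expected_attempts, 4))  # ~0.3983 chance per attempt
```

The reciprocal of the expected cycle length gives the per-turn blue probability, matching the ~39.8% above.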
answered Nov 15 at 19:50
Nick