Resolving this summation using the given pmf
So I'm trying to derive an expected value (related to a Bayesian risk/loss function), and I've worked out everything except one final piece. To finish it, I need to derive one of the following expected values (either will work).
Define the probability mass function
$$p_N(n) = \binom{n-1}{x-1}\, \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \frac{\Gamma(a+x)\,\Gamma(b+n-x)}{\Gamma(a+b+n)}$$
for $n = x, x+1, x+2, \dots$, and also define the conditional pmf
$$p_N(n \mid p) = \binom{n-1}{x-1}\, p^x (1-p)^{n-x}$$
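(For context: $p_N(n)$ above is what results from averaging $p_N(n \mid p)$ over a $\mathrm{Beta}(a,b)$ prior on $p$, since by the beta integral
$$\int_0^1 \binom{n-1}{x-1}\, p^x (1-p)^{n-x}\, \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, p^{a-1}(1-p)^{b-1}\, dp = \binom{n-1}{x-1}\, \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\, \frac{\Gamma(a+x)\,\Gamma(b+n-x)}{\Gamma(a+b+n)}.)$$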
To complete the final step I need either one of:
$$E_N \left[ \left(\frac{x-1}{N-1} \right)^2 \right]$$
or
$$E_{N} \left[ \left(\frac{x-1}{N-1} \right)^2 \,\bigg|\, p \right]$$
In previous questions, I've derived the necessary expected values by absorbing the extra terms into the probability function to construct a new distribution, and then reading off the expected value from the constant needed to make it sum/integrate to $1$. But here I'm stuck: because the ratio is squared, I'm left with a fraction that can't be absorbed into the binomial coefficient.
Does anyone see a way forward?
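(In case it helps for checking a candidate closed form, here is a minimal numerical sketch, not a derivation, that simply truncates both sums; the values of $x$, $p$, $a$, $b$ below are arbitrary placeholders.)

import numpy as np
from scipy.special import gammaln

def log_comb(n, k):
    # log of the binomial coefficient C(n, k)
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def cond_moment(x, p, n_max=100_000):
    # E[((x-1)/(N-1))^2 | p], summing p_N(n | p) = C(n-1, x-1) p^x (1-p)^(n-x)
    # over n = x, ..., n_max (truncation of the infinite sum)
    n = np.arange(x, n_max + 1)
    log_pmf = log_comb(n - 1, x - 1) + x * np.log(p) + (n - x) * np.log1p(-p)
    return np.sum(((x - 1) / (n - 1)) ** 2 * np.exp(log_pmf))

def marginal_moment(x, a, b, n_max=100_000):
    # E[((x-1)/(N-1))^2], summing the unconditional pmf p_N(n), written with
    # log-gamma terms for numerical stability
    n = np.arange(x, n_max + 1)
    log_pmf = (log_comb(n - 1, x - 1)
               + gammaln(a + b) - gammaln(a) - gammaln(b)
               + gammaln(a + x) + gammaln(b + n - x) - gammaln(a + b + n))
    return np.sum(((x - 1) / (n - 1)) ** 2 * np.exp(log_pmf))

print(cond_moment(x=5, p=0.3))        # compare against a proposed closed form in p
print(marginal_moment(x=5, a=2, b=3)) # compare against a proposed closed form in a, b

(The conditional summand decays geometrically and the marginal one polynomially, so for moderate $a$ the truncation error at $n_{\max} = 10^5$ should be negligible.)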
combinatorics summation expected-value
asked 10 mins ago by Xiaomi