Identifiability of Normal From Conditional Probability
Let $Z_x \sim \mathcal{N}(x,1)$, $D_1 = [0,c]$, and $D=[-c,c]$. Can we determine $x$ from
$$f(x) = \mathbb{P}(Z_x\in D_1 \mid Z_x\in D) = \frac{\Phi(c - x) - \Phi(-x)}{\Phi(c - x) - \Phi(-c-x)}?$$
In particular, can we validate the (numerically obvious) claim that $f$ is monotone, ranging from $0$ to $1$? Even $\lim_{x\to\infty}f(x) = 1$ doesn't seem obvious to me; L'Hospital's rule isn't illuminating there.
A clear approach to this is to consider the derivative
$$
\begin{align*}
f'(x) &= \frac{f(x)\bigl(\phi(c-x)-\phi(-c-x)\bigr) - \bigl(\phi(c-x) - \phi(-x)\bigr)}{\mathbb{P}(Z_x\in D)}\\
&\propto f(x)\bigl(\phi(c-x)-\phi(-c-x)\bigr) - \bigl(\phi(c-x) - \phi(-x)\bigr),
\end{align*}
$$
and show that $f'>0$ uniformly, but I can't seem to bound this either. Answers to either question would be extremely helpful, but injectivity of $f$ is more important for my application. A version of this that works for higher-dimensional Gaussians (where the $D_i$ are orthants or quadrants of spheres) would be perfect.
real-analysis probability monotone-functions upper-lower-bounds
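As a quick numerical sanity check of the "numerically obvious" claim (my own sketch, not part of the question; the helper names `Phi` and `f` are mine), one can evaluate $f$ on a grid with $c=1$ and verify strict monotonicity and the approach to the limits $0$ and $1$:

```python
# Numerical sanity check that f is strictly increasing from 0 to 1 (c = 1).
# Illustration only; helper names are mine, not from the question.
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def f(x, c=1.0):
    """Conditional probability P(Z_x in [0,c] | Z_x in [-c,c])."""
    num = Phi(c - x) - Phi(-x)
    den = Phi(c - x) - Phi(-c - x)
    return num / den

xs = [i / 10.0 for i in range(-50, 51)]   # grid on [-5, 5]
vals = [f(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing on the grid
assert f(-5.0) < 0.02 and f(5.0) > 0.98            # tails approach 0 and 1
```

This only checks the claim on a bounded grid, of course; for $|x|$ much larger than $5$ the denominator underflows in double precision, so a symbolic argument is still needed.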
Note that this depends on the Gaussianity of $X$. If $X$ is exponential then these ratios are fixed.
– cdipaolo
Nov 7 at 20:08
asked Aug 6 at 5:56 by cdipaolo, edited Nov 7 at 20:02
1 Answer
First, assume $c=1$ and $x>0$, and define
$$I_1(x) \equiv \sqrt{2\pi}\,\mathbb{P}(Z_x\in D_1)=\int_0^1 e^{-\frac{(t-x)^2}{2}}\,dt$$
and
$$I(x) \equiv \sqrt{2\pi}\,\mathbb{P}(Z_x\in D)=\int_{-1}^1 e^{-\frac{(t-x)^2}{2}}\,dt,$$
so that $f = I_1/I$. Then
$$I(x)-I_1(x) = \int_{-1}^0 e^{-\frac{(t-x)^2}{2}}\,dt \leq e^{-x^2/2},$$
since the integrand is at most $e^{-x^2/2}$ on $[-1,0]$ when $x>0$. Now choose an arbitrary $0<\epsilon<1$. For $x\geq 1$,
$$I_1(x) \geq \int_{\epsilon}^1 e^{-\frac{(t-x)^2}{2}}\,dt \geq (1-\epsilon)e^{-(x-\epsilon)^2/2},$$
and since $e^{-x^2/2} = o\bigl(e^{-(x-\epsilon)^2/2}\bigr)$ as $x\to+\infty$, it follows that $I(x) - I_1(x) = o(I_1(x))$; in other words,
$$\lim_{x\to+\infty} f(x) = 1.$$
By a symmetric argument, $\lim_{x\to-\infty}f(x)=0$.
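The two bounds above are easy to confirm numerically (my sketch, not the answerer's; the `simpson` helper is mine), using plain composite Simpson quadrature with $c=1$:

```python
# Numerical confirmation of the bounds used in the limit argument (c = 1, x > 0):
#   I(x) - I_1(x) = int_{-1}^{0} exp(-(t-x)^2/2) dt  <=  exp(-x^2/2)
#   I_1(x) >= (1 - eps) * exp(-(x-eps)^2/2)  for 0 < eps < 1 and x >= 1
# Illustration only; the simpson helper is mine.
from math import exp

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3.0

eps = 0.5
for x in [1.0, 2.0, 4.0, 8.0]:
    w = lambda t: exp(-(t - x) ** 2 / 2)          # unnormalized Gaussian weight
    assert simpson(w, -1, 0) <= exp(-x ** 2 / 2)                     # upper bound
    assert simpson(w, 0, 1) >= (1 - eps) * exp(-(x - eps) ** 2 / 2)  # lower bound
```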
Next, for monotonicity: since $f = I_1/I$,
$$f' = \frac{I_1'I-I_1I'}{I^2},$$
so it suffices to show that $\frac{I'_1}{I_1}\geq \frac{I'}{I}$. Differentiating under the integral sign,
$$I'(x) = \int_{-1}^1(t-x)e^{-\frac{(t-x)^2}{2}}\,dt = -xI(x) + \int_{-1}^1 t e^{-\frac{(t-x)^2}{2}}\,dt$$
and
$$I_1'(x) = \int_{0}^1(t-x)e^{-\frac{(t-x)^2}{2}}\,dt = -xI_1(x) + \int_{0}^1 t e^{-\frac{(t-x)^2}{2}}\,dt,$$
therefore
$$\frac{I_1'}{I_1} - \frac{I'}{I} = \frac{1}{I_1}\int_{0}^1 t e^{-\frac{(t-x)^2}{2}}\,dt - \frac{1}{I}\int_{-1}^1 t e^{-\frac{(t-x)^2}{2}}\,dt \geq 0.$$
The inequality holds because $0 < I_1 < I$, the first integral is positive, and $\int_{0}^1 t e^{-\frac{(t-x)^2}{2}}\,dt > \int_{-1}^1 t e^{-\frac{(t-x)^2}{2}}\,dt$ (the integrand is negative on $[-1,0]$).
So in one dimension the function is a monotonic bijection from $\mathbb{R}$ to $(0,1)$.
This line of argument should extend to higher dimensions by applying similar explicit bounds on the boundary of the orthant domain in each direction and evaluating the gradient of $f(x)$ explicitly.
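The key inequality $\frac{I_1'}{I_1} \geq \frac{I'}{I}$ can also be spot-checked numerically via the identities derived above, i.e. by comparing $J_1/I_1$ with $J/I$ where $J_1 = \int_0^1 t\,e^{-(t-x)^2/2}\,dt$ and $J = \int_{-1}^1 t\,e^{-(t-x)^2/2}\,dt$ (my sketch; the `simpson` helper and the names `J`, `J1` are mine):

```python
# Numerical check of the log-derivative inequality I_1'/I_1 >= I'/I with c = 1.
# Using I' = -x*I + J and I_1' = -x*I_1 + J_1, the gap reduces to J1/I1 - J/I.
# Illustration only; helper names are mine.
from math import exp

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3.0

for x in [i / 4.0 for i in range(-20, 21)]:     # grid on [-5, 5]
    w = lambda t: exp(-(t - x) ** 2 / 2)        # unnormalized Gaussian weight
    I1 = simpson(w, 0, 1)
    I = simpson(w, -1, 1)
    J1 = simpson(lambda t: t * w(t), 0, 1)
    J = simpson(lambda t: t * w(t), -1, 1)
    # The gap is the difference of truncated conditional means; it should be > 0.
    assert J1 / I1 - J / I > 0
```

Interpreting the gap as $\mathbb{E}[t \mid t\in[0,1]] - \mathbb{E}[t \mid t\in[-1,1]]$ under the Gaussian weight makes the positivity intuitive: conditioning on the right half-interval can only raise the mean.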
Awesome this is extremely helpful. Thank you!
– cdipaolo
yesterday
answered 2 days ago by Ezy (new contributor), edited 2 days ago