Identifiability of Normal From Conditional Probability











Let $Z_x \sim \mathcal{N}(x,1)$, $D_1 = [0,c]$, and $D=[-c,c]$. Can we determine $x$ from
$$f(x) = \mathbb{P}(Z_x\in D_1 \mid Z_x\in D) = \frac{\Phi(c - x) - \Phi(-x)}{\Phi(c - x) - \Phi(-c-x)}?$$
In particular, can we validate the (numerically obvious) claim that $f$ is monotone, ranging from $0$ to $1$? Even $\lim_{x\to\infty}f(x) = 1$ doesn't seem obvious to me; L'Hospital's rule isn't illuminating there.
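For reference, the claim is easy to check numerically. Here is a minimal Python sketch (the helper `f` and the choice $c=1$ are just for illustration, using scipy's `norm.cdf`):

    import numpy as np
    from scipy.stats import norm

    def f(x, c=1.0):
        # P(Z_x in [0, c] | Z_x in [-c, c]) for Z_x ~ N(x, 1)
        num = norm.cdf(c - x) - norm.cdf(-x)
        den = norm.cdf(c - x) - norm.cdf(-c - x)
        return num / den

    xs = np.linspace(-6, 6, 1201)      # moderate range, to avoid cancellation deep in the tails
    vals = f(xs)
    print(np.all(np.diff(vals) > 0))   # expect True: f looks strictly increasing
    print(vals[0], vals[-1])           # close to 0 and close to 1, respectively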





A clear approach to this is to consider the derivative
$$
\begin{align*}
f'(x) &= \frac{f(x)\bigl(\phi(c-x)-\phi(-c-x)\bigr) - \bigl(\phi(c-x) - \phi(-x)\bigr)}{\mathbb{P}(Z_x\in D)}\\
&\propto f(x)\bigl(\phi(c-x)-\phi(-c-x)\bigr) - \bigl(\phi(c-x) - \phi(-x)\bigr),
\end{align*}
$$

and show that $f'>0$ uniformly, but I can't seem to bound this either. An answer to either question would be extremely helpful, though injectivity of $f$ is more important for my application. If you could come up with a version of this that works for higher-dimensional Gaussians (where the $D_i$ are orthants/quadrants of spheres), that would be perfect.
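As a numerical sanity check of this expression (not a proof), one can compare it with a central finite difference of $f$ and confirm it is positive on a grid; a rough Python sketch, again with $c=1$ and illustrative helper names:

    import numpy as np
    from scipy.stats import norm

    c = 1.0

    def f(x):
        return (norm.cdf(c - x) - norm.cdf(-x)) / (norm.cdf(c - x) - norm.cdf(-c - x))

    def fprime(x):
        # the displayed formula, with phi = standard normal pdf and P(Z_x in D) in the denominator
        den = norm.cdf(c - x) - norm.cdf(-c - x)
        return (f(x) * (norm.pdf(c - x) - norm.pdf(-c - x))
                - (norm.pdf(c - x) - norm.pdf(-x))) / den

    xs = np.linspace(-5, 5, 1001)
    h = 1e-5
    fd = (f(xs + h) - f(xs - h)) / (2 * h)    # central finite difference
    print(np.max(np.abs(fd - fprime(xs))))    # small: limited only by finite-difference error
    print(np.all(fprime(xs) > 0))             # expect True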










real-analysis probability monotone-functions upper-lower-bounds






asked Aug 6 at 5:56 by cdipaolo; edited Nov 7 at 20:02
Note that this depends on the Gaussianity of $X$. If $X$ is exponential then these ratios are fixed. – cdipaolo, Nov 7 at 20:08














1 Answer
Accepted answer (+200):
First of all, assume $c=1$ (the same argument works verbatim for general $c>0$) and $x>0$.

Now call
$$I_1(x) \equiv \sqrt{2\pi}\,\mathbb{P}(Z_x\in D_1)=\int_0^1 e^{-\frac{(t-x)^2}{2}}\,dt$$
and
$$I(x) \equiv \sqrt{2\pi}\,\mathbb{P}(Z_x\in D)=\int_{-1}^1 e^{-\frac{(t-x)^2}{2}}\,dt.$$

Then, since $(t-x)^2\ge x^2$ on $[-1,0]$,
$$I(x)-I_1(x) = \int_{-1}^0 e^{-\frac{(t-x)^2}{2}}\,dt \leq e^{-x^2/2}.$$

Now choose an arbitrary $0<\epsilon<1$. For $x\ge 1$ we have $|t-x|\le x-\epsilon$ on $[\epsilon,1]$, so
$$I_1(x) \geq \int_{\epsilon}^1 e^{-\frac{(t-x)^2}{2}}\,dt \geq (1-\epsilon)e^{-(x-\epsilon)^2/2}.$$

But $e^{-x^2/2}\big/e^{-(x-\epsilon)^2/2} = e^{-\epsilon x+\epsilon^2/2}\to 0$, i.e. $e^{-x^2/2} = o_{x\to+\infty}\bigl(e^{-(x-\epsilon)^2/2}\bigr)$, so $I(x) - I_1(x) = o\bigl(I_1(x)\bigr)$; in other words,
$$\lim_{x\to+\infty} f(x) = 1.$$

By a similar argument, $\lim_{x\to-\infty}f(x)=0$.



Next, for monotonicity:
$$f' = \frac{I_1'I-I_1I'}{I^2},$$
so we want to show that $\frac{I'_1}{I_1}\geq \frac{I'}{I}$. First write
$$I'(x) = \int_{-1}^1(t-x)e^{-\frac{(t-x)^2}{2}}\,dt = -xI(x) + \int_{-1}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt$$
and
$$I_1'(x) = \int_{0}^1(t-x)e^{-\frac{(t-x)^2}{2}}\,dt = -xI_1(x) + \int_{0}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt.$$
Therefore
$$\frac{I_1'}{I_1} - \frac{I'}{I} = \frac{1}{I_1}\int_{0}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt - \frac{1}{I}\int_{-1}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt \geq 0.$$
The inequality holds because $\int_{0}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt > 0$, $I_1<I$, and $\int_{0}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt > \int_{-1}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt$ (the integrand is negative on $[-1,0]$), so
$$\frac{1}{I_1}\int_{0}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt \;\ge\; \frac{1}{I}\int_{0}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt \;\ge\; \frac{1}{I}\int_{-1}^1 t\,e^{-\frac{(t-x)^2}{2}}\,dt.$$
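For what it's worth, this key inequality is also easy to confirm numerically with $c=1$; a small sketch using `scipy.integrate.quad` (the names `g` and `ratio_gap` are just for illustration):

    import numpy as np
    from scipy.integrate import quad

    def g(t, x):
        # unnormalized N(x, 1) density
        return np.exp(-(t - x) ** 2 / 2)

    def ratio_gap(x):
        # I_1'/I_1 - I'/I, rewritten via the moment integrals above (the -x terms cancel)
        I1 = quad(lambda t: g(t, x), 0, 1)[0]
        I  = quad(lambda t: g(t, x), -1, 1)[0]
        J1 = quad(lambda t: t * g(t, x), 0, 1)[0]
        J  = quad(lambda t: t * g(t, x), -1, 1)[0]
        return J1 / I1 - J / I

    print(min(ratio_gap(x) for x in np.linspace(-6, 6, 121)))   # expect a strictly positive minimum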



So in 1D, $f$ is a monotone bijection from $\mathbb{R}$ onto $(0,1)$.



It is pretty clear that this line of argument extends to higher dimensions, essentially by applying similar explicit bounds on the boundary of the orthant domain in each direction and evaluating the gradient of $f(x)$ explicitly.
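For what it's worth, a 2D analogue (quarter of a disc versus the full disc, one reading of the orthant/sphere setup in the question, with identity covariance) also looks monotone along a ray numerically; a rough Monte Carlo sketch, where `f2` and the diagonal ray are just illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    c = 1.0

    def f2(mean, n=400_000):
        # P(Z in the first-quadrant part of the disc | Z in the disc), Z ~ N(mean, I_2)
        z = rng.normal(loc=mean, scale=1.0, size=(n, 2))
        in_disc = (z ** 2).sum(axis=1) <= c ** 2
        in_quad = in_disc & (z[:, 0] >= 0) & (z[:, 1] >= 0)
        return in_quad.sum() / in_disc.sum()

    # slide the mean along the diagonal and watch the conditional probability rise from ~0 toward 1
    for t in [-2.0, -1.0, 0.0, 1.0, 2.0]:
        print(t, f2(np.array([t, t])))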






answered by Ezy (new contributor)
Awesome this is extremely helpful. Thank you! – cdipaolo, yesterday










