Stochastic differential equations with null mean and unit variance
I have the following:
$$ \dot{x} = \frac{dx}{dt} = A(x) + \sqrt{B(x)}\,\eta(t) $$
where $A(x) = a_0 - a_1 x$ and $B(x) = b_0 - b_1 x + b_2 x^2$, with all $a_k, b_k \geq 0$. Here $\eta(t)$ is Gaussian noise with zero mean and unit variance.
Defining $G(\tau) = \langle x(t)\, x(t+\tau)\rangle$ and supposing $a_0 = 0$, we have to prove that
$$ G(\tau) = G(0)\, e^{a_1\tau} . $$
I tried this:
1) Considering $\tau$ small enough to allow the use of the approximation $x(t+\tau) = x(t) + \frac{1}{2}\tau\,\dot{x}(t)$, I do:
\begin{align*}
G(\tau) &= \langle x(t)\, x(t+\tau)\rangle \\
&= \int x(t)\left[ x(t) + \frac{1}{2}\tau\,\dot{x}(t) \right]\rho(x)\,dx \\
&= \int x(t)^2\,\rho(x)\,dx + \frac{1}{2}\tau\int x(t)\,\dot{x}(t)\,\rho(x)\,dx \\
&= \int x(t)^2\,\rho(x)\,dx + \frac{1}{4}\tau\int \frac{dx^2}{dt}\,\rho(x)\,dx \\
&= \langle x(t)^2\rangle + \frac{\tau}{4}\left\langle \frac{dx^2}{dt}\right\rangle .
\end{align*}
Since the system is in thermodynamic equilibrium, $\frac{d\rho}{dt} = 0$, and then
$$ G(\tau) = \langle x^2\rangle + \frac{\tau}{4}\frac{d}{dt}\langle x^2\rangle . $$
I don't see how this result can help me complete the proof. The $a_0 = 0$ hypothesis was never needed along the way, which makes me think this route won't work. The only thing I can see from here is something like
$$ G(\tau) = \langle x^2\rangle\left( 1 + \frac{\tau}{4}\frac{d}{dt}\right) \;\Rightarrow\; G(\tau') = \langle x^2\rangle\, e^{\tau'/4} = G(0)\, e^{\tau'/4} \neq G(0)\, e^{a_1\tau} , $$
where $\tau$ is small and $\tau'$ is arbitrary.
2) Using the same approximation as in 1), I substitute the equation for $\dot{x}$:
\begin{align*}
G(\tau) &= \langle x(t)^2\rangle + \frac{1}{2}\tau\int x(t)\,\dot{x}(t)\,\rho(x)\,dx \\
&= \langle x^2\rangle + \frac{\tau}{2}\int x(t)\left[ A(x) + \sqrt{B(x)}\,\eta(t)\right]\rho(x)\,dx \\
&= \langle x^2\rangle\left( 1 - \tau\frac{a_1}{2}\right) + \frac{\tau}{2}\,\eta(t)\int x(t)\sqrt{B(x)}\,\rho(x)\,dx .
\end{align*}
I stopped here because the coefficients of $B$ do not appear in the expression I want to obtain. If I neglect the last integral by letting $\eta \to 0$, I have something like
$$ G(\tau) = \langle x^2\rangle\left( 1 - \tau\frac{a_1}{2}\right)^{1} \approx \langle x^2\rangle\left( 1 - \tau\frac{a_1}{2}\right)^{1+\tau} = \langle x^2\rangle\left( 1 - \frac{1}{n}\frac{a_1}{2}\right)^{1+\frac{1}{n}} . $$
The last step was based on the Archimedean property of the reals, with $n$ a natural number. I can almost see the limit $n \to \infty$ giving $G(\tau) = \langle x^2\rangle\, e^{-a_1/2} = G(0)\, e^{-a_1/2} \neq G(0)\, e^{a_1\tau}$, which is what I want.
This problem comes from a Statistical Mechanics course in a physics Master's program. Since I assume $\tau$ very small to make these approximations, I think $G(\tau)$ plays a role something like the infinitesimal generator of something in the system.
I would appreciate some guidance to solve this, ideally with mathematical rigor, explaining why each step can (or cannot) be taken.
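For concreteness, here is a minimal Euler–Maruyama sketch of the equation above, just to fix how I read it numerically. It assumes the usual white-noise normalization $\langle\eta(t)\eta(t')\rangle = \delta(t-t')$ (the problem only states zero mean and unit variance, so this is an assumption), and the parameter values are purely illustrative, chosen so that $B(x) > 0$:

```python
import numpy as np

# Illustrative parameters with a_0 = 0, chosen so that B(x) = b0 - b1*x + b2*x^2 > 0 for all x.
a1 = 1.0
b0, b1, b2 = 1.0, 0.0, 0.5

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 100_000

x = np.empty(n_steps)
x[0] = 0.0
for n in range(n_steps - 1):
    drift = -a1 * x[n]                                    # A(x) = a_0 - a_1 x with a_0 = 0
    noise_amp = np.sqrt(b0 - b1 * x[n] + b2 * x[n] ** 2)  # sqrt(B(x))
    # White noise integrated over a step of length dt has variance dt,
    # hence the sqrt(dt) factor multiplying a unit-variance Gaussian draw.
    x[n + 1] = x[n] + drift * dt + noise_amp * np.sqrt(dt) * rng.standard_normal()
```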
statistical-mechanics stochastic-processes
migrated from physics.stackexchange.com Dec 18 '18 at 0:13
asked Dec 8 '18 at 15:21 by Enrique René
1 Answer
First of all, there are a couple of errors in your computations. For example, the averages you are taking are over time, so you should use $\rho(t)\,dt$, not $\rho(x)\,dx$! Also, the Taylor approximation should be
$$ x(t+\tau) \sim x(t) + \tau\,\dot{x} . $$
Moreover, approximating $G(\tau)$ for small $\tau$ only gives you a hint of what happens at small $\tau$; you would not be able to recover the full $G(\tau)$. Indeed, following your computations, slightly corrected, and using $\left\langle * \right\rangle$ for the average of $*$ (i.e. $\left\langle * \right\rangle = \int_t *\,\rho(t)\,dt$):
$$ G(\tau) \sim \left\langle x(t)\,\bigl(x(t) + \tau\dot{x}\bigr)\right\rangle = \left\langle x^2(t) + \tau\, x(t)\dot{x}\right\rangle = \left\langle x^2(t)\right\rangle + \tau\left\langle x(t)\dot{x}\right\rangle $$
Now, using the expression you have for $\dot{x}$ and the fact that $\left\langle x^2(t)\right\rangle = G(0)$:
$$ G(\tau) \sim G(0) + \tau\left\langle x\left(-a_1 x + \sqrt{B(x)}\,\eta(t)\right)\right\rangle $$
i.e.
$$ G(\tau) \sim G(0) - \tau a_1\left\langle x^2\right\rangle + \tau\left\langle x\sqrt{B(x)}\,\eta(t)\right\rangle $$
Now we make the assumption that $\eta(t)$ is not correlated with the $x$-terms (notice that this is the only step where I actually have to assume something; I think it is right, or that some similar assumption applies, but do think about it), i.e. that we can write
$$ G(\tau) \sim G(0) - a_1\tau\left\langle x^2\right\rangle + \tau\left\langle x\sqrt{B(x)}\right\rangle\left\langle \eta(t)\right\rangle $$
and now, because $\left\langle \eta(t)\right\rangle = 0$ and again $\left\langle x^2(t)\right\rangle = G(0)$, we get
$$ G(\tau) \sim G(0)\,(1 - a_1\tau) $$
which is the small-$\tau$ expansion of the solution you need:
$$ G(\tau) = G(0)\,e^{-a_1\tau} \sim G(0)\,(1 - a_1\tau) $$
(I get a minus sign which you don't have; I think the minus sign is right, as otherwise the correlation would grow over time, which is weird... which of us made the mistake?)
Anyway, this procedure could have given you a hint, and a small-$\tau$ proof of the result, but not the full solution.
What if, instead, we try to compute
$$ \frac{dG(\tau)}{d\tau} = \left\langle x(t)\,\frac{dx(t+\tau)}{d\tau}\right\rangle $$
(where I only differentiate the second factor because the first one has no $\tau$ dependence)? Since $\frac{dx(t+\tau)}{d\tau} = \dot{x}\big|_{t+\tau}$:
$$ \frac{dG(\tau)}{d\tau} = \left\langle x(t)\left(-a_1 x(t+\tau) + \sqrt{B(x)}\,\eta(t+\tau)\right)\right\rangle $$
For the exact same reasons as before, $\left\langle \eta(t+\tau)\right\rangle = 0$, and we are left with
$$ \frac{dG(\tau)}{d\tau} = -a_1\left\langle x(t)\,x(t+\tau)\right\rangle = -a_1 G(\tau) $$
so that, solving the easy differential equation $\dot{y} = -Ay \Rightarrow y(t) = y(0)\,e^{-At}$, our solution is
$$ G(\tau) = G(0)\,e^{-a_1\tau} $$
(again with a minus sign which I trust - but I am open to discussion!)
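As a quick numerical sanity check on both the decay rate and the sign, one can simulate the equation with Euler–Maruyama (which assumes the Gaussian noise is white and interpreted in the Itô sense), estimate $G(\tau)$ by a time average over a long stationary run, and compare it with $G(0)e^{-a_1\tau}$. A minimal sketch, with purely illustrative parameters chosen so that $B(x) > 0$:

```python
import numpy as np

# Illustrative parameters with a_0 = 0, chosen so that B(x) = b0 - b1*x + b2*x^2 > 0 for all x.
a1, b0, b1, b2 = 1.0, 1.0, 0.0, 0.5
dt, n_steps, burn_in = 2e-3, 2_000_000, 50_000

rng = np.random.default_rng(1)
x = np.empty(n_steps)
x[0] = 0.0
for n in range(n_steps - 1):
    drift = -a1 * x[n]                                    # A(x) with a_0 = 0
    noise_amp = np.sqrt(b0 - b1 * x[n] + b2 * x[n] ** 2)  # sqrt(B(x))
    x[n + 1] = x[n] + drift * dt + noise_amp * np.sqrt(dt) * rng.standard_normal()

x = x[burn_in:]                    # drop the transient, keep the (near-)stationary part

G0 = np.mean(x * x)                # G(0) = <x^2> in the stationary state
for lag in range(0, 1001, 200):    # tau = lag * dt, up to tau = 2
    tau = lag * dt
    G_tau = np.mean(x[: len(x) - lag] * x[lag:])
    print(f"tau = {tau:4.1f}   G(tau) = {G_tau:7.4f}   G(0) e^(-a1 tau) = {G0 * np.exp(-a1 * tau):7.4f}")
```

The printed $G(\tau)$ should track $G(0)e^{-a_1\tau}$ (with the minus sign) up to statistical noise, which shrinks as the run is made longer.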
I hope this helps, not only in solving the problem but also in pointing out some of your mistakes and flawed (but still nontrivial!) reasoning.
answered Dec 8 '18 at 16:19 by JalfredP
In fact your proof (with the $-a_1$ sign) is right; probably a typo by whoever built the exercise. This also helped me clarify some points, since you pointed out my mistakes. Thanks a lot!
– Enrique René, Dec 8 '18 at 22:30