Derivation of the weak form for a parabolic PDE - initial-boundary problem
I am reading a paper that seems to provide a solution to the problem I am facing, but being unfamiliar with variational calculus I get lost in the notation.
I am trying to derive the weak form from the strong form in the following problem.
Solving for $u(t,x)$ for $(t,x) \in [0,T] \times \mathbb{R}^d$.
The set $A \subset \mathbb{R}^d$ is open with boundary $\partial A$.
The strong form is as follows:
$$ \frac{\partial u}{\partial t}(t,x) - \frac{1}{2} \sum_i \sum_j a_{ij}(x) \frac{\partial^2 u(t,x)}{\partial x_i \partial x_j} - \sum_i b_i(x) \frac{\partial u(t,x)}{\partial x_i} = 0 \quad \text{on } (t,x) \in [0,T] \times A, $$
$$ u(0,x) = 1, \quad x \in A, $$
$$ u(t,x) = 0, \quad x \in \partial A,\; t > 0. $$
The paper indicates that the weak form is as follows:
$$ \frac{d}{d t}\left(u(t,\cdot),v\right) + g(u(t,\cdot),v) = 0, \quad \forall v \in H_0^1(A), $$
$$ u(0,\cdot) = 1, $$
where $ g(u(t,\cdot),v) = \frac{1}{2} \left(a \nabla u(t,\cdot), \nabla v\right) - \left( (b-\operatorname{div} a)\nabla u,v\right) $.
I assume $ (a,b) = \int_A a(x)\, b(x)\, dx $.
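Spelled out componentwise, I take the first term of $g$ to mean (my assumption, analogous to the inner product above):
$$ \left(a \nabla u(t,\cdot), \nabla v\right) = \int_A \sum_i \sum_j a_{ij}(x)\, \frac{\partial u(t,x)}{\partial x_i}\, \frac{\partial v(x)}{\partial x_j}\, dx. $$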
This seems to be a classical result; the issue is that I am not familiar with the notation, nor with tensor/variational calculus. I assume it involves a multivariate integration by parts, which is foreign to me.
- How do I derive $g(u(t,\cdot),v)$?
- What is the divergence of the matrix-valued $a$?
I get the part with $b$: $\int_A \left( \sum_i b_i(x) \frac{\partial u(t,x)}{\partial x_i} \right) v(x)\, dx = (b\nabla u,v)$.
The problem is the part with $a$.
Thanks a lot for any help!
Source of the problem:
P. Patie, C. Winter, (2008) "First exit time probability for multidimensional diffusions: A PDE-based approach"
pde brownian-motion calculus-of-variations parabolic-pde
The answer is really just integration by parts. This should also clarify what is meant by $\operatorname{div}$ in this case (you may want to use the product rule on that term).
– MaoWao
Nov 25 at 0:13
Ok, I will try to look more into this. The Wikipedia section on IBP in higher dimensions was intimidating. I should find a more "textbook"-style explanation.
– RemiDav
Nov 25 at 0:38
You have to be a little careful with regularity issues, but to get a rough understanding, just assume that you can extend $u(t,\cdot)$ by $0$ to the entire space. Then you can just integrate every component separately and the boundary terms vanish, so that you end up with $\int (\partial_i u)\,v=-\int u\,\partial_i v$.
– MaoWao
Nov 25 at 0:48
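As a minimal illustration of the identity in the comment above, here is a SymPy sketch on $(0,1)$; the choices of $u$ and $v$ are purely illustrative, and $v$ vanishes at both endpoints, which is what makes the boundary term drop. If the identity holds, the printed difference should be $0$.

```python
# Minimal symbolic check of: int (d u/dx) v dx = - int u (d v/dx) dx on (0,1),
# assuming v vanishes at both endpoints. The choices of u and v are illustrative.
import sympy as sp

x = sp.symbols('x')
u = sp.exp(x)          # any smooth u
v = x * (1 - x)        # v(0) = v(1) = 0

lhs = sp.integrate(sp.diff(u, x) * v, (x, 0, 1))
rhs = -sp.integrate(u * sp.diff(v, x), (x, 0, 1))

print(sp.simplify(lhs - rhs))  # expected: 0
```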
Thanks a lot for your help, I posted what I found as an answer based on your advice. The issue is that I don't find the same result as the paper (T_T). Also I am not sure what happens with weird boundaries such as $x \in (0,\infty)$. Does $v(x)$ vanish at infinity too?
– RemiDav
Nov 25 at 3:56
1 Answer
I hope this is correct; there is still some uncertainty about some parts.
Focusing on:
$$ \sum_i \sum_j \int a_{ij}(x)\, \partial_{ij} u(x)\, v(x)\, dx \tag{1} \label{1}$$
For a given dimension (e.g. $x_j$) we integrate by parts.
We have $[a_{ij}(x)\, \partial_{i} u(x)\, v(x)]_{\partial A}=0$, since $v(x) = 0$ on the boundary.
$$ \int a_{ij}(x)\, \partial_{ij} u(x)\, v(x)\, dx = [a_{ij}(x)\, \partial_{i} u(x)\, v(x)]_{\partial A} - \int \left( \partial_j a_{ij}(x)\, v(x) + a_{ij}(x)\, \partial_j v(x)\right)\partial_i u(x)\, dx \\
= - \int \partial_j a_{ij}(x)\, v(x)\, \partial_i u(x)\, dx - \int a_{ij}(x)\, \partial_j v(x)\, \partial_i u(x)\, dx
$$
Now we sum the two terms over $i$ and $j$.
For the first term we have:
$$
\begin{align}
\sum_i \sum_j \int \partial_j a_{ij}(x)\, v(x)\, \partial_i u(x)\, dx &
= \int \left( \sum_i \left( \sum_j \partial_j a_{ij}(x)\right) \partial_i u(x)\right) v(x)\, dx \\
& = \int \left( \sum_i (\operatorname{div} a)_i\, \partial_i u(x)\right) v(x)\, dx \\
& = (\operatorname{div} a\, \nabla u,v)
\end{align}
$$
where $ (\operatorname{div} a)_i = \sum_j \partial_j a_{ij}(x) $.
For the second term we have:
$$
\begin{align}
\sum_i \sum_j \int a_{ij}(x)\, \partial_j v(x)\, \partial_i u(x)\, dx
& = \int \sum_j \left( \sum_i a_{ij}(x)\, \partial_i u(x) \right) \partial_j v(x)\, dx \\
& = ( a\nabla u , \nabla v)
\end{align}
$$
Plugging this into $\eqref{1}$, we get:
$$ (1) = - (\operatorname{div} a\, \nabla u,v) - ( a\nabla u , \nabla v). \tag{2} $$
Coming back to the original term, we wanted:
$$ - \int \left( \frac{1}{2} \sum_i \sum_j a_{ij}(x)\, \partial_{ij} u(x) + \sum_i b_i(x)\, \partial_i u(x) \right) v(x)\, dx. \tag{3} $$
Using the fact that $ \int \sum_i b_i(x)\, \partial_i u(x)\, v(x)\, dx = (b\nabla u,v) $ and $(2)$, we get:
$$ \begin{align}
(3) & = \frac{1}{2} (\operatorname{div} a\, \nabla u,v) + \frac{1}{2} ( a\nabla u , \nabla v) - (b\nabla u,v) \\
& = \frac{1}{2} ( a\nabla u , \nabla v) - \left( \left(b - \frac{1}{2} \operatorname{div} a \right) \nabla u, v \right)
\end{align}
$$
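As a sanity check of the algebra above (not of the paper's conventions), here is a SymPy sketch on $A=(0,1)^2$ with concrete, purely illustrative choices of $a$, $b$, $u$ and a test function $v$ vanishing on $\partial A$; if the derivation is right, it should print $0$.

```python
# Symbolic sanity check of the integration-by-parts computation above:
# verify that  -int ( 1/2 sum_ij a_ij d_ij u + sum_i b_i d_i u ) v dx
# equals       1/2 (a grad u, grad v) - ((b - 1/2 div a) grad u, v)
# on A = (0,1)^2 with v vanishing on the boundary.
# All choices of a, b, u, v below are illustrative, not taken from the paper.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]

a = sp.Matrix([[2 + x1*x2, x1],
               [x1,        1 + x2**2]])   # smooth symmetric coefficient matrix
b = [x2, 1 + x1]                          # smooth drift vector

u = sp.sin(sp.pi*x1) * x2**2              # any smooth u
v = x1*(1 - x1) * x2*(1 - x2)             # vanishes on the boundary of (0,1)^2

def integrate_A(f):
    """Integrate f over the unit square (0,1)^2."""
    return sp.integrate(sp.integrate(f, (x1, 0, 1)), (x2, 0, 1))

# Left-hand side: -int ( 1/2 sum_ij a_ij d_ij u + sum_i b_i d_i u ) v dx
lhs = -integrate_A((sp.Rational(1, 2) * sum(a[i, j] * sp.diff(u, X[i], X[j])
                                            for i in range(2) for j in range(2))
                    + sum(b[i] * sp.diff(u, X[i]) for i in range(2))) * v)

# Right-hand side: 1/2 (a grad u, grad v) - ((b - 1/2 div a) grad u, v),
# with (div a)_i = sum_j d a_ij / dx_j.
div_a = [sum(sp.diff(a[i, j], X[j]) for j in range(2)) for i in range(2)]
rhs = (sp.Rational(1, 2) * integrate_A(sum(a[i, j] * sp.diff(u, X[i]) * sp.diff(v, X[j])
                                           for i in range(2) for j in range(2)))
       - integrate_A(sum((b[i] - sp.Rational(1, 2) * div_a[i]) * sp.diff(u, X[i]) * v
                         for i in range(2))))

print(sp.simplify(lhs - rhs))  # expected output: 0
```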
This is different from what the paper gives; I have an additional "$\frac{1}{2}$" (who made the error?):
$$ g = \frac{1}{2} ( a\nabla u , \nabla v) - \left( (b - \operatorname{div} a ) \nabla u, v \right) $$
So I see that it mostly works if we have a nice bounded set such as $A=\{l_i<x_i<u_i,\; i=1,\dots,d\}$.
But I am not sure what happens if it is unbounded on one side: $A=\{x_i<u_i,\; i=1,\dots,d\}$.
Or worse, with a weird boundary: $A=\{x_i-x_j<u_i,\; i,j=1,\dots,d\}$.
Feel free to comment, I will edit the answer accordingly.
– RemiDav (answered Nov 25 at 3:26, edited Nov 25 at 4:53)