If $A^\mu$ is not determined uniquely by Maxwell's equations, what happens if we solve for it numerically?
Given a solution $A^{\mu}(x)$ to Maxwell's equations
\begin{equation}
\Box A^{\mu}(x)-\partial^{\mu}\partial_{\nu}A^{\nu}=0\tag{1}
\end{equation}
which also satisfies specified initial conditions at time $t_0$,
\begin{equation}
A^{\mu}(\vec{x},t_0)=f^{\mu}(\vec{x}),\quad \dot{A}^{\mu}(\vec{x},t_0)=g^{\mu}(\vec{x}),\tag{2}
\end{equation}
we have that the function
\begin{equation}
A^{\prime\mu}(x)=A^{\mu}(x)+\partial^{\mu}\alpha(x)\tag{3}
\end{equation}
also satisfies the equations of motion, and if we arrange that the scalar function $\alpha$ also satisfies
\begin{equation}
\partial^{\mu}\alpha(\vec{x},t_0)=0,\quad \partial^{\mu}\dot{\alpha}(\vec{x},t_0)=0 \tag{4}
\end{equation}
at the initial time $t_0$, then the new solution $A^{\prime\mu}$ also satisfies the initial conditions. For example, the function
\begin{equation}
\alpha(\vec{x},t)=(t-t_0)^5\,h(\vec{x})\,e^{-(t-t_0)^2}
\end{equation}
satisfies the conditions of Eq.$(4)$ and also vanishes as $t\rightarrow \pm\infty$. Therefore, the solution to Eq.$(1)$ is not uniquely determined by the initial data of Eq.$(2)$.
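A quick symbolic sanity check of this, assuming SymPy is available and leaving the spatial profile $h$ as an arbitrary function (a sketch, not part of the argument above):

```python
# Check that alpha(x, t) = (t - t0)^5 h(x) exp(-(t - t0)^2) satisfies Eq. (4):
# every component of d^mu alpha and of d^mu alpha-dot vanishes at t = t0.
import sympy as sp

t, x, t0 = sp.symbols('t x t0', real=True)
h = sp.Function('h')  # arbitrary smooth spatial profile
alpha = (t - t0)**5 * h(x) * sp.exp(-(t - t0)**2)

grad = [sp.diff(alpha, t), sp.diff(alpha, x)]     # components of d^mu alpha (1+1 dims)
grad_dot = [sp.diff(g, t) for g in grad]          # components of d^mu alpha-dot

print([sp.simplify(expr.subs(t, t0)) for expr in grad + grad_dot])   # -> [0, 0, 0, 0]
```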
Question: If one simulates Eq.$(1)$ numerically on a computer, why is the field configuration at a later time not uniquely determined by the data in Eq.$(2)$?
electromagnetism gauge-theory maxwell-equations boundary-conditions determinism
asked Nov 16 at 23:54 by Luke; edited Nov 17 at 10:29 by knzhou
Try and simulate it yourself. Spoiler alert: you won't be able to, at least not without fixing the gauge first. Numerically solving a PDE requires, for example, inverting a matrix/solving a linear system. This doesn't work when you have gauge invariance, because the matrix is singular.
– AccidentalFourierTransform
Nov 17 at 0:04
@AccidentalFourierTransform This isn't quite true. Your numerics may or may not converge to a solution, depending on the algorithm. Some techniques involve solving a linear system, and they'll fail, but many techniques will e.g. trivially converge to the solution $\alpha \equiv 0$. The issue is non-uniqueness, not non-existence.
– tparker
Nov 17 at 2:49
@tparker I never said anything about non-existence. A linear system with a singular matrix has an infinite number of solutions. So we agree the issue is about non-uniqueness, not about non-existence.
– AccidentalFourierTransform
Nov 17 at 2:53
Related: physics.stackexchange.com/q/20071/2451
– Qmechanic♦
Nov 17 at 3:37
"If one simulates Eq.(1) numerically on a computer, why is the field configuration at a later time not uniquely determined by the data in Eq.(2)?" I don't think the assumption is true. Even though the solution is non-unique, your algorithm can converge to a particular solution. Take the ordinary equation $x^2=1$. If you apply the bisection method in the interval $[0,2]$, you find the solution $x=1$, although you miss $x=-1$. Other methods might not converge. So I think that without specifying a particular numerical method, answers are going to be very vague.
– jinawee
Nov 17 at 11:46
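For concreteness, a minimal sketch of the bisection example from the comment above (plain Python; the tolerance and the stopping criterion are illustrative choices, not specified in the comment):

```python
# Bisection on f(x) = x^2 - 1 over [0, 2]: the iteration converges to the
# root x = 1 and never "sees" the other solution x = -1 outside the bracket.
def bisect(f, a, b, tol=1e-12):
    assert f(a) * f(b) <= 0, "the root must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

print(bisect(lambda x: x**2 - 1, 0.0, 2.0))   # ~1.0
```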
2 Answers
Not all initial value problems have a unique solution. Your example of the $\alpha$ function demonstrates that this initial value problem is of such a kind.
In this case, the problem is in the system of partial differential equations
$$
\partial_\nu\partial^\nu A^{\mu}-\partial^{\mu}\partial_{\nu}A^{\nu}=0
$$
itself; it does not put enough constraints on the functions $\varphi(\mathbf x,t), \mathbf A(\mathbf x,t)$. It is somewhat similar to the situation in linear algebra where a system of $n$ linear equations for $n$ unknowns has an infinity of solutions.
A slightly different way to see this: notice that nowhere in the above system of PDEs can we find $\partial_t^2 A^0$ or $\partial_t A^0$ directly; only a spatial gradient of $\partial_t A^0$ is present. The equations for the $A^i$'s do not relate them directly to time derivatives of $\varphi$.
This means that if we have a solution of the initial value problem $\varphi(x,t),\mathbf A(x,t)$ and replace the scalar potential by $\varphi' = \varphi(x,t)+ht^2$ at time $t = 0$ (where $h$ is a constant), the equations are still satisfied, and at $t=0$ the initial conditions are satisfied too. This would not be so obviously possible if the system contained time derivatives of $\varphi$ directly. Consider a slightly different system,
$$
\partial_\nu\partial^\nu A^{\mu}= 0
$$
(which in EM theory can be derived as a result of the Lorenz gauge choice): this does constrain $\partial_t^2 \varphi$, so the above argument fails. I think this system should have a unique solution, because it is very similar to a set of equations for independent harmonic oscillators. However, for a proof, better check with mathematicians.
– Ján Lalinský (answered Nov 17 at 2:14, edited Nov 17 at 10:44)
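For illustration, a small symbolic check of the point made in this answer, in 1+1 dimensions with metric signature $(+,-)$ and assuming SymPy is available: the $\mu=0$ component of the constrained system contains no $\partial_t^2 A^0$ term, while the Lorenz-gauge equation does.

```python
# mu = 0 component of  Box A^mu - d^mu d_nu A^nu = 0  in 1+1 dimensions:
# the second time derivatives of A^0 cancel, so this equation cannot be used
# to evolve A^0 forward in time. The Lorenz-gauge equation Box A^0 = 0 keeps it.
import sympy as sp

t, x = sp.symbols('t x', real=True)
A0 = sp.Function('A0')(t, x)
A1 = sp.Function('A1')(t, x)

box = lambda f: sp.diff(f, t, 2) - sp.diff(f, x, 2)   # d'Alembertian, signature (+,-)
divA = sp.diff(A0, t) + sp.diff(A1, x)                # partial_nu A^nu

eq_mu0 = sp.expand(box(A0) - sp.diff(divA, t))        # mu = 0 component of the system
d2t_A0 = sp.diff(A0, t, 2)

print(eq_mu0)                        # only spatial and mixed derivatives survive
print(eq_mu0.coeff(d2t_A0))          # -> 0: no d_t^2 A^0 in the constrained system
print(box(A0).coeff(d2t_A0))         # -> 1: present in the Lorenz-gauge equation
```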
Are you asking for the physical or mathematical explanation? Dan Yand's answer gives the physical explanation.
Regarding the mathematical question: on what basis would you expect the field configuration to be uniquely determined by its initial data? Unlike for (uncoupled) ODEs, there is no theorem to that effect for general linear homogeneous second-order PDEs.
– tparker (answered Nov 17 at 0:53, edited Nov 17 at 2:56)
The issue is not about PDE vs ODE. There are point-particle systems with gauge symmetries whose time evolution is not uniquely fixed by the equations of motion. And vice versa: there are field systems whose time evolution is uniquely fixed by the equations of motion (say, the heat/Schrödinger equation). The issue is about invertibility of the differential operator, equivalently about the existence of a unique Green function. Obstructions may appear whether the system is one-dimensional or not.
– AccidentalFourierTransform
Nov 17 at 1:22
A Lagrangian of the form $L(q_1,q_2)=f(q_1-q_2)$, for arbitrary $f$, is invariant under $q_i(t)\to q_i(t)+\eta(t)$. The system has a gauge symmetry. I leave it to you to pick some specific $f$ and compute the Euler-Lagrange equations. You get two redundant equations of motion, so only one independent equation for two degrees of freedom. No unique solution. Etc. (And if we are just going to cite references, let me quote Henneaux, Teitelboim, "Quantization of Gauge Systems", which is a book about point particles, not fields.)
– AccidentalFourierTransform
Nov 17 at 2:51
@AccidentalFourierTransform Oops, you're right. I meant that a function $\mathbb{R} \to \mathbb{R}$ can't have a gauge freedom, but you can get around that by adding more variables at either end of the arrow. Edited to clarify.
– tparker
Nov 17 at 2:57
A system with a single degree of freedom, if it has a gauge symmetry, has no effective degrees of freedom at all. So its dynamics are purely topological and/or due to constraints. For example, a relativistic point particle, in the reparametrisation-invariant formalism, has a gauge symmetry, and it is still $\mathbb R\to\mathbb R$.
– AccidentalFourierTransform
Nov 17 at 3:00
@AccidentalFourierTransform I would describe a point particle (whether relativistic or not) with trajectory $(t(\tau), x(\tau))$ as being described by a function $\mathbb{R} \to \mathbb{R}^2$, not $\mathbb{R} \to \mathbb{R}$.
– tparker
Nov 17 at 3:10
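For what it's worth, here is a tiny sketch of the toy model from the Lagrangian comment above, with the (assumed, not specified there) choice $f(u)=u^2/2$; since $L$ contains no velocities, the Euler-Lagrange equations reduce to $\partial L/\partial q_i=0$, and the two equations are manifestly redundant:

```python
# Toy gauge system L(q1, q2) = f(q1 - q2) with the illustrative choice f(u) = u^2/2.
# The Euler-Lagrange equations are dL/dq1 = 0 and dL/dq2 = 0 (no velocity terms),
# and they differ only by a sign, so there is one equation for two unknowns.
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
L = sp.Rational(1, 2) * (q1 - q2)**2

eq1 = sp.diff(L, q1)            #  q1 - q2
eq2 = sp.diff(L, q2)            # -(q1 - q2)
print(sp.simplify(eq1 + eq2))   # -> 0: the two equations are redundant
```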