Joint Gaussian PDF Change of Coordinates

My textbook says the following:




Given a vector $\mathbf{x}$ of random variables $x_i$ for $i = 1, \dots, N$, with mean $\bar{\mathbf{x}} = E[\mathbf{x}]$, where $E[\cdot]$ denotes the expected value, and $\Delta \mathbf{x} = \mathbf{x} - \bar{\mathbf{x}}$, the covariance matrix $\Sigma$ is an $N \times N$ matrix given by



$$\Sigma = E[\Delta \mathbf{x} \, \Delta \mathbf{x}^T]$$



so that $\Sigma_{ij} = E[\Delta x_i \, \Delta x_j]$. The diagonal entries of the matrix $\Sigma$ are the variances of the individual variables $x_i$, whereas the off-diagonal entries are the cross-covariance values.



The variables $x_i$ are said to conform to a joint Gaussian distribution if the probability distribution of $\mathbf{x}$ is of the form



$$P(\bar{\mathbf{x}} + \Delta \mathbf{x}) = (2 \pi)^{-N/2} \det(\Sigma^{-1})^{1/2} \exp\!\left(-(\Delta \mathbf{x})^T \Sigma^{-1} (\Delta \mathbf{x})/2\right) \tag{A2.1}$$



for some positive-semidefinite matrix $\Sigma^{-1}$.



$\vdots$



Change of coordinates. Since $\Sigma$ is symmetric and positive-definite, it may be written as $\Sigma = U^T D U$, where $U$ is an orthogonal matrix and $D = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \dots, \sigma_N^2)$ is diagonal. Writing $\mathbf{x}' = U \mathbf{x}$ and $\bar{\mathbf{x}}' = U \bar{\mathbf{x}}$, and substituting in (A2.1), leads to



$$\begin{align*}\exp(-(\mathbf{x} - \bar{\mathbf{x}})^T \Sigma^{-1} (\mathbf{x} - \bar{\mathbf{x}})/2) &= \exp(-(\mathbf{x}' - \bar{\mathbf{x}}')^T U \Sigma^{-1} U^T (\mathbf{x}' - \bar{\mathbf{x}}')/2) \\ &= \exp(-(\mathbf{x}' - \bar{\mathbf{x}}')^T D^{-1} (\mathbf{x}' - \bar{\mathbf{x}}')/2)\end{align*}$$



Thus, the orthogonal change of coordinates from $\mathbf{x}$ to $\mathbf{x}' = U \mathbf{x}$ transforms a general Gaussian PDF into one with diagonal covariance matrix. A further scaling by $\sigma_i$ in each coordinate direction may be applied to transform it to an isotropic Gaussian distribution. Equivalently stated, a change of coordinates may be applied to transform Mahalanobis distance to ordinary Euclidean distance.
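
For what it's worth, the quoted claim does check out numerically. Below is a small sketch of my own (using numpy; `eigh`, the randomly built $\Sigma$, and all variable names are mine, not the book's) that constructs the factorization $\Sigma = U^T D U$ and confirms that rotating by $U$ and then scaling each coordinate by $1/\sigma_i$ turns Mahalanobis distance into ordinary Euclidean distance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive-definite covariance matrix Sigma.
N = 4
A = rng.standard_normal((N, N))
Sigma = A @ A.T + N * np.eye(N)

# eigh returns Sigma = V diag(w) V^T with orthonormal columns in V,
# so U = V^T gives the book's factorization Sigma = U^T D U.
w, V = np.linalg.eigh(Sigma)
U, D = V.T, np.diag(w)
assert np.allclose(Sigma, U.T @ D @ U)

# Squared Mahalanobis distance of a point x from the mean xbar ...
x, xbar = rng.standard_normal(N), rng.standard_normal(N)
dx = x - xbar
mahalanobis_sq = dx @ np.linalg.inv(Sigma) @ dx

# ... equals the ordinary squared Euclidean norm after rotating by U
# and scaling coordinate i by 1/sigma_i (i.e. applying D^{-1/2}).
y = np.diag(1.0 / np.sqrt(w)) @ (U @ dx)
assert np.allclose(mahalanobis_sq, y @ y)
```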




I don't understand how the author derived these expressions:




$$\begin{align*}\exp(-(\mathbf{x} - \bar{\mathbf{x}})^T \Sigma^{-1} (\mathbf{x} - \bar{\mathbf{x}})/2) &= \exp(-(\mathbf{x}' - \bar{\mathbf{x}}')^T U \Sigma^{-1} U^T (\mathbf{x}' - \bar{\mathbf{x}}')/2) \\ &= \exp(-(\mathbf{x}' - \bar{\mathbf{x}}')^T D^{-1} (\mathbf{x}' - \bar{\mathbf{x}}')/2)\end{align*}$$




My attempt was as follows:



$$\begin{align*}\exp(-(\mathbf{x} - \bar{\mathbf{x}})^T \Sigma^{-1} (\mathbf{x} - \bar{\mathbf{x}})/2) &= \exp(-(U^{-1}\mathbf{x}' - U^{-1} \bar{\mathbf{x}}')^T \Sigma^{-1} (U^{-1} \mathbf{x}' - U^{-1} \bar{\mathbf{x}}')/2) \\ &= \exp(-(U^{-1}(\mathbf{x}' - \bar{\mathbf{x}}'))^T \Sigma^{-1} (U^{-1}(\mathbf{x}' - \bar{\mathbf{x}}'))/2) \\ &= \exp(-((\mathbf{x}' - \bar{\mathbf{x}}')^T U) \Sigma^{-1} \dfrac{U^T}{2} (\mathbf{x}' - \bar{\mathbf{x}}')/2) \tag{*}\end{align*}$$



(*) Since $(AB)^T = B^T A^T$ and $U^T = U^{-1}$ ($U$ is orthogonal).
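
I also verified numerically that the author's two claimed equalities do hold (a quick sketch in numpy; the random $\Sigma$ and the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
A = rng.standard_normal((N, N))
Sigma = A @ A.T + N * np.eye(N)      # symmetric positive-definite

w, V = np.linalg.eigh(Sigma)         # Sigma = V diag(w) V^T
U, D = V.T, np.diag(w)               # hence Sigma = U^T D U with U orthogonal

x, xbar = rng.standard_normal(N), rng.standard_normal(N)
xp, xbarp = U @ x, U @ xbar          # x' = U x,  xbar' = U xbar

lhs = (x - xbar) @ np.linalg.inv(Sigma) @ (x - xbar)
mid = (xp - xbarp) @ U @ np.linalg.inv(Sigma) @ U.T @ (xp - xbarp)
rhs = (xp - xbarp) @ np.linalg.inv(D) @ (xp - xbarp)
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)
```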



As you can see, I can't figure out how to derive the expressions that the author outlines. In fact, based on my work above, I can't see how such a derivation is possible.



I would greatly appreciate it if people could please take the time to demonstrate this.










linear-algebra probability statistics orthogonal-matrices change-of-variable






asked Nov 21 at 21:46









The Pointer

  • Isn't your derivation exactly the proof? In your last line you add a factor $\frac{1}{2}$ erroneously, but otherwise I am not sure I see the issue. The last step just uses that $\Sigma = U^T D U \Rightarrow \Sigma^{-1} = U^T D^{-1} U$.
    – Jonathan
    Nov 21 at 21:58












  • @Jonathan thanks for the response. Yes, I mistakenly added in a factor $\dfrac{1}{2}$; thanks for pointing that out. Can you please explain how you found the last implication?
    – The Pointer
    Nov 21 at 22:10








  • Sure! I'll write an answer.
    – Jonathan
    Nov 21 at 22:12














1 Answer
You are only missing the implication $\Sigma = U^T D U \Rightarrow \Sigma^{-1} = U^T D^{-1} U$. Now, by definition we can write $\Sigma = U^T D U$. For invertible matrices $A, B$ it holds that $(AB)^{-1} = B^{-1} A^{-1}$. Therefore
$$
\Sigma^{-1} = (U^T D U)^{-1} = U^{-1} (U^T D)^{-1} = U^{-1} D^{-1} (U^T)^{-1} = U^T D^{-1} U
$$

where we used the fact that $U^{-1} = U^T$.
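
If it helps, here is a quick numerical sanity check of that implication (a sketch using numpy, where `eigh` supplies the factorization; the random $\Sigma$ and the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
A = rng.standard_normal((N, N))
Sigma = A @ A.T + N * np.eye(N)      # symmetric positive-definite

w, V = np.linalg.eigh(Sigma)         # Sigma = V diag(w) V^T
U, D = V.T, np.diag(w)               # i.e. Sigma = U^T D U with U orthogonal

# The implication used in the last step: Sigma^{-1} = U^T D^{-1} U.
assert np.allclose(np.linalg.inv(Sigma), U.T @ np.linalg.inv(D) @ U)
```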






answered Nov 21 at 22:17
– Jonathan
  • Ahh, wait, what about the $-1$ factor? This means we have $-U$ instead of $U$?
    – The Pointer
    Nov 21 at 22:35








  • The $-1$ factor is preserved through the entire calculation.
    – Jonathan
    Nov 21 at 22:36












  • Ahh, yes, I’m just confusing myself. Thanks again.
    – The Pointer
    Nov 21 at 22:38










  • No problem, glad to help.
    – Jonathan
    Nov 21 at 22:39










