If a matrix A is both triangular and unitary, then it is diagonal














I have a proof I was hoping someone could review. Thanks in advance!



Let $A$ be a matrix that is both triangular and unitary.
Then the columns of $A$ have length 1 and are orthogonal to each other.
Assume for this case that $A$ is upper triangular.
Set $a_n =$ column $n$ of $A$.



Clearly, since $A$ is upper triangular and the columns have length 1,
$$
a_1 = \begin{bmatrix}
\pm 1 \\
0 \\
0 \\
\vdots \\
0
\end{bmatrix}
$$



Now column $a_2$ has a similar composition to $a_1$: its second component $y_2$, which lies on the diagonal of $A$, has the value $\pm 1$, and all other components are zero. This is for two reasons. First, since $A$ is upper triangular, only the first two entries of $a_2$ can be nonzero. Second, since $a_1 \cdot a_2 = 0$ by orthogonality, we have:



$$
(\pm 1)\, y_1 + 0 \cdot y_2 + 0 + \dots + 0 = 0
$$



where $a_2$ has components $y_1, y_2, \dots, y_n$. So clearly $y_1$ must be zero, and for $i > 2$ we know $y_i = 0$ by the upper triangular property, so



$$
a_2 = \begin{bmatrix}
0 \\
\pm 1 \\
0 \\
\vdots \\
0
\end{bmatrix}
$$



The remainder of the proof uses induction, with these first two columns as the base case.



Suppose the claim holds for the first $n$ columns; we wish to prove that $a_{n+1}$ is a zero vector except in its $(n+1)$-st component (where $\pm 1$ resides). Given the column $a_{n+1}$, we note that orthogonality with all the previous $n$ columns shows that the first $n$ components of $a_{n+1}$ must be $0$.



For the $i$-th component of the column, with $i < n+1$, we have:



$a_1 \cdot a_{n+1} = 0 \implies$ the 1st component must be zero (as above),



$a_2 \cdot a_{n+1} = 0 \implies$ the 2nd component must be zero (as above),
and so on, until we conclude that each component before the $(n+1)$-st must equal zero by orthogonality.
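
In compact form, writing $y_1, \dots, y_{n+1}$ for the components of $a_{n+1}$ and using the induction hypothesis that $a_i$ has $\pm 1$ in its $i$-th component and zeros elsewhere (this is only a summary of the step above):

$$
0 = a_i \cdot a_{n+1} = (\pm 1)\, y_i \implies y_i = 0, \qquad 1 \le i \le n.
$$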



Hence only the $(n+1)$-st component of the column $a_{n+1}$ can equal $\pm 1$, since the inner products with the previous $n$ columns force the components above it to be zero (and any components below it are zero because $A$ is upper triangular).



Hence for the upper triangular case, the matrix A must be diagonal.
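
As an informal numerical sanity check of this conclusion, here is a minimal NumPy sketch (the helper names `is_unitary` and `is_upper_triangular` are my own, not standard functions): a diagonal matrix with unit-modulus diagonal entries is both triangular and unitary, while putting any nonzero entry above the diagonal keeps the matrix triangular but breaks unitarity.

    import numpy as np

    def is_unitary(M, tol=1e-12):
        # columns are orthonormal  <=>  M^H M = I
        return np.allclose(M.conj().T @ M, np.eye(M.shape[0]), atol=tol)

    def is_upper_triangular(M, tol=1e-12):
        return np.allclose(M, np.triu(M), atol=tol)

    rng = np.random.default_rng(0)
    n = 5

    # diagonal with unit-modulus entries: triangular and unitary
    D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, size=n)))
    assert is_upper_triangular(D) and is_unitary(D)

    # a nonzero off-diagonal entry keeps it triangular but destroys
    # unitarity: that column no longer has norm 1, nor is it orthogonal
    # to the earlier columns
    A = D.copy()
    A[0, 2] = 0.3
    assert is_upper_triangular(A) and not is_unitary(A)
    print("sanity check passed")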



I believe the lower triangular case to be a similar proof, but have not delved into it yet.



Thanks!










linear-algebra proof-verification

asked Dec 23 '18 at 6:08 by H_1317; edited Dec 23 '18 at 8:35 by user1551



















  • \pm gives $\pm$. – Lord Shark the Unknown, Dec 23 '18 at 6:26

  • Your proof looks okay to me. – Shubham Johri, Dec 23 '18 at 12:00
























2 Answers



















I think our OP H_1317's proof is conceptually correct.



Here is a somewhat more abstract proof:



Suppose $A$ is upper triangular; then I claim that $A^{-1}$ is also upper triangular; for we may write



$$A = D + T, \tag 1$$

where $D$ is diagonal and $T$ is strictly upper triangular; that is, the diagonal entries of $T$ are all zero; we observe that, since $A$ is triangular, $\det(A)$ is the product of the diagonal entries of $A$; since $A$ is unitary, it is non-singular and thus $\det(A) \ne 0$, so none of the diagonal entries of $A$ vanish, and the same applies to $D$; therefore $D$ is invertible and we may write

$$A = D(I + D^{-1}T); \tag 2$$

we next observe that $D^{-1}T$ is itself strictly upper triangular, hence nilpotent; in fact we have

$$(D^{-1}T)^n = 0, \tag 3$$

where $n = \text{size}(A)$; the nilpotence of $D^{-1}T$ allows us to write an explicit inverse for $I + D^{-1}T$; indeed, we have the well-known formula

$$(I + D^{-1}T) \displaystyle \sum_{k=0}^{n - 1} (-D^{-1}T)^k = I - (-D^{-1}T)^n = I; \tag 4$$



thus,



$$(I + D^{-1}T)^{-1} = \displaystyle \sum_{k=0}^{n - 1} (-D^{-1}T)^k; \tag 5$$



since every matrix $(-D^{-1}T)^k$ occurring in this sum is upper triangular, we see that $(I + D^{-1}T)^{-1}$ is upper triangular as well; from (2),



$$A^{-1} = (I + D^{-1}T)^{-1}D^{-1}, \tag 6$$



which shows that $A^{-1}$ is upper triangular.
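
A quick numerical sketch of (2)-(6), assuming NumPy (the variable names are mine): split a random invertible upper triangular $A$ into $D + T$, confirm the finite sum in (5) inverts $I + D^{-1}T$, and confirm the resulting $A^{-1}$ is upper triangular.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    A = np.triu(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    A[np.diag_indices(n)] += 3.0        # keep the diagonal entries away from zero

    D = np.diag(np.diag(A))             # diagonal part
    T = A - D                           # strictly upper triangular part
    N = np.linalg.inv(D) @ T            # D^{-1} T, strictly upper triangular

    # nilpotence (3) and the finite Neumann series (4)-(5)
    assert np.allclose(np.linalg.matrix_power(N, n), 0)
    S = sum(np.linalg.matrix_power(-N, k) for k in range(n))
    assert np.allclose((np.eye(n) + N) @ S, np.eye(n))

    # equation (6): A^{-1} = (I + D^{-1}T)^{-1} D^{-1}, and it is upper triangular
    A_inv = S @ np.linalg.inv(D)
    assert np.allclose(A @ A_inv, np.eye(n))
    assert np.allclose(A_inv, np.triu(A_inv))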



Having established my claim, we now invoke the unitarity of $A$:



$$A^\dagger A = AA^\dagger = I, \tag 7$$

i.e.,

$$A^\dagger = A^{-1}; \tag 8$$

we have by definition

$$A^\dagger = (A^\ast)^T = ((D + T)^\ast)^T = (D^\ast + T^\ast)^T = (D^\ast)^T + (T^\ast)^T, \tag 9$$

from which we see that $A^\dagger$ is lower triangular when $A$, and hence $A^{-1}$, is upper triangular;
the only way (8) can hold, with $A^{-1}$ upper triangular and $A^\dagger$ lower triangular, is for both to be diagonal, which forces $(T^\ast)^T = 0$, i.e. $T = 0$; therefore we see that

$$A = D \tag{10}$$

is a diagonal matrix.

Of course, if $A$ is lower triangular the same result holds, the proof being almost identical to that given above. $OE\Delta$.






answered Dec 23 '18 at 20:18 by Robert Lewis (edited Dec 23 '18 at 21:06)




















  • Thanks for the alternate proof. Would my conceptually correct answer be equivalent to normally correct here? – H_1317, Dec 23 '18 at 21:12

  • @H_1317: Yes, I think so; the only reason I said "conceptually correct" instead of simply "correct" was that I was having a little trouble sorting out your equations $a_1 \ast a_{n+1} = 0$ etc. But I think I've got it now. – Robert Lewis, Dec 23 '18 at 21:14

  • Ahh yes — those equations in the induction part were my biggest worry for a potential error or maybe just not convincing enough. – H_1317, Dec 23 '18 at 21:16




















Here is a proof, using only elementary row operations, that an invertible lower triangular matrix has a lower triangular inverse. From this it follows that a unitary triangular matrix must be diagonal: for a lower triangular unitary $A$, the inverse $A^{-1}=A^\dagger$ is then both lower triangular and (being the conjugate transpose of a lower triangular matrix) upper triangular, hence diagonal, and so is $A$.



Suppose a lower triangular matrix $A$ is invertible, say
$$
A=\begin{bmatrix}
a_{11} & 0 & 0 & \dots & 0 \\
a_{21} & a_{22} & 0 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn}
\end{bmatrix}
$$

Then $a_{ii}\ne0$, for $i=1,2,\dots,n$.



Denote by $E_i(c)$ the matrix obtained from the $n\times n$ identity matrix by multiplying the $i$-th row by $c\ne0$; similarly, denote by $E_{ij}(d)$ the matrix obtained from the identity by adding to the $i$-th row the $j$-th row multiplied by $d$ (here it is assumed that $i\ne j$).



Note that $E_i(c)$ ($c\ne0$) and, for $i>j$ (the only case used below), $E_{ij}(d)$ are invertible lower triangular matrices and their inverses are, respectively, $E_i(c^{-1})$ and $E_{ij}(-d)$, again lower triangular.



Then it is easily seen that multiplying a matrix on the left by $E_i(c)$ or $E_{ij}(d)$ is the same as performing the corresponding elementary row operation. You can also verify that the product
$$
E_1(a_{11})E_{21}(a_{21})\dotsm E_{n1}(a_{n1}) E_{2}(a_{22})\dotsm E_{n}(a_{nn})I_n
$$

gives back the matrix $A$, just by performing the elementary row operations successively: starting from the right, the first row operation places $a_{nn}$ at position $(n,n)$; the next one, $E_{n,n-1}(a_{n,n-1})$, places $a_{n,n-1}$ at position $(n,n-1)$. These two entries are unaffected by the subsequent row operations, and the same holds for all the other matrix multiplications/row operations. You can see that the matrix is "filled in" starting from the lower right corner, going left and up until the diagonal is reached.



Therefore $A$ is a product of lower triangular matrices, each having a lower triangular inverse; hence $A^{-1}$ is lower triangular as well.
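
As a small numerical illustration of this construction, assuming NumPy (the exact ordering of the factors below is my own choice; it realizes the same idea of filling the columns from right to left, not necessarily the exact product displayed above):

    import numpy as np

    def E_scale(n, i, c):
        # E_i(c): identity with row i multiplied by c (0-based rows, c != 0)
        E = np.eye(n)
        E[i, i] = c
        return E

    def E_add(n, i, j, d):
        # E_{ij}(d): identity with d times row j added to row i (i != j)
        E = np.eye(n)
        E[i, j] = d
        return E

    rng = np.random.default_rng(2)
    n = 4
    A = np.tril(rng.normal(size=(n, n)))
    A[np.diag_indices(n)] = rng.uniform(1.0, 2.0, size=n)   # nonzero diagonal

    # rebuild A from the identity, one column at a time from right to left:
    # while row j is still e_j, add a_ij * (row j) to each row i below it,
    # then scale row j by a_jj
    P = np.eye(n)
    for j in range(n - 1, -1, -1):
        for i in range(j + 1, n):
            P = E_add(n, i, j, A[i, j]) @ P
        P = E_scale(n, j, A[j, j]) @ P
    assert np.allclose(P, A)

    # every factor is lower triangular with a lower triangular inverse,
    # and indeed A^{-1} comes out lower triangular
    A_inv = np.linalg.inv(A)
    assert np.allclose(A_inv, np.tril(A_inv))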






answered Dec 23 '18 at 22:35 by egreg






















