Square root of $-I_2$












I would like to find all matrices $N \in M_2(\mathbb R)$ such that $N^2 = -I_2$.

To start with, I know that
$N_0=\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}$
works, and we can prove that every matrix $N$ similar to $N_0$ works.

That is: let $N \in M_2(\mathbb R)$; if there exists $P \in \mathrm{GL}_2(\mathbb R)$ such that $N = PN_0P^{-1}$, then $N^2 = -I_2$.

My question is: is the converse true?

Are all matrices $N \in M_2(\mathbb R)$ such that $N^2 = -I_2$ similar to $N_0$?
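For a concrete sanity check, here is a small Python sketch (the 2×2 helpers `mul` and `inv2` are ad hoc, written for this illustration, not from any library) verifying that $N_0$ squares to $-I_2$ and that a conjugate $PN_0P^{-1}$ does too:

```python
# Sketch: verify N0^2 = -I2 and that P N0 P^{-1} also squares to -I2.
# mul/inv2 are ad-hoc 2x2 helpers on plain nested lists.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    a, b = P[0]
    c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

N0 = [[0, -1], [1, 0]]
assert mul(N0, N0) == [[-1, 0], [0, -1]]      # N0^2 = -I2

P = [[2, 1], [1, 1]]                          # any invertible real matrix
N = mul(mul(P, N0), inv2(P))                  # N = P N0 P^{-1}
assert mul(N, N) == [[-1.0, 0.0], [0.0, -1.0]]  # the conjugate also works
```

The conjugation check is exactly the forward direction stated above; the question is whether every solution arises this way.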


















  – LeoDucas (Oct 27 '18 at 21:02): Do you know about diagonalization already?
  – anomaly (Oct 27 '18 at 21:13): Use Jordan normal form and the fact that $-1$ is central.
















linear-algebra matrices






edited Nov 24 '18 at 20:21







asked Oct 27 '18 at 20:47









Euler Pythagoras




3 Answers
For $A, B \in M_n(\mathbb{C})$, write $A \sim B$ if $A$ and $B$ are similar over $\mathbb{C}$.

Assume that $N \in M_2(\mathbb{R})$ satisfies $N^2 + I_2 = 0$. Then the minimal polynomial of $N$ is $X^2+1$, which factors into distinct linear factors over $\mathbb{C}$. So $N$ is diagonalizable over $\mathbb{C}$ with eigenvalues $i$ and $-i$, i.e., $N \sim \operatorname{diag}(i, -i)$. For the same reason, $N_0 \sim \operatorname{diag}(i, -i)$, and hence $N \sim N_0$. The desired claim then follows from the following proposition.

Proposition. Let $A, B \in M_n(\mathbb{R})$ and suppose that $A \sim B$. Then $A$ and $B$ are similar over $\mathbb{R}$.

Proof. Choose $P \in \mathrm{GL}_n(\mathbb{C})$ such that $A = PBP^{-1}$. Write $P = Q+iR$ with $Q, R \in M_n(\mathbb{R})$ and define $P_z = Q + zR$ for $z \in \mathbb{C}$.

  1. Since $AQ = QB$ and $AR = RB$, we have $AP_z = P_z B$ for all $z \in \mathbb{C}$;
  2. $\det(P_i) = \det(P) \neq 0$, hence $\det(P_z)$ is a non-zero polynomial in $\mathbb{R}[z]$.

So we can find $x \in \mathbb{R}$ such that $P_x$ is invertible. Then $P_x \in \mathrm{GL}_n(\mathbb{R})$ and $A = P_x BP_x^{-1}$, hence $A$ and $B$ are similar over $\mathbb{R}$, as required. $\Box$
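The proof's construction can be traced numerically. The sketch below (plain-Python 2×2 helpers, written only for this illustration) builds a genuinely complex similarity $P = Q_r + iR$ between two real matrices and then scans real values $x$ until $P_x = Q_r + xR$ is invertible:

```python
# Sketch of the proposition's construction: from a complex similarity
# A = P B P^{-1} with P = Qr + iR, recover a real one A = P_x B P_x^{-1}.
# mul/det2/inv2/add are ad-hoc 2x2 helpers on nested lists.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(P):
    return P[0][0] * P[1][1] - P[0][1] * P[1][0]

def inv2(P):
    d = det2(P)
    return [[P[1][1] / d, -P[0][1] / d], [-P[1][0] / d, P[0][0] / d]]

def add(A, B, s):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

B = [[0, -1], [1, 0]]            # N0
Q = [[2, 1], [1, 1]]             # an invertible real matrix
A = mul(mul(Q, B), inv2(Q))      # A = Q B Q^{-1}

# A genuinely complex P with A = P B P^{-1}: P = Q(2I + iB) = Qr + iR,
# which works because 2I + iB commutes with B and is invertible.
Qr = add(Q, Q, 1)                # real part, 2Q
R = mul(Q, B)                    # imaginary part, Q N0

for x in [0.0, 1.0, 2.0]:        # scan reals until det(P_x) != 0
    Px = add(Qr, R, x)           # P_x = Qr + x R
    assert mul(A, Px) == mul(Px, B)            # A P_z = P_z B for every z
    if abs(det2(Px)) > 1e-9:
        assert mul(mul(Px, B), inv2(Px)) == A  # a real similarity, as claimed
        break
```

The intertwining relation $AP_z = P_zB$ holds for every $z$; only the invertibility of $P_x$ has to be hunted for, and the non-vanishing polynomial $\det(P_z)$ guarantees the hunt succeeds.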






answered Oct 28 '18 at 4:49 – Sangchul Lee

  – Euler Pythagoras (Oct 28 '18 at 11:30): You didn't have to prove your proposition, which is a pretty classical exercise, but thanks.



















It's not very hard to find all matrices $N \in M_2(\Bbb R)$ such that

$N^2 = -I; \tag 1$

for let

$N = \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22}\end{bmatrix}; \tag 2$

then

$\begin{bmatrix} n_{11}^2 + n_{12}n_{21} & n_{11}n_{12} + n_{12} n_{22} \\ n_{21}n_{11} + n_{22} n_{21} & n_{21}n_{12} + n_{22}^2 \end{bmatrix} = \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22}\end{bmatrix}\begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22}\end{bmatrix} = N^2 = -I; \tag 3$

therefore,

$n_{11}^2 + n_{12}n_{21} = n_{21}n_{12} + n_{22}^2 = -1, \tag 4$

$(n_{11} + n_{22})n_{12} = (n_{11} + n_{22})n_{21} = 0; \tag 5$

we see from (5) that

$\text{Tr}(N) = n_{11} + n_{22} \tag 6$

is a determinative, classifying factor; if

$\text{Tr}(N) \ne 0, \tag 7$

then from (5),

$n_{21} = n_{12} = 0, \tag 8$

whence from (4),

$n_{11}^2 = n_{22}^2 = -1; \tag 9$

clearly there are no real solutions in this case (7); thus we take

$\text{Tr}(N) = 0, \tag{10}$

and see that

$-n_{22} = n_{11} = \alpha \in \Bbb R; \tag{11}$

now from (4),

$n_{12}n_{21} \ne 0, \tag{12}$

implying

$n_{12} \ne 0 \ne n_{21}; \tag{13}$

thus we may write

$n_{21} = -\dfrac{1 + \alpha^2}{n_{12}}; \tag{14}$

if we set

$n_{12} = \beta \ne 0, \tag{15}$

then we may write

$n_{21} = -\dfrac{1 + \alpha^2}{\beta}; \tag{16}$

taken together, (11)-(16) provide a parameterized family of $2 \times 2$ matrices

$N(\alpha, \beta) = \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix} \tag{17}$

such that

$N^2(\alpha, \beta) = -I. \tag{18}$

It is easy to see that the set of admissible parameters is characterized by

$\alpha \in \Bbb R, \; 0 \ne \beta \in \Bbb R. \tag{19}$

Careful scrutiny of the above argument reveals that we have in fact demonstrated that every matrix in $M_2(\Bbb R)$ satisfying (1) is of the form (17); therefore our parametric representation (17) is complete.

Since any $2 \times 2$ real matrix satisfying (1) is of the form (17), and we see that

$\text{Tr}(N(\alpha, \beta)) = \alpha + (-\alpha) = 0, \tag{20}$

and

$\det(N(\alpha, \beta)) = -\alpha^2 + \beta \dfrac{1 + \alpha^2}{\beta} = 1, \tag{21}$

and thus all have characteristic polynomial

$\chi_N(x) = x^2 + 1, \tag{22}$

we may affirm that the eigenvalues of any $N(\alpha, \beta)$ are $\pm i$, and the eigenvectors satisfy

$\begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \pm i\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \; v_1, v_2 \in \Bbb C; \tag{23}$

then with eigenvalue $i$,

$\alpha v_1 + \beta v_2 = i v_1, \tag{24}$

which since $\beta \ne 0$ yields

$v_2 = \dfrac{i - \alpha}{\beta} v_1; \tag{25}$

since eigenvectors are only determined up to a scaling factor, we can in fact take $v_1 = 1$ and then

$\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \dfrac{i - \alpha}{\beta} \end{pmatrix}; \tag{26}$

since $N(\alpha, \beta)$ is a real matrix, it follows that the eigenvector associated with $-i$ is

$\overline{\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}} = \begin{pmatrix} v_1 \\ \bar v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \dfrac{-i - \alpha}{\beta} \end{pmatrix}; \tag{27}$

the diagonalizing matrix is thus

$V = \begin{bmatrix} 1 & 1 \\ \dfrac{i - \alpha}{\beta} & \dfrac{-i - \alpha}{\beta} \end{bmatrix}; \tag{28}$

with

$\det(V) = -\dfrac{2i}{\beta}, \tag{29}$

we have

$V^{-1} = \dfrac{\beta i}{2}\begin{bmatrix} \dfrac{-i - \alpha}{\beta} & -1 \\ -\dfrac{i - \alpha}{\beta} & 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1 - i\alpha}{2} & -\dfrac{\beta i}{2} \\ \dfrac{1 + i\alpha}{2} & \dfrac{\beta i}{2} \end{bmatrix}; \tag{30}$

therefore

$V^{-1} N(\alpha, \beta)V = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}; \tag{31}$

finally, $N$ from (1) itself may be so represented via application of the above formulas; therefore any $N(\alpha, \beta)$ is similar to $N$, since each is similar to the diagonal matrix $\text{diag}(i, -i)$.

We have thus reached an affirmative answer to our OP Euler Pythagoras' closing question, even if our path has been a tad on the round-about side. We did, however, discover the formula (17) for solutions to (1), which I think is pretty cool.
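The parametrized family $N(\alpha, \beta)$ and its diagonalization by the eigenvector matrix $V$ derived above can be checked numerically. Below is a plain-Python sketch (the 2×2 helper `mul` and the sample values of $\alpha, \beta$ are illustrative only):

```python
# Sketch: check that N(alpha, beta) squares to -I, has trace 0 and det 1,
# and is diagonalized to diag(i, -i) by the eigenvector matrix V above.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def N(a, b):
    return [[a, b], [-(1 + a * a) / b, -a]]

a, b = 0.5, 2.0                  # sample admissible parameters (b != 0)
M = N(a, b)
M2 = mul(M, M)
assert all(abs(M2[i][j] - (-1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))                # M^2 = -I
assert M[0][0] + M[1][1] == 0                                  # trace zero
assert abs(M[0][0] * M[1][1] - M[0][1] * M[1][0] - 1) < 1e-12  # det one

V = [[1, 1], [(1j - a) / b, (-1j - a) / b]]   # columns: eigenvectors for +i, -i
detV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / detV, -V[0][1] / detV],
        [-V[1][0] / detV, V[0][0] / detV]]
D = mul(mul(Vinv, M), V)
assert abs(D[0][0] - 1j) < 1e-12 and abs(D[1][1] + 1j) < 1e-12  # diag(i, -i)
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```

Varying $\alpha$ and $\beta \ne 0$ over other real values exercises the whole family.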






  – mathreadler (Oct 28 '18 at 7:55): It is indeed pretty cool. I am impressed by the long answer.



















Something more general holds. If $f(t) = t^n + a_{n-1}t^{n-1} + \dots + a_1t + a_0$ is a monic polynomial, then the matrix

$$ C_f = \begin{pmatrix}
0 & 0 & 0 & \dots & 0 & -a_0 \\
1 & 0 & 0 & \dots & 0 & -a_1 \\
0 & 1 & 0 & \dots & 0 & -a_2 \\
0 & 0 & 1 & \dots & 0 & -a_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \dots & 1 & -a_{n-1}
\end{pmatrix}
$$

is called the companion matrix of $f$. A simple calculation shows that $C_f$ satisfies $f$, meaning that
$$ f(C_f) = C_f^n + a_{n-1}C_f^{n-1} + \dots + a_1 C_f + a_0I = 0. $$

Moreover, if $C_f$ satisfies any other polynomial equation $g(C_f) = 0$, then $f$ divides $g$. We say that $f$ is the minimal polynomial of $C_f$.

There's a result in algebra called the rational normal form (or "canonical form") that says that every matrix is similar to a block diagonal matrix of companion matrices. Specifically:

If $A$ is a matrix that satisfies a polynomial $f$ (for example its characteristic polynomial), then $A$ is similar to a block diagonal matrix whose blocks are companion matrices, i.e. $$A \sim \operatorname{diag}(C_{g_1},\dots,C_{g_r}).$$
Moreover:

  1. $g_i$ divides $f$ for $i = 1, \dots, r$;
  2. we can arrange that $g_i$ divides $g_{i + 1}$ for $i = 1,\dots,r-1$;
  3. if 2. holds, you can check that $g_r$ is the minimal polynomial of $A$ in the sense described above.

One idea here is that if $h$ is any polynomial, then
$$h(\operatorname{diag}(C_{g_1},\dots,C_{g_r})) = \operatorname{diag}(h(C_{g_1}),\dots,h(C_{g_r})).$$
This is true generally for block-diagonal matrices.

The point is that for the polynomial $f(t) = t^2 + 1$, the only real monic factors of $f$ are $1$ and $t^2 + 1$. On the other hand, $C_1$ is the empty $0\times 0$ matrix, which leaves only $C_{t^2 + 1}$, and that is exactly what you called $N_0$.
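A short sketch makes the companion-matrix facts concrete (plain Python; `companion` and `poly_eval` are ad-hoc helper names chosen for this illustration): for $f(t) = t^2 + 1$, the companion matrix is exactly $N_0$, and $f(C_f) = 0$.

```python
# Sketch: build the companion matrix of a monic polynomial and
# verify that C_f satisfies its own polynomial, f(C_f) = 0.

def companion(coeffs):
    # coeffs = [a_0, ..., a_{n-1}] of f(t) = t^n + a_{n-1} t^{n-1} + ... + a_0
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1              # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -coeffs[i]     # last column: -a_0, ..., -a_{n-1}
    return C

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_eval(coeffs, M):
    # Horner evaluation of f(M) = M^n + a_{n-1} M^{n-1} + ... + a_0 I
    n = len(M)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    result = I
    for a in reversed(coeffs):       # result <- result * M + a * I
        result = mul(result, M)
        result = [[result[i][j] + a * I[i][j] for j in range(n)]
                  for i in range(n)]
    return result

C = companion([1, 0])                # f(t) = t^2 + 1
assert C == [[0, -1], [1, 0]]        # exactly N_0
assert poly_eval([1, 0], C) == [[0, 0], [0, 0]]   # f(C_f) = 0
```

The same check works for any monic polynomial, e.g. `companion([2, 0, 1])` for $t^3 + t^2 + 2$.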







    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    4












    $begingroup$

    For $A, B in M_n(mathbb{C})$, write $A sim B$ if $A$ and $B$ are similar in $mathbb{C}$.



    Assume that $N in M_2(mathbb{R})$ satisfies $N^2 + I_2 = 0$. Then the minimal polynomial of $N$ is $X^2+1$ and this factors into distinct linear factors in $mathbb{C}$. So $N$ is diagonalizable in $mathbb{C}$ with eigenvalues $i$ and $-i$, i.e., $N sim operatorname{diag}(i, -i) $ in $mathbb{C}$. By the same reason, $N_0 sim operatorname{diag}(i, -i)$ and hence $N sim N_0$. Then the desired claim follows from the following proposition.




    Proposition. Let $A, B in M_n(mathbb{R})$. Suppose that $A sim B$. Then $A$ and $B$ are similar in $mathbb{R}$.




    Proof. Choose $P in mathrm{GL}_n(mathbb{C})$ such that $A = PBP^{-1}$. Write $P = Q+iR$ for $Q, R in M_n(mathbb{R})$ and define $P_z = Q + zR$ for $z in mathbb{C}$.




    1. Since $AQ = QB$ and $AR = RB$, we have $AP_z = P_z B$ for all $z in mathbb{C}$,


    2. $det(P_i) = det(P) neq 0$, hence $det(P_z)$ is a non-zero polynomial in $mathbb{R}[z]$.



    So we can find $x in mathbb{R}$ such that $P_x$ is invertible. Then $P_x in operatorname{GL}_2(mathbb{R})$ and $A = P_x BP_x^{-1}$, hence $A$ and $B$ are similar in $mathbb{R}$ as required. $Box$






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      You didn't have to prove your proposition, which is a pretty classical exercise but thanks.
      $endgroup$
      – Euler Pythagoras
      Oct 28 '18 at 11:30
















    4












    $begingroup$

    For $A, B in M_n(mathbb{C})$, write $A sim B$ if $A$ and $B$ are similar in $mathbb{C}$.



    Assume that $N in M_2(mathbb{R})$ satisfies $N^2 + I_2 = 0$. Then the minimal polynomial of $N$ is $X^2+1$ and this factors into distinct linear factors in $mathbb{C}$. So $N$ is diagonalizable in $mathbb{C}$ with eigenvalues $i$ and $-i$, i.e., $N sim operatorname{diag}(i, -i) $ in $mathbb{C}$. By the same reason, $N_0 sim operatorname{diag}(i, -i)$ and hence $N sim N_0$. Then the desired claim follows from the following proposition.




    Proposition. Let $A, B in M_n(mathbb{R})$. Suppose that $A sim B$. Then $A$ and $B$ are similar in $mathbb{R}$.




    Proof. Choose $P in mathrm{GL}_n(mathbb{C})$ such that $A = PBP^{-1}$. Write $P = Q+iR$ for $Q, R in M_n(mathbb{R})$ and define $P_z = Q + zR$ for $z in mathbb{C}$.




    1. Since $AQ = QB$ and $AR = RB$, we have $AP_z = P_z B$ for all $z in mathbb{C}$,


    2. $det(P_i) = det(P) neq 0$, hence $det(P_z)$ is a non-zero polynomial in $mathbb{R}[z]$.



    So we can find $x in mathbb{R}$ such that $P_x$ is invertible. Then $P_x in operatorname{GL}_2(mathbb{R})$ and $A = P_x BP_x^{-1}$, hence $A$ and $B$ are similar in $mathbb{R}$ as required. $Box$






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      You didn't have to prove your proposition, which is a pretty classical exercise but thanks.
      $endgroup$
      – Euler Pythagoras
      Oct 28 '18 at 11:30














    4












    4








    4





    $begingroup$

    For $A, B in M_n(mathbb{C})$, write $A sim B$ if $A$ and $B$ are similar in $mathbb{C}$.



    Assume that $N in M_2(mathbb{R})$ satisfies $N^2 + I_2 = 0$. Then the minimal polynomial of $N$ is $X^2+1$ and this factors into distinct linear factors in $mathbb{C}$. So $N$ is diagonalizable in $mathbb{C}$ with eigenvalues $i$ and $-i$, i.e., $N sim operatorname{diag}(i, -i) $ in $mathbb{C}$. By the same reason, $N_0 sim operatorname{diag}(i, -i)$ and hence $N sim N_0$. Then the desired claim follows from the following proposition.




    Proposition. Let $A, B in M_n(mathbb{R})$. Suppose that $A sim B$. Then $A$ and $B$ are similar in $mathbb{R}$.




    Proof. Choose $P in mathrm{GL}_n(mathbb{C})$ such that $A = PBP^{-1}$. Write $P = Q+iR$ for $Q, R in M_n(mathbb{R})$ and define $P_z = Q + zR$ for $z in mathbb{C}$.




    1. Since $AQ = QB$ and $AR = RB$, we have $AP_z = P_z B$ for all $z in mathbb{C}$,


    2. $det(P_i) = det(P) neq 0$, hence $det(P_z)$ is a non-zero polynomial in $mathbb{R}[z]$.



    So we can find $x in mathbb{R}$ such that $P_x$ is invertible. Then $P_x in operatorname{GL}_2(mathbb{R})$ and $A = P_x BP_x^{-1}$, hence $A$ and $B$ are similar in $mathbb{R}$ as required. $Box$






    share|cite|improve this answer











    $endgroup$



    For $A, B in M_n(mathbb{C})$, write $A sim B$ if $A$ and $B$ are similar in $mathbb{C}$.



    Assume that $N in M_2(mathbb{R})$ satisfies $N^2 + I_2 = 0$. Then the minimal polynomial of $N$ is $X^2+1$ and this factors into distinct linear factors in $mathbb{C}$. So $N$ is diagonalizable in $mathbb{C}$ with eigenvalues $i$ and $-i$, i.e., $N sim operatorname{diag}(i, -i) $ in $mathbb{C}$. By the same reason, $N_0 sim operatorname{diag}(i, -i)$ and hence $N sim N_0$. Then the desired claim follows from the following proposition.




    Proposition. Let $A, B in M_n(mathbb{R})$. Suppose that $A sim B$. Then $A$ and $B$ are similar in $mathbb{R}$.




    Proof. Choose $P in mathrm{GL}_n(mathbb{C})$ such that $A = PBP^{-1}$. Write $P = Q+iR$ for $Q, R in M_n(mathbb{R})$ and define $P_z = Q + zR$ for $z in mathbb{C}$.




    1. Since $AQ = QB$ and $AR = RB$, we have $AP_z = P_z B$ for all $z in mathbb{C}$,


    2. $det(P_i) = det(P) neq 0$, hence $det(P_z)$ is a non-zero polynomial in $mathbb{R}[z]$.



    So we can find $x in mathbb{R}$ such that $P_x$ is invertible. Then $P_x in operatorname{GL}_2(mathbb{R})$ and $A = P_x BP_x^{-1}$, hence $A$ and $B$ are similar in $mathbb{R}$ as required. $Box$







    share|cite|improve this answer














    share|cite|improve this answer



    share|cite|improve this answer








    edited Dec 19 '18 at 0:07









    Namaste

    1




    1










    answered Oct 28 '18 at 4:49









    Sangchul LeeSangchul Lee

    96.3k12171282




    96.3k12171282












    • $begingroup$
      You didn't have to prove your proposition, which is a pretty classical exercise but thanks.
      $endgroup$
      – Euler Pythagoras
      Oct 28 '18 at 11:30


















    • $begingroup$
      You didn't have to prove your proposition, which is a pretty classical exercise but thanks.
      $endgroup$
      – Euler Pythagoras
      Oct 28 '18 at 11:30
















    $begingroup$
    You didn't have to prove your proposition, which is a pretty classical exercise but thanks.
    $endgroup$
    – Euler Pythagoras
    Oct 28 '18 at 11:30




    $begingroup$
    You didn't have to prove your proposition, which is a pretty classical exercise but thanks.
    $endgroup$
    – Euler Pythagoras
    Oct 28 '18 at 11:30











    2












    $begingroup$

    It's not very hard to find all matrices $N in M_2(Bbb R)$ such that



    $N^2 = -I; tag 1$



    for let



    $N = begin{bmatrix} n_{11} & n_{12} \ n_{21} & n_{22}end{bmatrix}; tag 2$



    then



    $begin{bmatrix} n_{11}^2 + n_{12}n_{21} & n_{11}n_{12} + n_{12} n_{22} \ n_{21}n_{11} + n_{22} n_{21} & n_{21}n_{12} + n_{22}^2 end{bmatrix} = begin{bmatrix} n_{11} & n_{12} \ n_{21} & n_{22}end{bmatrix}begin{bmatrix} n_{11} & n_{12} \ n_{21} & n_{22}end{bmatrix} = N^2 = -I; tag 3$



    therefore,



    $n_{11}^2 + n_{12}n_{21} = n_{21}n_{12} + n_{22}^2 = -1, tag 4$



    $(n_{11} + n_{22})n_{12} = (n_{11} + n_{22})n_{21} = 0; tag 5$



    we see from this equation that



    $text{Tr}(N) = n_{11} + n_{22} tag 6$



    is a determinative, classifying factor; if



    $text{Tr}(N) ne 0, tag 7$



    then from (5),



    $n_{21} = n_{12} = 0, tag 8$



    whence from (4),



    $n_{11}^2 = n_{22}^2 = -1; tag 9$



    clearly there are no real solutions in this case (7); thus we take



    $text{Tr}(N) = 0, tag 8$



    and see that



    $-n_{22} = n_{11} = alpha in Bbb R; tag 9$



    now from (4),



    $n_{12}n_{21} ne 0, tag{10}$



    implying



    $n_{12} ne 0 ne n_{21}; tag{11}$



    thus we may write



    $n_{21} = -dfrac{1 + alpha^2}{n_{12}}; tag{12}$



    if we set



    $n_{12} = beta ne 0, tag{13}$



    then we may write



    $n_{21} = -dfrac{1 + alpha^2}{beta}; tag{14}$



    taken together, (9)-(14) provide a parameterized family of $2 times 2$ matrices



    $N(alpha, beta) = begin{bmatrix} alpha & beta \ -dfrac{1 + alpha^2}{beta} & -alpha end{bmatrix} tag{15}$



    such that



    $N^2(alpha, beta) = -I. tag{16}$



    It is easy to see that the set of admissible parameters is characterized by



    $alpha in Bbb R, ; 0 ne beta in Bbb R. tag{17}$



    Careful scrutiny of the above argument reveals that we have in fact demonstrated that every matrix in $M_2(Bbb R)$ satisfying (1) is of the form (15); therefore our parametric representation (15) is complete.



    Since any $2 times 2$ real matrix satisfting (1) is of the form (15), and we see that



    $text{Tr}(N(alpha, beta)) = alpha + (-alpha) = 0, tag{18}$



    and



    $det(N(alpha, beta)) = -alpha^2 + beta dfrac{1 + alpha^2}{beta} = 1, tag{19}$



    and thus all have characteristic polynomial



    $chi_N(x) = x^2 + 1, tag{20}$



    we may affirm the the eigenvalues of any $N(alpha, beta)$ are $pm i$, and the eigenvectors satisfy



    $begin{bmatrix} alpha & beta \ -dfrac{1 + alpha^2}{beta} & -alpha end{bmatrix} begin{pmatrix} v_1 \ v_2 end{pmatrix} = pm ibegin{pmatrix} v_1 \ v_2 end{pmatrix}, ; v_1, v_2 in Bbb C; tag{21}$



    then with eigenvalue $i$,



    $alpha v_1 + beta v_2 = i v_1, tag{22}$



    which since $beta ne 0$ yields



    $v_2 = dfrac{i - alpha}{beta} v_1; tag{23}$



    since eigenvectors are only determined up to a scaling factor, we can in fact take $v_1 = 1$ and then



    $ begin{pmatrix} v_1 \ v_2 end{pmatrix} =begin{pmatrix} 1 \dfrac{i - alpha}{beta} end{pmatrix}; tag{24}$



    since $N(alpha, beta)$ is a real matrix it follows that the eigenvector associated with $-i$ is



    $overline{ begin{pmatrix} v_1 \ v_2 end{pmatrix}} = begin{pmatrix} v_1 \ bar v_2 end{pmatrix} =begin{pmatrix} 1 \dfrac{- i - alpha}{beta} end{pmatrix}; tag{25}$



    the diagonalizing matrix is thus



    $V = begin{bmatrix} 1 & 1 \ dfrac{i - alpha}{beta} & dfrac{-i - alpha}{beta} end{bmatrix}, tag{26}$



    with



    $det(V) = -dfrac{2i}{beta}, tag{27}$



    we have



    $V^{-1} = dfrac{beta i}{2}begin{bmatrix} dfrac{ -i - alpha}{beta} & -1 \ -dfrac{i - alpha}{beta} & 1 end{bmatrix} = begin{bmatrix} dfrac{ 1 - ialpha}{2} & -dfrac{beta i}{2} \ -dfrac{-1 - ialpha}{2} & dfrac{beta i}{2} end{bmatrix}; tag{28}$



    therefore



    $V^{-1} N(alpha, beta)V = begin{bmatrix} i & 0 \ 0 & -i end{bmatrix}; tag{29}$



    finally, $N$ from (1) itself may be so represented via application of the above formulas; therefore any $N(alpha, beta)$ is similar to $N$, since each is similar to the diagonal matrix $text{diag}(i, -i)$.



    We have thus reached an affirmative answer to our OP Euler Pythagoras' closing question, even if our path has been a tad on the round-about side. We did however discover the formula (15) for solutions to (1), which I think is pretty cool.






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      It is indeed pretty cool. I am impressed by the long answer.
      $endgroup$
      – mathreadler
      Oct 28 '18 at 7:55
















    2












    $begingroup$

    It's not very hard to find all matrices $N in M_2(Bbb R)$ such that



    $N^2 = -I; tag 1$



    for let



    $N = begin{bmatrix} n_{11} & n_{12} \ n_{21} & n_{22}end{bmatrix}; tag 2$



    then



    $begin{bmatrix} n_{11}^2 + n_{12}n_{21} & n_{11}n_{12} + n_{12} n_{22} \ n_{21}n_{11} + n_{22} n_{21} & n_{21}n_{12} + n_{22}^2 end{bmatrix} = begin{bmatrix} n_{11} & n_{12} \ n_{21} & n_{22}end{bmatrix}begin{bmatrix} n_{11} & n_{12} \ n_{21} & n_{22}end{bmatrix} = N^2 = -I; tag 3$



    therefore,



    $n_{11}^2 + n_{12}n_{21} = n_{21}n_{12} + n_{22}^2 = -1, tag 4$



    $(n_{11} + n_{22})n_{12} = (n_{11} + n_{22})n_{21} = 0; tag 5$



    we see from this equation that



    $text{Tr}(N) = n_{11} + n_{22} tag 6$



    is a determinative, classifying factor; if



    $text{Tr}(N) ne 0, tag 7$



    then from (5),



    $n_{21} = n_{12} = 0, tag 8$



    whence from (4),



    $n_{11}^2 = n_{22}^2 = -1; tag 9$



    clearly there are no real solutions in this case (7); thus we take



    $text{Tr}(N) = 0, tag 8$



    and see that



    $-n_{22} = n_{11} = alpha in Bbb R; tag 9$



    now from (4),



    $n_{12}n_{21} ne 0, tag{10}$



    implying



    $n_{12} ne 0 ne n_{21}; tag{11}$



    thus we may write



    $n_{21} = -dfrac{1 + alpha^2}{n_{12}}; tag{12}$



    if we set



    $n_{12} = beta ne 0, tag{13}$



    then we may write



    $n_{21} = -dfrac{1 + alpha^2}{beta}; tag{14}$



    taken together, (9)-(14) provide a parameterized family of $2 times 2$ matrices



    $N(alpha, beta) = begin{bmatrix} alpha & beta \ -dfrac{1 + alpha^2}{beta} & -alpha end{bmatrix} tag{15}$



    such that



    $N^2(alpha, beta) = -I. tag{16}$



    It is easy to see that the set of admissible parameters is characterized by



    $alpha in Bbb R, ; 0 ne beta in Bbb R. tag{17}$



    Careful scrutiny of the above argument reveals that we have in fact demonstrated that every matrix in $M_2(Bbb R)$ satisfying (1) is of the form (15); therefore our parametric representation (15) is complete.



    Since any $2 times 2$ real matrix satisfting (1) is of the form (15), and we see that



    $text{Tr}(N(alpha, beta)) = alpha + (-alpha) = 0, tag{18}$



    and



    $det(N(alpha, beta)) = -alpha^2 + beta dfrac{1 + alpha^2}{beta} = 1, tag{19}$



    and thus all have characteristic polynomial



    $chi_N(x) = x^2 + 1, tag{20}$



    we may affirm the the eigenvalues of any $N(alpha, beta)$ are $pm i$, and the eigenvectors satisfy



    $begin{bmatrix} alpha & beta \ -dfrac{1 + alpha^2}{beta} & -alpha end{bmatrix} begin{pmatrix} v_1 \ v_2 end{pmatrix} = pm ibegin{pmatrix} v_1 \ v_2 end{pmatrix}, ; v_1, v_2 in Bbb C; tag{21}$



    then with eigenvalue $i$,



    $alpha v_1 + beta v_2 = i v_1, tag{22}$



    which since $beta ne 0$ yields



    $v_2 = dfrac{i - alpha}{beta} v_1; tag{23}$



    since eigenvectors are only determined up to a scaling factor, we can in fact take $v_1 = 1$ and then



    $ begin{pmatrix} v_1 \ v_2 end{pmatrix} =begin{pmatrix} 1 \dfrac{i - alpha}{beta} end{pmatrix}; tag{24}$



    since $N(alpha, beta)$ is a real matrix it follows that the eigenvector associated with $-i$ is



    $overline{ begin{pmatrix} v_1 \ v_2 end{pmatrix}} = begin{pmatrix} v_1 \ bar v_2 end{pmatrix} =begin{pmatrix} 1 \dfrac{- i - alpha}{beta} end{pmatrix}; tag{25}$



    the diagonalizing matrix is thus



    $V = begin{bmatrix} 1 & 1 \ dfrac{i - alpha}{beta} & dfrac{-i - alpha}{beta} end{bmatrix}, tag{26}$



    with



    $det(V) = -dfrac{2i}{beta}, tag{27}$



    we have



    $V^{-1} = dfrac{beta i}{2}begin{bmatrix} dfrac{ -i - alpha}{beta} & -1 \ -dfrac{i - alpha}{beta} & 1 end{bmatrix} = begin{bmatrix} dfrac{ 1 - ialpha}{2} & -dfrac{beta i}{2} \ -dfrac{-1 - ialpha}{2} & dfrac{beta i}{2} end{bmatrix}; tag{28}$



    therefore



    $V^{-1} N(alpha, beta)V = begin{bmatrix} i & 0 \ 0 & -i end{bmatrix}; tag{29}$



    finally, $N$ from (1) itself may be so represented via application of the above formulas; therefore any $N(alpha, beta)$ is similar to $N$, since each is similar to the diagonal matrix $text{diag}(i, -i)$.



    We have thus reached an affirmative answer to our OP Euler Pythagoras' closing question, even if our path has been a tad on the round-about side. We did however discover the formula (15) for solutions to (1), which I think is pretty cool.






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      It is indeed pretty cool. I am impressed by the long answer.
      $endgroup$
      – mathreadler
      Oct 28 '18 at 7:55














    2












    2








    2





    $begingroup$

    It's not very hard to find all matrices $N \in M_2(\Bbb R)$ such that

    $N^2 = -I; \tag 1$

    for let

    $N = \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix}; \tag 2$

    then

    $\begin{bmatrix} n_{11}^2 + n_{12}n_{21} & n_{11}n_{12} + n_{12} n_{22} \\ n_{21}n_{11} + n_{22} n_{21} & n_{21}n_{12} + n_{22}^2 \end{bmatrix} = \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix}\begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix} = N^2 = -I; \tag 3$

    therefore,

    $n_{11}^2 + n_{12}n_{21} = n_{21}n_{12} + n_{22}^2 = -1, \tag 4$

    $(n_{11} + n_{22})n_{12} = (n_{11} + n_{22})n_{21} = 0; \tag 5$

    we see from these equations that

    $\text{Tr}(N) = n_{11} + n_{22} \tag 6$

    is a determinative, classifying factor: if

    $\text{Tr}(N) \ne 0, \tag 7$

    then from (5),

    $n_{21} = n_{12} = 0,$

    whence from (4),

    $n_{11}^2 = n_{22}^2 = -1;$

    clearly there are no real solutions in case (7); thus we take

    $\text{Tr}(N) = 0 \tag 8$

    and see that

    $-n_{22} = n_{11} = \alpha \in \Bbb R; \tag 9$

    now from (4),

    $n_{12}n_{21} \ne 0, \tag{10}$

    implying

    $n_{12} \ne 0 \ne n_{21}; \tag{11}$

    thus we may write

    $n_{21} = -\dfrac{1 + \alpha^2}{n_{12}}; \tag{12}$

    if we set

    $n_{12} = \beta \ne 0, \tag{13}$

    then we may write

    $n_{21} = -\dfrac{1 + \alpha^2}{\beta}; \tag{14}$

    taken together, (9)-(14) provide a parameterized family of $2 \times 2$ matrices

    $N(\alpha, \beta) = \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix} \tag{15}$

    such that

    $N^2(\alpha, \beta) = -I. \tag{16}$

    It is easy to see that the set of admissible parameters is characterized by

    $\alpha \in \Bbb R, \; 0 \ne \beta \in \Bbb R. \tag{17}$
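    As a quick sanity check on (15)-(17) (this is my addition, not part of the original answer), one can verify numerically with NumPy that the family squares to $-I$ for arbitrary admissible parameters:

    ```python
    import numpy as np

    def N(alpha, beta):
        # The parameterized family (15) of real 2x2 square roots of -I;
        # admissible parameters per (17): alpha real, beta real and nonzero.
        return np.array([[alpha, beta],
                         [-(1 + alpha**2) / beta, -alpha]])

    # Check N(alpha, beta)^2 = -I, i.e. (16), for a few arbitrary choices.
    for alpha, beta in [(0.0, 1.0), (2.0, -3.0), (-1.5, 0.25)]:
        M = N(alpha, beta)
        assert np.allclose(M @ M, -np.eye(2))
    ```

    Note that $N(0, 1)$ recovers the $N_0$ of the question.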



    Careful scrutiny of the above argument reveals that we have in fact demonstrated that every matrix in $M_2(\Bbb R)$ satisfying (1) is of the form (15); our parametric representation (15) is therefore complete.

    Since any $2 \times 2$ real matrix satisfying (1) is of the form (15), and since

    $\text{Tr}(N(\alpha, \beta)) = \alpha + (-\alpha) = 0 \tag{18}$

    and

    $\det(N(\alpha, \beta)) = -\alpha^2 + \beta \dfrac{1 + \alpha^2}{\beta} = 1, \tag{19}$

    all such matrices have characteristic polynomial

    $\chi_N(x) = x^2 + 1; \tag{20}$

    we may thus affirm that the eigenvalues of any $N(\alpha, \beta)$ are $\pm i$, and the eigenvectors satisfy

    $\begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \pm i \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \; v_1, v_2 \in \Bbb C; \tag{21}$

    then with eigenvalue $i$,

    $\alpha v_1 + \beta v_2 = i v_1, \tag{22}$

    which, since $\beta \ne 0$, yields

    $v_2 = \dfrac{i - \alpha}{\beta} v_1; \tag{23}$

    since eigenvectors are only determined up to a scaling factor, we can in fact take $v_1 = 1$ and then

    $\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \dfrac{i - \alpha}{\beta} \end{pmatrix}; \tag{24}$

    since $N(\alpha, \beta)$ is a real matrix, it follows that the eigenvector associated with $-i$ is

    $\overline{\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}} = \begin{pmatrix} \bar v_1 \\ \bar v_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \dfrac{-i - \alpha}{\beta} \end{pmatrix}; \tag{25}$

    the diagonalizing matrix is thus

    $V = \begin{bmatrix} 1 & 1 \\ \dfrac{i - \alpha}{\beta} & \dfrac{-i - \alpha}{\beta} \end{bmatrix}; \tag{26}$

    with

    $\det(V) = -\dfrac{2i}{\beta}, \tag{27}$

    we have

    $V^{-1} = \dfrac{\beta i}{2}\begin{bmatrix} \dfrac{-i - \alpha}{\beta} & -1 \\ -\dfrac{i - \alpha}{\beta} & 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1 - i\alpha}{2} & -\dfrac{\beta i}{2} \\ \dfrac{1 + i\alpha}{2} & \dfrac{\beta i}{2} \end{bmatrix}; \tag{28}$

    therefore

    $V^{-1} N(\alpha, \beta) V = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}. \tag{29}$

    Finally, $N_0$ itself is just $N(0, 1)$, so the formulas above diagonalize it as well; therefore any $N(\alpha, \beta)$ is similar over $\Bbb C$ to $N_0$, since each is similar to the diagonal matrix $\text{diag}(i, -i)$. And since two real matrices that are similar over $\Bbb C$ are in fact similar over $\Bbb R$, the similarity can be realized by some real invertible $P$, as the question requires.

    We have thus reached an affirmative answer to our OP Euler Pythagoras' closing question, even if our path has been a tad on the round-about side. We did, however, discover the formula (15) for solutions to (1), which I think is pretty cool.
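    As a numerical illustration (my addition, not part of the original answer), one can build $V$ from (26) for arbitrary sample parameters and confirm the diagonalization (29) with NumPy:

    ```python
    import numpy as np

    alpha, beta = 1.0, 2.0  # arbitrary sample parameters, beta != 0

    # N(alpha, beta) from (15)
    Nm = np.array([[alpha, beta],
                   [-(1 + alpha**2) / beta, -alpha]])
    assert np.allclose(Nm @ Nm, -np.eye(2))  # (16) holds

    # Diagonalizing matrix V from (26): columns are the eigenvectors (24), (25)
    V = np.array([[1.0, 1.0],
                  [(1j - alpha) / beta, (-1j - alpha) / beta]])

    # (29): V^{-1} N V = diag(i, -i)
    D = np.linalg.inv(V) @ Nm @ V
    assert np.allclose(D, np.diag([1j, -1j]))
    ```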






    share|cite|improve this answer











    $endgroup$



    edited Oct 28 '18 at 4:15

























    answered Oct 28 '18 at 3:43









    Robert Lewis

    48.5k23167












    • 1




      $begingroup$
      It is indeed pretty cool. I am impressed by the long answer.
      $endgroup$
      – mathreadler
      Oct 28 '18 at 7:55














    1












    $begingroup$

    Something more general holds. If $f(t) = t^n + a_{n-1}t^{n-1} + \dots + a_1 t + a_0$ is a monic polynomial, then the matrix

    $$ C_f = \begin{pmatrix}
    0 & 0 & 0 & \dots & 0 & -a_0 \\
    1 & 0 & 0 & \dots & 0 & -a_1 \\
    0 & 1 & 0 & \dots & 0 & -a_2 \\
    0 & 0 & 1 & \dots & 0 & -a_3 \\
    \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
    0 & 0 & 0 & \dots & 1 & -a_{n-1}
    \end{pmatrix}
    $$

    is called the companion matrix of $f$. A simple calculation shows that $C_f$ satisfies $f$, meaning that
    $$ f(C_f) = C_f^n + a_{n-1}C_f^{n-1} + \dots + a_1 C_f + a_0 I = 0. $$

    Moreover, if $C_f$ satisfies any other polynomial equation $g(C_f) = 0$, then $f$ divides $g$. We say that $f$ is the minimal polynomial of $C_f$.
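    To make this concrete (a numerical sketch I'm adding, not part of the original answer), one can build $C_f$ for a sample monic polynomial and check $f(C_f) = 0$ with NumPy:

    ```python
    import numpy as np

    def companion(coeffs):
        """Companion matrix of the monic polynomial
        t^n + a_{n-1} t^{n-1} + ... + a_1 t + a_0,
        with coeffs = [a_0, a_1, ..., a_{n-1}] (convention from the answer)."""
        n = len(coeffs)
        C = np.zeros((n, n))
        C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones
        C[:, -1] = -np.asarray(coeffs)  # last column: -a_0, ..., -a_{n-1}
        return C

    def poly_eval(C, coeffs):
        # f(C) = C^n + a_{n-1} C^{n-1} + ... + a_1 C + a_0 I
        n = len(coeffs)
        result = np.linalg.matrix_power(C, n)
        for k, a in enumerate(coeffs):
            result = result + a * np.linalg.matrix_power(C, k)
        return result

    # Sample: f(t) = t^3 + 2t^2 - t + 5, so coeffs = [5, -1, 2]
    C = companion([5.0, -1.0, 2.0])
    assert np.allclose(poly_eval(C, [5.0, -1.0, 2.0]), np.zeros((3, 3)))
    ```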



    There's a result in algebra called rational normal form (or "rational canonical form") that says that every matrix is similar to a block diagonal matrix of companion matrices. Specifically:


    If $A$ is a matrix that satisfies a polynomial $f$ (for example its characteristic polynomial), then $A$ is similar to a block diagonal matrix whose blocks are companion matrices, i.e. $$A \sim \operatorname{diag}(C_{g_1}, \dots, C_{g_r}).$$
    Moreover:


    1. $g_i$ divides $f$ for $i = 1, \dots, r$.

    2. We can make it so that $g_i$ divides $g_{i+1}$ for $i = 1, \dots, r-1$.

    3. If 2. holds, you can check that $g_r$ is the minimal polynomial of $A$ in the sense described above.


    One idea here is that if $h$ is any polynomial, then
    $$h(\operatorname{diag}(C_{g_1}, \dots, C_{g_r})) = \operatorname{diag}(h(C_{g_1}), \dots, h(C_{g_r})).$$
    This is true generally for block-diagonal matrices.

    The point is that for the polynomial $f(t) = t^2 + 1$, the only monic real factors of $f$ are $1$ and $t^2 + 1$, since $t^2 + 1$ is irreducible over $\Bbb R$. On the other hand, $C_1$ is the empty $0 \times 0$ matrix, which just leaves us with $C_{t^2+1}$, and that is exactly what you called $N_0$. So every real $N$ with $N^2 = -I$ is similar to $N_0$.
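    And indeed (a final check, again my addition rather than the answer's): the companion matrix of $t^2 + 1$ squares to $-I$:

    ```python
    import numpy as np

    # Companion matrix of f(t) = t^2 + 1 (a_0 = 1, a_1 = 0),
    # which is the matrix the question calls N_0
    N0 = np.array([[0.0, -1.0],
                   [1.0,  0.0]])
    assert np.allclose(N0 @ N0, -np.eye(2))  # f(N0) = N0^2 + I = 0
    ```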






    share|cite|improve this answer











    $endgroup$


























        edited Oct 28 '18 at 5:35

























        answered Oct 28 '18 at 5:30









        Trevor Gunn

        15k32047

































