Finding the eigenvalues and bases for the eigenspaces of linear transformations with non square matrices











Let $A$ and $B$ be matrices such that $$A = \begin{pmatrix}0 & 1\\ 15 & 2\end{pmatrix},\quad B = \begin{pmatrix}0 & 2 & -4 \\ 2 & -3 & -2 \\ -4 & -2 & 0 \end{pmatrix}$$ Let $f$ and $g$ be linear transformations such that $$f, g: M_{3\times 2}(\mathbb{R}) \rightarrow M_{3\times 2}(\mathbb{R})$$
where $f(X) = BX - XA$ and $g(X) = BXA$ for every $3\times 2$ real matrix $X$. I want to find the eigenvalues and bases for the eigenspaces of these transformations.



EDIT: While @xbh's method is correct, it involves computing the eigenvalues of a $6\times 6$ matrix. I've recently learned of a method that solves this problem with less computation, but I only know half of it.



This method relies on the observation that $$\begin{pmatrix}\lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}\begin{pmatrix}a_{11} & a_{12}\\ a_{21} & a_{22} \\ a_{31} & a_{32}\end{pmatrix}= \begin{pmatrix}\lambda_1a_{11} & \lambda_1a_{12} \\ \lambda_2a_{21} & \lambda_2a_{22} \\ \lambda_3a_{31} & \lambda_3a_{32} \end{pmatrix}$$ and $$\begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}\begin{pmatrix}\mu_1 & 0 \\ 0 & \mu_2 \end{pmatrix} = \begin{pmatrix} \mu_1a_{11} & \mu_2a_{12} \\ \mu_1a_{21} & \mu_2a_{22} \\ \mu_1a_{31} & \mu_2a_{32} \end{pmatrix}$$
Also, it is easy enough to see that the eigenvalues of $A$ are $\mu = -3, 5$ with associated eigenvectors $v_1 = (1, -3)$ and $v_2 = (1, 5)$. For $B$, $\lambda = -4$ has a two-dimensional eigenspace spanned by $t_1 = (1,0,1)$ and $t_2 = (0,2,1)$, and for $\lambda = 5$, $t_3 = (-2,-1,2)$.
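As a quick sanity check (a NumPy sketch of my own, not part of the original post), the eigen-data stated above can be verified numerically:

```python
import numpy as np

A = np.array([[0, 1], [15, 2]])
B = np.array([[0, 2, -4], [2, -3, -2], [-4, -2, 0]])

# Eigenvalues of A are the roots of λ² − 2λ − 15 = (λ − 5)(λ + 3)
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [-3, 5])
# B is symmetric, with eigenvalue −4 of multiplicity two
assert np.allclose(np.sort(np.linalg.eigvalsh(B)), [-4, -4, 5])

# The stated eigenvectors satisfy Av = μv and Bt = λt
for vec, val, M in [((1, -3), -3, A), ((1, 5), 5, A),
                    ((1, 0, 1), -4, B), ((0, 2, 1), -4, B), ((-2, -1, 2), 5, B)]:
    v = np.array(vec, dtype=float)
    assert np.allclose(M @ v, val * v)
```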



Since $A$ and $B$ are $n\times n$ matrices with $n$ linearly independent eigenvectors, they are diagonalizable. Thus for $f$, $$A = ED_aE^{-1},\quad B = CD_bC^{-1}$$ and then $$CD_bC^{-1}X - XED_aE^{-1} = \delta X$$ $$\Longrightarrow D_b(C^{-1}XE) - (C^{-1}XE)D_a = \delta (C^{-1}XE)$$ From here I'm supposed to deduce that the eigenvalues of $f$ are every difference $\lambda - \mu$; however, I don't understand how to reach this conclusion, since no single value of $\delta$ can make $$\delta\begin{pmatrix}a_{11} & a_{12}\\ a_{21} & a_{22} \\ a_{31} & a_{32}\end{pmatrix} = \begin{pmatrix}(\lambda_1-\mu_1)a_{11} & (\lambda_1-\mu_2)a_{12} \\ (\lambda_2-\mu_1)a_{21} & (\lambda_2-\mu_2)a_{22} \\ (\lambda_3-\mu_1)a_{31} & (\lambda_3-\mu_2)a_{32} \end{pmatrix}$$ hold for an arbitrary matrix, where $a_{ij}$ denotes the entries of $C^{-1}XE$.



Also I don't know how to find the eigenvectors with this method.
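For comparison, xbh's $6\times 6$ approach can be carried out numerically. Using the column-major identities $\operatorname{vec}(BX) = (I_2 \otimes B)\operatorname{vec}(X)$ and $\operatorname{vec}(XA) = (A^T \otimes I_3)\operatorname{vec}(X)$, a short NumPy sketch (my addition, not from the original post) builds the matrix of $f$ and confirms its eigenvalues are exactly the differences $\lambda - \mu$:

```python
import numpy as np

A = np.array([[0, 1], [15, 2]])
B = np.array([[0, 2, -4], [2, -3, -2], [-4, -2, 0]])

# 6×6 matrix representing f(X) = BX − XA on column-major vec(X)
M_f = np.kron(np.eye(2), B) - np.kron(A.T, np.eye(3))

mu = np.linalg.eigvals(A).real    # eigenvalues of A: −3, 5
lam = np.linalg.eigvals(B).real   # eigenvalues of B: −4, −4, 5

# The spectrum of f is every difference λ − μ
expected = sorted(l - m for l in lam for m in mu)
actual = sorted(np.linalg.eigvals(M_f).real)
assert np.allclose(actual, expected)
```

Here the differences come out as $\{-9, -9, -1, -1, 0, 8\}$, matching the two-dimensional eigenspace for $\delta = -1$ found below.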










  • Recall the definition of eigenvalues and eigenvectors for a linear transformation. Note that "vectors" are not merely single-column matrices.
    – xbh
    2 days ago












  • Okay, you're right. In this case we can call $X$ a vector because it is an element of the vector space $M_{3\times 2}(\mathbb{R})$, but this is only a technical correction. I still don't know how to solve for $X$.
    – Ryan Greyling
    2 days ago






  • Seems like a "hard" computation… let $X = [x_{j,k}]_{3\times 2}$, compute $f(X)$, and solve $f(X) = cX$. Or take a basis of $M_{3,2}(\Bbb R)$ and find the matrix of $f$ in this basis. Then you can apply your knowledge of solving $My = cy$, where $M \in \mathrm M_6(\Bbb R)$ and $y \in \mathrm M_{6,1}(\Bbb R)$.
    – xbh
    2 days ago










  • Oh I see, that'll definitely work. I'll post an answer once I've done the computations. I understand that the eigenvectors for $My = cy$ will have to be converted from $6$-dimensional column vectors back to $3\times 2$ matrices using my chosen basis for $M_{3,2}(\mathbb{R})$. But just to make sure, the eigenvalues will remain the same, right?
    – Ryan Greyling
    2 days ago










  • Yeah, just a formal change; this is an application of basic linear algebra notions. The eigenvalues are determined entirely by the transformations themselves.
    – xbh
    2 days ago















linear-algebra






edited yesterday

























asked Nov 14 at 4:15









Ryan Greyling


1 Answer

















Using the method described in the edited question, we end up with six equations: $$(\lambda_1-\mu_1)a_{11}=\delta a_{11}$$ $$(\lambda_1-\mu_2)a_{12}=\delta a_{12}$$ $$(\lambda_2-\mu_1)a_{21}=\delta a_{21}$$ $$(\lambda_2-\mu_2)a_{22}=\delta a_{22}$$ $$(\lambda_3-\mu_1)a_{31}=\delta a_{31}$$ $$(\lambda_3-\mu_2)a_{32}=\delta a_{32}$$ where $a_{ij}$ denotes the entries of $C^{-1}XE$ and $\delta$ ranges over the eigenvalues of $f$ we are trying to find. From this list of equations it is easy to see that the eigenvalues $\delta$ are exactly the differences $\lambda-\mu$. We can then find the eigenvectors by solving for the $a_{ij}$ and then solving $\begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix} = C^{-1}XE$ for $X$.



For example, with the given eigenvalues and eigenvectors of $A$ and $B$, consider $\delta=-1$: $$-a_{11}=-a_{11}$$ $$-9a_{12}=-a_{12}$$ $$-a_{21}=-a_{21}$$ $$-9a_{22}=-a_{22}$$ $$8a_{31}=-a_{31}$$ $$0\cdot a_{32}=-a_{32}$$ Every $a_{ij}$ other than $a_{11}$ and $a_{21}$ must be $0$, so the eigenvectors for $\delta=-1$ are of the form $\begin{pmatrix}a_{11} & 0 \\ a_{21} & 0 \\ 0 & 0 \end{pmatrix}$, which can be expressed in the basis $\left\{\begin{pmatrix}1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix},\begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}\right\}$.



For example, to solve $\begin{pmatrix}1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}=C^{-1}XE$, simply compute $X = C\begin{pmatrix}1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}E^{-1}$.
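To make the conversion back to $X$ concrete, here is a NumPy sketch (my addition) that takes $C$ with columns $t_1, t_2, t_3$ and $E$ with columns $v_1, v_2$ from the question, recovers $X$, and checks it is an eigenvector of $f$:

```python
import numpy as np

A = np.array([[0, 1], [15, 2]])
B = np.array([[0, 2, -4], [2, -3, -2], [-4, -2, 0]])

# Columns of C are the eigenvectors t1, t2, t3 of B (λ = −4, −4, 5);
# columns of E are the eigenvectors v1, v2 of A (μ = −3, 5).
C = np.array([[1, 0, -2], [0, 2, -1], [1, 1, 2]], dtype=float)
E = np.array([[1, 1], [-3, 5]], dtype=float)

a = np.zeros((3, 2))
a[0, 0] = 1.0                    # first basis matrix for δ = −1
X = C @ a @ np.linalg.inv(E)     # recover X from a = C⁻¹XE

# X is indeed an eigenvector of f(X) = BX − XA with eigenvalue δ = −1
assert np.allclose(B @ X - X @ A, -X)
```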



As for $g$, the eigenvalues are every product $\lambda\mu$, and the process for finding the eigenvectors is very similar.
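The same Kronecker-product check works for $g$ (again a NumPy sketch of my own): with column-major vec, $\operatorname{vec}(BXA) = (A^T \otimes B)\operatorname{vec}(X)$, and the eigenvalues come out as the products $\lambda\mu$:

```python
import numpy as np

A = np.array([[0, 1], [15, 2]])
B = np.array([[0, 2, -4], [2, -3, -2], [-4, -2, 0]])

# 6×6 matrix representing g(X) = BXA on column-major vec(X)
M_g = np.kron(A.T, B)

mu = np.linalg.eigvals(A).real
lam = np.linalg.eigvals(B).real

# The spectrum of g is every product λ·μ: {12, 12, −20, −20, −15, 25}
expected = sorted(l * m for l in lam for m in mu)
actual = sorted(np.linalg.eigvals(M_g).real)
assert np.allclose(actual, expected)
```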






        answered yesterday









        Ryan Greyling
