Are all bivectors in three dimensions simple?











I want to show that all bivectors in three dimensions are simple.



If I understand correctly, a bivector is simply an element of the two-fold exterior product $\bigwedge^2 V$ of a vector space $V$, right?



We can define $\wedge(e_i\otimes e_j) := e_i\otimes e_j - e_j\otimes e_i$. Now let $T = t^{ij}\,e_i\wedge e_j \mapsto t^{ij}(e_i\otimes e_j - e_j\otimes e_i) = (t^{ij}-t^{ji})\,e_i\otimes e_j$. This is injective, because the wedge product, as a linear map from the tensor product to the exterior product, maps all symmetric tensors to $0$.



We see that totally antisymmetric tensors are in this case represented by skew-symmetric matrices. To show that they are all simple, I would have to show that the rank of any $3\times 3$ skew-symmetric matrix is $1$.



But the rank of a skew-symmetric matrix is never one; over $\mathbb{R}$ it is always even.
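A quick numerical sanity check of this claim (my addition, not from the post): the sketch below row-reduces sample $3\times 3$ skew-symmetric matrices with a hand-rolled Gaussian elimination; the helper names `rank` and `skew` are made up for the illustration.

```python
# Illustrative check: a nonzero 3x3 skew-symmetric matrix has rank 2 -- never 1.

def rank(M, tol=1e-12):
    """Rank of a small dense matrix via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]           # work on a copy
    n_rows, n_cols = len(M), len(M[0])
    r = 0                               # next pivot row
    for c in range(n_cols):
        if r == n_rows:
            break
        # largest pivot in column c at or below row r
        p = max(range(r, n_rows), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < tol:
            continue                    # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, n_rows):
            f = M[i][c] / M[r][c]
            for j in range(c, n_cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def skew(a, b, c):
    """General 3x3 skew-symmetric matrix with independent entries a, b, c."""
    return [[0.0,  a,   b],
            [ -a, 0.0,  c],
            [ -b,  -c, 0.0]]

print(rank(skew(1.0, 2.0, 3.0)))  # 2
print(rank(skew(0.0, 0.0, 0.0)))  # 0
```

Over the reals the rank of a skew-symmetric matrix is always even, so a nonzero $3\times 3$ one has rank $2$, which is what the samples above illustrate.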



I must have made a conceptual mistake somewhere again. Does anybody have a hint for me?



Geometrically, one can use the canonical isomorphism between the two-fold exterior product and $\mathbb{R}^3$ itself to show that any antisymmetric tensor can be thought of as a vector in 3D, which in turn can be written as the cross product of two vectors that are not collinear to each other but orthogonal to that vector.
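This geometric argument can be made completely explicit. The sketch below (my own illustration, not part of the original post) takes a skew-symmetric coefficient matrix $T$, reads off its Hodge-dual vector $w$, and constructs $u, v$ with $u \times v = w$, so that $T_{ij} = u_i v_j - u_j v_i$; the helper names `cross` and `decompose` are made up.

```python
# Sketch: a 3D bivector with coefficient matrix T corresponds to the vector w
# via T_ij = u_i v_j - u_j v_i  <=>  w = u x v; recover u and v explicitly.
from fractions import Fraction

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def decompose(T):
    """Given a NONZERO 3x3 skew-symmetric coefficient matrix T, return u, v
    with T_ij = u_i v_j - u_j v_i (so the bivector is u wedge v)."""
    # Hodge dual: w_k = (1/2) eps_{kij} T_{ij}
    w = (T[1][2], T[2][0], T[0][1])
    # u = w x a for some basis vector a not parallel to w; then u is
    # orthogonal to w and nonzero
    for a in ((1, 0, 0), (0, 1, 0)):
        u = cross(w, a)
        if any(u):
            break
    norm2 = sum(x*x for x in u)
    v = tuple(Fraction(x, norm2) for x in cross(w, u))  # then u x v = w
    return u, v

# sample skew-symmetric matrix, dual vector w = (1, 2, 3)
T = [[ 0,  3, -2],
     [-3,  0,  1],
     [ 2, -1,  0]]
u, v = decompose(T)
ok = all(u[i]*v[j] - u[j]*v[i] == T[i][j] for i in range(3) for j in range(3))
print(ok)  # True
```

The identity used is $u \times (w \times u) = w\,(u\cdot u) - u\,(u\cdot w) = w\,|u|^2$ when $u \perp w$, which is why $v = (w \times u)/|u|^2$ gives $u \times v = w$ exactly.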


































    $(e_1+e_2)\wedge(e_3-e_1)$
    – user8268
    Nov 13 at 23:12
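    (Expanded for reference, using $e_1\wedge e_1 = 0$: $(e_1+e_2)\wedge(e_3-e_1) = e_1\wedge e_3 + e_2\wedge e_3 - e_2\wedge e_1 = e_1\wedge e_2 + e_2\wedge e_3 + e_1\wedge e_3$, so this sum of three basis bivectors is in fact simple.)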










    Does "simple" mean "of the form $u \wedge v$ for some $u, v \in V$"? In this case, this is not hard to prove. More generally, if $V$ is an $n$-dimensional vector space over a field with $n \geq 2$, then each element of $\Lambda^{n-1}\left(V\right)$ can be written as $v_1 \wedge v_2 \wedge \cdots \wedge v_{n-1}$ for some $v_1, v_2, \ldots, v_{n-1} \in V$.
    – darij grinberg
    Nov 14 at 2:42












  • Okay, I see how the vector given by user8268 decomposes into the one I wrote down. Now the question is how do I arrive at the result given by darij grinberg. I have no clue. I have an interesting idea, though. For 3 dimensions the 2-fold exterior algebra is obviously isomorphic to $\mathbb{R}^3$ itself.
    – Thomas Wening
    Nov 14 at 11:05










  • Okay, it turns out that the rank of the coefficient matrix must be equal to 1. I have just gone through the proof. But now I have another problem. It's amended in the post.
    – Thomas Wening
    Nov 14 at 14:24















vector-spaces tensor-products tensors exterior-algebra






edited Nov 14 at 14:37

























asked Nov 13 at 22:59









Thomas Wening









1 Answer
accepted
"To show that they are all simple, I would have to show that the rank of any $3\times 3$ skew-symmetric matrix is $1$."




This doesn't follow, since as you say the rank of a skew-symmetric matrix can never be $1$. You're conflating $e_i \otimes e_j - e_j \otimes e_i$, which as a tensor has rank $2$ or $0$, with $e_i \wedge e_j$. These are not the same object; one of them lives in $V^{\otimes 2}$ and the other one lives in $\Lambda^2(V)$. In general I don't recommend thinking in terms of antisymmetric tensors; it makes the exterior product look much more complicated than it is.



Anyway, here's a proof of darij's more general claim in the comments. Let $v_1 \in \Lambda^{n-1}(V)$ be a vector, where $\dim V = n$. Choose a nonzero element $\omega \in \Lambda^n(V)$, hence an identification of it with the ground field $k$. Then the exterior product



$$\wedge : V \times \Lambda^{n-1}(V) \to \Lambda^n(V) \cong k$$



is a nondegenerate bilinear pairing. Extend $v_1$ to a basis $v_1, \dots, v_n \in \Lambda^{n-1}(V)$. Then it has a unique dual basis $e_1, \dots, e_n \in V$ defined by the condition that



$$e_i \wedge v_j = \delta_{ij}\,\omega \in \Lambda^n(V).$$



Then the $v_i$ must also be the dual basis of the $e_i$ with respect to this pairing. But this dual basis in turn must be



$$v_i = (-1)^{i-1}\,\frac{\omega}{e_1 \wedge \dots \wedge e_n}\; e_1 \wedge \dots \wedge \widehat{e_i} \wedge \dots \wedge e_n$$



where the hat denotes that we omit $e_i$, and in particular



$$v_1 = \frac{\omega}{e_1 \wedge \dots \wedge e_n}\; e_2 \wedge \dots \wedge e_n.$$
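Spelled out for the original question (the case $n = 3$; the zero bivector is trivially simple): any nonzero $v_1 \in \Lambda^2(V)$ with $\dim V = 3$ yields, by this construction, a basis $e_1, e_2, e_3$ of $V$ with

$$v_1 = \frac{\omega}{e_1 \wedge e_2 \wedge e_3}\; e_2 \wedge e_3 = \left(\frac{\omega}{e_1 \wedge e_2 \wedge e_3}\; e_2\right) \wedge e_3,$$

which is visibly of the form $u \wedge v$.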



























  • Okay, this calms my mind. I was already worried I had gotten something really wrong about the theorem about rank and simplicity. Now to your proof: I have a question. What does $\frac{\omega}{e_1\wedge\cdots\wedge e_n}$ mean?
    – Thomas Wening
    Nov 14 at 21:41










    @Thomas: $\Lambda^n(V)$ is one-dimensional, and $\omega$ and $e_1 \wedge \dots \wedge e_n$ are two nonzero elements of it, so there's a unique nonzero scalar $c$ such that $\omega = c\, e_1 \wedge \dots \wedge e_n$. It refers to this scalar.
    – Qiaochu Yuan
    Nov 14 at 21:44










  • Okay, I do understand the dual base part. The factor $(-1)^{i-1}$ is needed because you have to permute $e_i$ $i-1$ times in the wedge product $e_i\wedge v_i$ to get all the indices in monotonically increasing order, right?
    – Thomas Wening
    Nov 14 at 21:59
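    (Spelled out: $e_i \wedge (e_1 \wedge \dots \wedge \widehat{e_i} \wedge \dots \wedge e_n) = (-1)^{i-1}\, e_1 \wedge \dots \wedge e_n$, since $e_i$ has to move past the $i-1$ factors $e_1, \dots, e_{i-1}$; the prefactor $(-1)^{i-1}$ in the formula for $v_i$ cancels exactly this sign, giving $e_i \wedge v_i = \omega$.)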










  • Yes, that's right.
    – Qiaochu Yuan
    Nov 14 at 22:00










  • Good. The pairing allows for the definition of a dual basis, once I have fixed one in $\bigwedge^2\mathbb{R}^3$. But how does that allow me to decompose an arbitrary $\omega\in\bigwedge^{n-1}V$ into $\omega=v\wedge u$ with $u,v\in V$?
    – Thomas Wening
    Nov 14 at 22:11













answered Nov 14 at 21:18
Qiaochu Yuan











