Are all bivectors in three dimensions simple?
I want to show that all bivectors in three dimensions are simple.
If I understand correctly, a bivector is simply an element of the two-fold exterior product $\bigwedge^2 V$ of a vector space $V$, right?
We can define $\wedge(e_i\otimes e_j) := e_i\otimes e_j - e_j\otimes e_i$. Now let $T = t^{ij}\,e_i\wedge e_j \mapsto t^{ij}(e_i\otimes e_j - e_j\otimes e_i) = (t^{ij}-t^{ji})\,e_i\otimes e_j$. This map is injective, because the wedge product, viewed as a linear map from the tensor product to the exterior product, maps all symmetric tensors to $0$.
We see that totally antisymmetric tensors are in this case represented by skew-symmetric matrices. To show that they are all simple, I would have to show that the rank of any $3\times 3$ skew-symmetric matrix is 1.
But the rank of a skew-symmetric matrix is never one.
I must have made a conceptual mistake somewhere again. Does anybody have a hint for me?
Geometrically, one can use the canonical isomorphism between the two-fold exterior product and $\mathbb{R}^3$ itself to show that any antisymmetric tensor can be thought of as a vector in 3D, which in turn can be written as the cross product of two vectors that are not collinear to each other but are orthogonal to that vector.
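To back up the rank observation numerically, here is a small numpy sketch (an editorial illustration, not part of the original post): random $3\times 3$ skew-symmetric matrices always come out with rank $0$ or $2$, never $1$, since a skew-symmetric matrix has even rank.

```python
import numpy as np

rng = np.random.default_rng(0)
ranks = set()
for _ in range(1000):
    a = rng.standard_normal((3, 3))
    ranks.add(np.linalg.matrix_rank(a - a.T))       # a - a.T is skew-symmetric
ranks.add(np.linalg.matrix_rank(np.zeros((3, 3))))  # degenerate case B = 0
print(sorted(ranks))   # [0, 2] -- never 1
```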
vector-spaces tensor-products tensors exterior-algebra
asked Nov 13 at 22:59 by Thomas Wening; edited Nov 14 at 14:37
$(e_1+e_2)\wedge(e_3-e_1)$
– user8268
Nov 13 at 23:12
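Expanding user8268's hint shows why it resolves the worry: $(e_1+e_2)\wedge(e_3-e_1) = e_1\wedge e_2 + e_1\wedge e_3 + e_2\wedge e_3$, a bivector with all three basis components that is nonetheless simple. A quick symbolic check (an editorial numpy sketch, not from the thread), representing the bivector $u\wedge v$ by its skew coefficient matrix $uv^T - vu^T$:

```python
import numpy as np

e1, e2, e3 = np.eye(3)

def wedge(u, v):
    # Coefficient matrix of u ∧ v: entry (i, j) is u_i v_j - u_j v_i.
    return np.outer(u, v) - np.outer(v, u)

lhs = wedge(e1 + e2, e3 - e1)
rhs = wedge(e1, e2) + wedge(e1, e3) + wedge(e2, e3)
print(np.array_equal(lhs, rhs))   # True
```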
Does "simple" mean "of the form $u \wedge v$ for some $u, v \in V$"? In this case, this is not hard to prove. More generally, if $V$ is an $n$-dimensional vector space over a field with $n \geq 2$, then each element of $\Lambda^{n-1}\left(V\right)$ can be written as $v_1 \wedge v_2 \wedge \cdots \wedge v_{n-1}$ for some $v_1, v_2, \ldots, v_{n-1} \in V$.
– darij grinberg
Nov 14 at 2:42
Okay, I see how the vector given by user8268 decomposes into the one I wrote down. Now the question is how I arrive at the result given by darij grinberg. I have no clue. I have an interesting idea, though: for 3 dimensions, the two-fold exterior product is obviously isomorphic to $\mathbb{R}^3$ itself.
– Thomas Wening
Nov 14 at 11:05
Okay, it turns out that the rank of the coefficient matrix must be equal to 1. I have just gone through the proof. But now I have another problem; I have amended the post accordingly.
– Thomas Wening
Nov 14 at 14:24
1 Answer
answered Nov 14 at 21:18 by Qiaochu Yuan (accepted)
"To show that they are all simple, I would have to show that the rank of any $3\times 3$ skew-symmetric matrix is 1."
This doesn't follow, since as you say the rank of a skew-symmetric matrix can never be $1$. You're conflating $e_i \otimes e_j - e_j \otimes e_i$, which as a tensor has rank $2$ or $0$, with $e_i \wedge e_j$. These are not the same object; one of them lives in $V^{\otimes 2}$ and the other one lives in $\Lambda^2(V)$. In general I don't recommend thinking in terms of antisymmetric tensors; it makes the exterior product look much more complicated than it is.
Anyway, here's a proof of darij's more general claim in the comments. Let $v_1 \in \Lambda^{n-1}(V)$ be a vector, where $\dim V = n$. Choose a nonzero element $\omega \in \Lambda^n(V)$, hence an identification of it with the ground field $k$. Then the exterior product
$$\wedge : V \times \Lambda^{n-1}(V) \to \Lambda^n(V) \cong k$$
is a nondegenerate bilinear pairing. Extend $v_1$ to a basis $v_1, \dots, v_n \in \Lambda^{n-1}(V)$. Then it has a unique dual basis $e_1, \dots, e_n \in V$ defined by the condition that
$$e_i \wedge v_j = \delta_{ij}\, \omega \in \Lambda^n(V).$$
Then the $v_i$ must also be the dual basis of the $e_i$ with respect to this pairing. But this dual basis in turn must be
$$v_i = (-1)^{i-1} \frac{\omega}{e_1 \wedge \dots \wedge e_n}\, e_1 \wedge \dots \wedge \widehat{e_i} \wedge \dots \wedge e_n$$
where the hat denotes that we omit $e_i$, and in particular
$$v_1 = \frac{\omega}{e_1 \wedge \dots \wedge e_n}\, e_2 \wedge \dots \wedge e_n.$$
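A concrete instance of this in three dimensions (an editorial numpy sketch, not part of the answer; `decompose_bivector` is a hypothetical helper): identify the bivector $t^{12}\,e_1\wedge e_2 + t^{13}\,e_1\wedge e_3 + t^{23}\,e_2\wedge e_3$ with its Hodge-dual vector $w = (t^{23}, -t^{13}, t^{12})$, pick two independent vectors orthogonal to $w$, and rescale one of them so the cross product, which equals the Hodge dual of their wedge, comes out to exactly $w$.

```python
import numpy as np

def decompose_bivector(t12, t13, t23):
    """Return u, v with u ∧ v = t12 e1∧e2 + t13 e1∧e3 + t23 e2∧e3 (input nonzero)."""
    w = np.array([t23, -t13, t12])       # Hodge dual of the bivector
    _, _, vt = np.linalg.svd(w.reshape(1, 3))
    u, v = vt[1], vt[2]                  # two orthonormal vectors orthogonal to w
    c = np.cross(u, v)                   # parallel to w, but with the wrong length
    return u, v * (w @ c) / (c @ c)      # rescale so that cross(u, v) == w

u, v = decompose_bivector(1.0, 1.0, 1.0)
print(np.allclose(np.cross(u, v), [1.0, -1.0, 1.0]))   # True
```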
Okay, this calms my mind. I was already worried I had gotten something really wrong about the theorem relating rank and simplicity. Now to your proof, I have a question: what does $\frac{\omega}{e_1\wedge\cdots\wedge e_n}$ mean?
– Thomas Wening
Nov 14 at 21:41
@Thomas: $\Lambda^n(V)$ is one-dimensional, and $\omega$ and $e_1 \wedge \dots \wedge e_n$ are two nonzero elements of it, so there's a unique nonzero scalar $c$ such that $\omega = c\, e_1 \wedge \dots \wedge e_n$. It refers to this scalar.
– Qiaochu Yuan
Nov 14 at 21:44
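Concretely (an editorial numpy snippet, not from the thread): if $\omega = u_1\wedge u_2\wedge u_3$ with the $u_k$ written in the basis $e_1, e_2, e_3$, that scalar is just the determinant of the coordinate matrix.

```python
import numpy as np

u = np.array([[1.0, 2.0, 0.0],    # u1
              [0.0, 1.0, 1.0],    # u2
              [3.0, 0.0, 1.0]])   # u3, rows written in the basis e1, e2, e3
c = np.linalg.det(u)              # u1 ∧ u2 ∧ u3 = c · e1∧e2∧e3
print(c)                          # 7.0 (up to floating-point noise)
```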
Okay, I do understand the dual basis part. The factor $(-1)^{i-1}$ is needed because you have to move $e_i$ past $i-1$ factors in the wedge product $e_i\wedge v_i$ to get all the indices into increasing order, right?
– Thomas Wening
Nov 14 at 21:59
Yes, that's right.
– Qiaochu Yuan
Nov 14 at 22:00
Good. The pairing allows for the definition of a dual basis, once I have fixed one in $\bigwedge^2\mathbb{R}^3$. But how does that allow me to decompose an arbitrary $\omega\in\bigwedge^{n-1}V$ into $\omega = v\wedge u$ with $u, v\in V$?
– Thomas Wening
Nov 14 at 22:11
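Following up on that last comment: the argument can be made constructive for any $n$. Here is an editorial numpy sketch (not from the thread; `wedge_coeffs` and `decompose` are hypothetical names, and the target $(n-1)$-vector is assumed nonzero). It uses two standard facts: the coefficient of $v_1\wedge\cdots\wedge v_{n-1}$ on $e_1\wedge\cdots\wedge\widehat{e_i}\wedge\cdots\wedge e_n$ is the minor of the coefficient matrix with column $i$ deleted, and the vector of signed minors is orthogonal to every $v_j$.

```python
import numpy as np

def wedge_coeffs(m):
    """Coefficients of row1 ∧ ... ∧ row(n-1) on the basis elements
    e1∧...∧(omit e_i)∧...∧e_n: the minor with column i deleted."""
    n = m.shape[1]
    return np.array([np.linalg.det(np.delete(m, i, axis=1)) for i in range(n)])

def decompose(c):
    """Given the nonzero coefficient vector c of an (n-1)-vector in R^n,
    return n-1 vectors whose wedge has exactly those coefficients."""
    n = len(c)
    signs = (-1.0) ** np.arange(n)
    w = signs * c                          # Hodge-dual vector of the target
    _, _, vt = np.linalg.svd(w.reshape(1, n))
    m = vt[1:].copy()                      # n-1 vectors orthogonal to w
    d = wedge_coeffs(m)                    # parallel to c by construction
    m[0] *= (c @ d) / (d @ d)              # rescale one factor to match c exactly
    return m

c = np.array([2.0, -1.0, 3.0, 0.5])        # an element of Λ³(R⁴)
print(np.allclose(wedge_coeffs(decompose(c)), c))   # True
```

For $n = 3$ this reduces to the cross-product construction sketched after the answer above.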