Generalize exterior algebra: vectors are nilcube instead of nilsquare














The exterior product on a ($d$-dimensional) vector space $V$ is defined to be associative and bilinear, and to make any vector square to $0$, and is otherwise unrestricted. Formally, the exterior algebra $\Lambda V$ is a quotient of the tensor algebra.



What happens if, instead, the $n$th power of any vector is $0$? Let's call this the $n$th nilpotent algebra, $N_nV$.



The $0$th nilpotent algebra is trivial, with only one element $1=a^0=0$, so $N_0V=\{0\}$, which is $0$-dimensional.



The $1$st nilpotent algebra is the scalar field, because any vector is $a^1=0$; thus $N_1V=\mathbb F$, which is $1$-dimensional.



The $2$nd nilpotent algebra, with $a^2=a\wedge a=0$, is the exterior algebra $N_2V=\Lambda V$, which is $2^d$-dimensional.



What is the $3$rd nilpotent algebra? Would $N_3V$ be $3^d$-dimensional? Would $N_nV$ be $n^d$-dimensional?



If $d=1$, so $V$ has a basis $\{a\}$, then $N_nV$ has a basis $\{1,a,a^2,\cdots,a^{n-1}\}$, so it is indeed $n^1$-dimensional.



If $d=2$, so $V$ has a basis $\{a,b\}$, then $N_0V$ has basis $\{\}$, and $N_1V$ has basis $\{1\}$, and $N_2V$ has basis $\{1,a,b,ab\}$. What is a basis for $N_3V$?



Since any vector cubes to $0$, we have



$$0=(a+b)^3=a^3+aab+aba+abb+baa+bab+bba+b^3$$



and



$$0=(a-b)^3=a^3-aab-aba+abb-baa+bab+bba-b^3.$$



Subtract these and divide by $2$ to get



$$0=aab+aba+baa.$$
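This subtraction can be sanity-checked by brute force, expanding the noncommutative cubes with words stored as tuples (the names `cube` and `half_difference` are ad hoc to this sketch, not part of the question). The half-difference comes out as $aab+aba+baa+b^3$, which is the stated relation once $b^3=0$:

```python
from itertools import product

def cube(vec):
    """Expand v^3 for v = sum of coeff * letter, keeping the order of
    the factors (the product is noncommutative).  Returns {word: coeff}."""
    out = {}
    for w in product(vec, repeat=3):
        c = vec[w[0]] * vec[w[1]] * vec[w[2]]
        out[w] = out.get(w, 0) + c
    return out

# ((a+b)^3 - (a-b)^3) / 2, term by term
plus, minus = cube({'a': 1, 'b': 1}), cube({'a': 1, 'b': -1})
half_difference = {
    w: (plus.get(w, 0) - minus.get(w, 0)) // 2
    for w in set(plus) | set(minus)
    if plus.get(w, 0) != minus.get(w, 0)
}

# Exactly the words aab, aba, baa (plus bbb, which is 0 since b^3 = 0)
print(sorted(half_difference))
```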



Are there any other relations like this, not immediately obvious from the definition? Naively using only $a^3=b^3=0$, we would expect this to be a basis:



$$\{1,a,b,a^2,ab,ba,b^2,a^2b,aba,ab^2,ba^2,bab,b^2a,$$

$$a^2ba,a^2b^2,aba^2,abab,ab^2a,ba^2b,baba,bab^2,b^2a^2,b^2ab,\cdots\}$$



which would contain an infinite number of terms like $a^2ba^2ba^2b$; but some terms must be removed, being linearly dependent:



$$ba^2=-a^2b-aba$$



$$b^2a=-ab^2-bab$$



$$a(ba^2)=0-a^2ba$$



$$a(b^2a)=-a^2b^2-abab$$



$$(ba^2)b=-a^2b^2-abab$$



$$b^2a^2=a^2b^2+abab-baba$$



$$(b^2a)b=0-bab^2$$



$$\vdots$$



Will there be only finitely many independent terms left?


















  • Bergman's famous paper "The diamond lemma for ring theory", or more modern accounts of noncommutative Gröbner bases, should help a lot here. – darij grinberg, Jan 20 at 1:00
















abstract-algebra vector-spaces tensor-products quotient-spaces nilpotence






asked Dec 15 '18 at 9:44 by mr_e_man, edited Dec 15 '18 at 10:31



1 Answer



















I assume $V$ is a vector space over a field of characteristic zero. For $k\geq 0$ let $N_n^kV$ be the degree-$k$ part of $N_nV.$ Then:




$\dim N_n^kV$ is the number of length-$k$ words on the alphabet $\{1,\dots,\dim V\}$ containing no non-decreasing subword of length $n.$




In particular, $N_nV$ is infinite dimensional for $n\geq 3$ and $\dim V\geq 2.$
The sequence $\dim N_n^kV$ for $k=0,1,2,\dots$ is called the Hilbert series and can be computed directly from the algebraic description of $N_nV$ using Singular - I've included a script at the end that gives the following values for $\dim N_3^kV$:



\begin{align*}
(\dim V=2)&&&1,2,4,4,5,4,5,4,5,4,5,\dots\\
(\dim V=3)&&&1,3,9,17,36,63,126,225,441,801,1548,\dots
\end{align*}
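Granting the word-counting description above, these Hilbert series values can be reproduced with a short brute-force search (pure Python; the function name `hilbert_series` is my own, not from the answer):

```python
from itertools import product

def hilbert_series(d, n, kmax):
    """dim N_n^k V for k = 0..kmax, assuming the dimension equals the
    number of length-k words over {0,...,d-1} with no nondecreasing
    contiguous factor of length n."""
    dims = []
    for k in range(kmax + 1):
        count = 0
        for w in product(range(d), repeat=k):
            bad = any(
                all(w[i + j] <= w[i + j + 1] for j in range(n - 1))
                for i in range(k - n + 1)
            )
            count += not bad
        dims.append(count)
    return dims

print(hilbert_series(2, 3, 10))  # reproduces the dim V = 2 row above
print(hilbert_series(3, 3, 5))   # first entries of the dim V = 3 row
```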





To prove this formula I will use a variant of the result from "The diamond lemma for ring theory" mentioned in darij grinberg's comment, and you'll need that paper to understand what I'm writing. I won't use the language of Gröbner bases, but this does show that the obvious generating set for the identities defining $N_nV$ in the free associative algebra is in fact a (noncommutative) Gröbner basis - see for example https://arxiv.org/abs/1303.0920, in particular the "Reduce" algorithm. With a computer algebra system such as Singular one can easily verify directly that a particular generating set is a Gröbner basis.



A convenient basis for the ideal



Let $V$ have basis $x_1,\dots,x_d$ and, to avoid excessive subscripting, let $[i_1,\dots,i_m]$ denote the monomial $x_{i_1}\dots x_{i_m}.$



First I will argue that $N_nV$ can be defined by the two-sided ideal generated by the noncommutative polynomials of the form $$\sum_{\pi}[\pi(i_1,\dots,i_n)]\tag{*}$$ where the sum runs over all permutations of $n$ elements. These identities are sufficient because any identity $(\sum_{j=1}^d t_jx_j)^n,$ with scalars $t_j,$ can be grouped by monomials in $t,$ and each term is a scalar multiple of (*). For the converse let $(c_{i,j})$ with $0\leq i,j\leq d$ be an inverse Vandermonde matrix such that $\sum_{t=0}^{d} c_{i,t}p(t)$ is the $x^i$ coefficient of $p$ for any degree-$d$ univariate polynomial $p(x).$ Then $\sum_{t\in\{0,\dots,k\}^d}c_{i_1,t_1}\dots c_{i_d,t_d}(\sum_{j=1}^d t_jx_j)^n$ is the coefficient of $t_1^{i_1}\dots t_d^{i_d}$ in $(\sum_{j=1}^d t_jx_j)^n,$ which is a scalar multiple of (*).
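As a concrete instance of (*): with $n=3$, $d=2$ and $(i_1,i_2,i_3)=(1,1,2)$, summing over all six permutations (the repeated index makes each arrangement occur twice) gives

$$\sum_{\pi}[\pi(1,1,2)]=2\left(x_1x_1x_2+x_1x_2x_1+x_2x_1x_1\right),$$

a scalar multiple of the relation $aab+aba+baa=0$ derived in the question.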



Reductions



So we can work with (*) instead of the nilpotence condition. The set $S$ of reductions we will use consists of



$$[i_1,\dots,i_n]\mapsto -\sum_{\pi\neq \mathrm{id}} [\pi(i_1,\dots,i_n)]$$



for any $i_1\leq i_2\leq \dots \leq i_n,$ where the sum is over the set of permutations of $(i_1,\dots,i_n)$; permutations that give the same result are now only counted once. For example, monomials $x_1^n$ are reduced to zero.



A monomial $[i_1,\dots,i_k]$ is "$S$-irreducible" if this rule cannot be applied, or equivalently $i_1\dots i_k$ doesn't have a non-decreasing subword of length $n.$
We want to use Bergman Theorem 1.2 (c) to show that these monomials give a basis of $N_n^kV.$



Example: for $n=3$ and $k\geq 3,$ writing $a,b$ instead of $x_1,x_2,$ the normalized words are of the form
"$(a|b|\epsilon)(ba)^m(a|b|\epsilon)$", by which I mean: some number of repetitions of $ba,$ optionally with an $a$ or $b$ before and optionally with an $a$ or $b$ after. This explains the dimension counts and shows that the $4,5$ periodic pattern shown above does continue.



We need to show these reductions have only resolvable ambiguities in the sense of Bergman's paper. Consider an overlap ambiguity of length $n+1$: given $i_1\leq\dots\leq i_{n+1},$ there are two reductions that can be applied to an input $[i_1,\dots,i_{n+1}].$



Lemma. For $i_1\leq \dots\leq i_{n+1},$ the two ways of reducing $[i_1,\dots,i_{n+1}]$ give the same result under some sequence of reductions.



Proof. I will use a table notation where $[j,\dots,k]$ is represented in a row for $j$ and a column for $k,$ with "$1$" if
the middle elements are in non-decreasing order, and "$\pi$" for the sum over all other orders, which may be an empty sum.
A star in the row header means to sum over the remaining options (for the first case, $\{i_1,\dots,i_{n+1}\}\setminus\{i_1,i_2,i_{n+1}\}$),
and similarly for the columns.



Case $i_1<i_2\leq i_n<i_{n+1}$

$[i_1,\dots,i_{n+1}]$ is represented by:
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&1\\ \hline
i_2 &&&&\\ \hline
* &&&&\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

Applying the reduction rule to the first $n$ letters is the same as subtracting these terms:
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&1+\pi&1+\pi&\color{red}{1}+\pi\\ \hline
i_2 &&&&\\ \hline
* &&&&\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

I have drawn the left-hand side of the reduction rule in red.

We can then reduce the last $n$ letters of some of these monomials, which is the same as adding these terms:
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&\color{red}{1}+\pi&\color{red}{1}+\pi&\\ \hline
i_2 &&1+\pi&1+\pi&\\ \hline
* &&1+\pi&1+\pi&\\ \hline
i_{n+1}&&1+\pi&1+\pi&\\ \hline
\end{array}$$

Finally we subtract
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&\\ \hline
i_2 &&&&\\ \hline
* &&&&\\ \hline
i_{n+1}&1+\pi&1+\pi&\color{red}{1}+\pi&\\ \hline
\end{array}$$

The final result is
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&-\pi\\ \hline
i_2 &&1+\pi&1+\pi&\\ \hline
* &&1+\pi&1+\pi&\\ \hline
i_{n+1}&-\pi&&&\\ \hline
\end{array}$$

If we started by applying the reduction rule to the last $n$ letters we would get the same result - just flip the table contents so the top-left becomes the bottom-right.



Case $i_1=i_2< i_n<i_{n+1}$

$[i_1,\dots,i_{n+1}]$ is represented by:
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&1\\ \hline
* &&&&\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

Reducing the first $n$ letters is represented by subtracting
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &1+\pi&1+\pi&1+\pi&\color{red}{1}+\pi\\ \hline
* &&&&\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

Add:
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &\color{red}{1}+\pi&\color{red}{1}+\pi&\color{red}{1}+\pi&\\ \hline
* &1+\pi&1+\pi&1+\pi&\\ \hline
i_{n+1}&1+\pi&1+\pi&1+\pi&\\ \hline
\end{array}$$

Subtract:
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&\\ \hline
* &&&&\\ \hline
i_{n+1}&1+\pi&1+\pi&\color{red}{1}+\pi&\\ \hline
\end{array}$$

The result is
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&\\ \hline
* &1+\pi&1+\pi&1+\pi&\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

If we started by reducing the last $n$ letters we would instead subtract
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&\color{red}{1}+\pi\\ \hline
* &&&&1+\pi\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

Then add
$$\begin{array}{|c|c|c|c|c|}\hline
&i_1&*&i_n&i_{n+1}\\\hline
i_1 &&&&\\ \hline
* &1+\pi&1+\pi&1+\pi&\color{red}{1}+\pi\\ \hline
i_{n+1}&&&&\\ \hline
\end{array}$$

which is the same result.



Case $i_1<i_2< i_n=i_{n+1}$



This is dual to the previous case.



Case $i_1=i_2<i_n=i_{n+1}$



I'll omit the steps, but the result is
$$\begin{array}{|c|c|c|c|}\hline
&i_1&*&i_{n+1}\\\hline
i_1 &&&-\pi\\ \hline
* &1+\pi&1+\pi&\\ \hline
i_{n+1}&1+\pi&1+\pi&\\ \hline
\end{array}$$



Cases $i_1=i_n<i_{n+1}$ and $i_1<i_2<i_{n+1}$ and $i_1=i_{n+1}$



Left to the reader for now.



$\Box$



Diamond lemma variant



We have shown that overlap ambiguities of $[i_1,\dots,i_{n+1}]$ are resolvable, but there are longer overlaps possible.
Say that an overlap ambiguity $(\sigma,\nu,A,B,C)$ is decomposable if there are overlap ambiguities $(\sigma,\tau,A',A''B,C')$ and $(\tau,\nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''.$
I claim that in Bergman Theorem 1.2 one can replace "all ambiguities" by "all indecomposable ambiguities".



The modification needed is in "Case 1". If the overlap ambiguity is indecomposable, the proof is the same: one uses the resolvability to show $L(f_\sigma C-Af_\tau)M\in I_D.$ For the case of a decomposable overlap ambiguity $(\sigma,\nu,A,B,C)$ with word $D=LABCM$ we similarly need to show that $L(f_\sigma C-Af_\tau)M\in I_D.$ By decomposability there are overlap ambiguities $(\sigma,\tau,A',A''B,C')$ and $(\tau,\nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''$ (so $D=LA'A''BC'C''M$).



We can prove this by induction on the length of $ABC.$
If $(\sigma,\tau,A',A''B,C')$ is indecomposable then there are reductions proving that $L(f_\sigma C'-A'f_\tau)C''M\in I_D.$ Otherwise we get the same result by the induction hypothesis. By the same argument for $(\tau,\nu,A'',BC',C'')$ we get $LA'(f_{\tau} C''-A''f_\nu)M\in I_D.$ Adding these gives $L(f_\sigma C-Af_\tau)M\in I_D$ as required.



By this variant of Bergman Theorem 1.2(c), and noting that there are no inclusion ambiguities, $N_nV$ is spanned by the $S$-irreducible monomials.





This part isn't entirely serious, but I have seen a quotient of $N_3^4V$ in the wild. In Riemannian normal co-ordinates near a point $0,$ the Riemannian metric has the form $g_{ac}(x)=g_{ac}(0)+\sum_{b,d}\tfrac12 g_{ac,bd}x^bx^d+O(|x|^3).$ The Hessian $g_{ac,bd}$ has various symmetries; in particular, summing over permutations of $a,c,b$ or of $c,b,d$ gives zero. In this situation, the Hessian is equivalent to the Riemannian curvature tensor - they can be defined in terms of each other. See http://users.monash.edu.au/~leo/research/papers/files/lcb96-01.pdf for example for more details.





Singular script to generate the dimensions directly:




LIB "fpadim.lib";



// d must be 2 or 3
int d=2;
//int d=3;



ring r2 = 0,(a,b),dp;
def R2 = makeLetterplaceRing(10);
ring r3 = 0,(a,b,c),dp;
def R3 = makeLetterplaceRing(10);
if (d==2) {
setring R2;
list monoms=list(a(1),b(1));
} else {
setring R3;
list monoms=list(a(1),b(1),c(1));
}



// Construct ideal of all monomial nilcube relations
ideal G=0;
int m1,m2,m3;
poly p;
for (m1=1; m1<=d; m1=m1+1) {
for (m2=1; m2<=d; m2=m2+1) {
for (m3=1; m3<=d; m3=m3+1)
{
p=0;
p = p + lpMult(lpMult(monoms[m1],monoms[m2]),monoms[m3]);
p = p + lpMult(lpMult(monoms[m2],monoms[m3]),monoms[m1]);
p = p + lpMult(lpMult(monoms[m3],monoms[m1]),monoms[m2]);
p = p + lpMult(lpMult(monoms[m3],monoms[m2]),monoms[m1]);
p = p + lpMult(lpMult(monoms[m2],monoms[m1]),monoms[m3]);
p = p + lpMult(lpMult(monoms[m1],monoms[m3]),monoms[m2]);
G=G+p;
}
}
}



lpDHilbert(G);








  • I think that as of now, what you have shown is that your reduction procedure is "sufficient", i.e. it defines an algebra which is a quotient of the original algebra of the OP. You have not shown yet that the procedure is "necessary", i.e. that your identities are necessarily satisfied in the original algebra. – Ewan Delanoy, Jan 21 at 13:47

  • @EwanDelanoy: good point, I had forgotten to mention that step – Dap, Jan 21 at 15:32

  • Note: Your argument requires that the base ring is a commutative $\mathbb{Q}$-algebra, or perhaps even a field of characteristic $0$. – darij grinberg, Jan 22 at 21:12

  • By "subword", you mean "factor" (i.e., "contiguous subword"). – darij grinberg, Jan 22 at 21:13

  • You are proving that overlap ambiguities with overlap size $n-1$ are resolvable. What about smaller overlap sizes? Or is there a result somewhere in Bergman that says the maximum proper overlap size is sufficient? – darij grinberg, Jan 22 at 21:14











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3040330%2fgeneralize-exterior-algebra-vectors-are-nilcube-instead-of-nilsquare%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









7





+200







$begingroup$

I assume $V$ is a vector space over a field of characteristic zero. For $kgeq 0$ let $N_n^kV$ be the degree-$k$ part of $N_nV.$ Then:




$dim N_n^kV$ is the number of length $k$ words on the alphabet ${1,dots,dim V}$ containing no non-decreasing subword of length $n.$




In particular, $N_nV$ is infinite dimensional for $ngeq 3$ and $dim Vgeq 2.$
The sequence $dim N_n^kV$ for $k=0,1,2,dots$ is called the Hilbert series and can be computed directly from the algebraic description of $N_nV$ using Singular - I've included a script at the end that gives the following values for $dim N_3^kV$:



begin{align*}
(dim V=2)&&&1,2,4,4,5,4,5,4,5,4,5,dots\
(dim V=3)&&&1,3,9,17,36,63,126,225,441,801,1548,dots
end{align*}





To prove this formula I will use a variant of the result from "The diamond lemma for ring theory" mentioned in darij grinberg's comment, and you'll need that paper to understand what I'm writing. I won't use the language of Gröbner bases, but this does show that the obvious basis for the identities defining $N_nV$ in the free associative algebra are in fact a (noncommutative) Gröbner basis - see for example https://arxiv.org/abs/1303.0920 in particular the "Reduce" algorithm. A computer algebra system such as Singular one can easily verify directly that a particular basis is a Gröbner basis.



A convenient basis for the ideal



Let $V$ have basis $x_1,dots,x_d$ and, to avoid excessive subscripting, let $[i_1,dots,i_m]$ denote the monomial $x_{i_1}dots x_{i_m}.$



First I will argue that $N_nV$ can be defined by the two-sided ideal generated by the noncommutative polynomials of the form $$sum_{pi}[pi(i_1,dots,i_n)]tag{*}$$ where the sum runs over all permutation of $n$ elements. These identities are sufficient because any identity $(sum_{j=1}^d t_jx_j)^n,$ with scalars $t_j,$ can be grouped by monomials in $t,$ and each term is a scalar multiple of (*). For the converse let $(c_{i,j})$ with $0leq i,jleq d$ be an inverse Vandermonde matrix such that $sum_{t=0}^{d} c_{i,t}p(t)$ is the $x^i$ coefficient of $p$ for any degree-$d$ univariate polynomial $p(x).$ Then $sum_{tin{0,dots,k}^d}c_{i_1,t_1}dots c_{i_d,t_d}(sum_{j=1}^d t_jx_j)^n$ is the coefficient of $t_1^{i_1}dots t_d^{i_d}$ in $(sum_{j=1}^d t_jx_j)^n,$ which is a scalar multiple of (*).



Reductions



So we can work with (*) instead of the nilpotence condition. The set $S$ of reductions we will use consists of



$$[i_1,dots,i_n]mapsto -sum_{pineq id} [pi(i_1,dots,i_n)]$$



for any $i_1leq i_2leq dots leq i_n,$ where the sum is over the set of permutations of $(i_1,dots,i_n)$; permutations that give the same result are now only counted once. For example monomials $x_1^n$ are reduced to zero.



A monomial $[i_1,dots,i_k]$ is "$S$-irreducible" if this rule cannot be applied, or equivalently $i_1dots i_k$ doesn't have a non-decreasing subword of length $n.$
We want to use Bergman Theorem 1.2 (c) to show that these monomials give a basis of $N_n^kV.$



Example: for $n=3$ and $kgeq 3,$ writing $a,b$ instead of $x_1,x_2,$ the normalized words are of the form
"$(a|b|epsilon)(ba)^n(a|b|epsilon)$", by which I mean: some number of repetitions of $(ba)^n,$ optionally with an $a$ or $b$ before and optionally with an $a$ or $b$ after. This explains the dimension counts and shows that the $4,5$ periodic pattern shown above does continue.



We need to show these reductions have only resolvable ambiguities in the sense of Bergman's paper. Consider an overlap ambiguity of length $n+1$: given $i_1leqdotsleq i_{n+1},$ there are two reductions that can be applied to an input $[i_1,dots,i_{n+1}].$



Lemma. For $i_1leq dotsleq i_{n+1},$ the two ways of reducing $[i_1,dots,i_{n+1}]$ give the same result under some sequence of reductions.



Proof. I will use a table notation where $[j,dots,k]$ is represented in a row for $j$ and a column for $k,$ with "$1$" if
the middle elements are in non-decreasing order, and "$pi$" for the sum over all other orders which may be an empty sum.
A star in the row header means to sum over the remaining options (for the first case, ${i_1,dots,i_{n+1}}setminus{i_1,i_2,i_{n+1}}$),
and similarly for the columns.



Case $i_1<i_2leq i_n<i_{n+1}$



$[i_1,dots,i_{n+1}]$ is represented by:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&1\ hline
i_2 &&&&\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}



Applying the reduction rule to the first $n$ letters is the same as subtracting these terms:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&1+pi&1+pi&color{red}{1}+pi\ hline
i_2 &&&&\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}

I have drawn the left-hand-side of the reduction rule in red.



We can then reduce the the last $n$ letters of some of these monomials, which is the same as adding these terms:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&color{red}{1}+pi&color{red}{1}+pi&\ hline
i_2 &&1+pi&1+pi&\ hline
* &&1+pi&1+pi&\ hline
i_{n+1}&&1+pi&1+pi&\ hline
end{array}



Finally we subtract
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
i_2 &&&&\ hline
* &&&&\ hline
i_{n+1}&1+pi&1+pi&color{red}{1}+pi&\ hline
end{array}



The final result is
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&-pi\ hline
i_2 &&1+pi&1+pi&\ hline
* &&1+pi&1+pi&\ hline
i_{n+1}&-pi&&&\ hline
end{array}



If we started by applying the reduction rule to the last $n$ letters we would get the same result - just flip the table contents so the top-left becomes the bottom-right.



Case $i_1=i_2< i_n<i_{n+1}$



$[i_1,dots,i_{n+1}]$ is represented by:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&1\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}



Reducing the first $n$ letters is represented by subtracting
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &1+pi&1+pi&1+pi&color{red}{1}+pi\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}



Add:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &color{red}{1}+pi&color{red}{1}+pi&color{red}{1}+pi&\ hline
* &1+pi&1+pi&1+pi&\ hline
i_{n+1}&1+pi&1+pi&1+pi&\ hline
end{array}



Subtract:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
* &&&&\ hline
i_{n+1}&1+pi&1+pi&color{red}{1}+pi&\ hline
end{array}



The result is
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
* &1+pi&1+pi&1+pi&\ hline
i_{n+1}&&&&\ hline
end{array}



If we started by reducing the last $n$ letters we would instead subtract
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&color{red}{1}+pi\ hline
* &&&&1+pi\ hline
i_{n+1}&&&&\ hline
end{array}



Then add
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
* &1+pi&1+pi&1+pi&color{red}{1}+pi\ hline
i_{n+1}&&&&\ hline
end{array}



which is the same result.



Case $i_1<i_2< i_n=i_{n+1}$



This is dual to the previous case.



Case $i_1=i_2<i_n=i_{n+1}$



I'll omit the steps, but the result is
begin{array}
{|c|c|c|c|}hline
&i_1&*&i_{n+1}\hline
i_1 &&&-pi\ hline
* &1+pi&1+pi&\ hline
i_{n+1}&1+pi&1+pi&\ hline
end{array}



Cases $i_1=i_n<i_{n+1}$ and $i_1<i_2<i_{n+1}$ and $i_1=i_{n+1}$



Left to the reader for now.



$Box$



Diamond lemma variant



We have shown that overlap ambiguities of $[i_1,dots,i_{n+1}]$ are resolvable, but there are longer overlaps possible.
Say that an overlap ambiguity $(sigma,nu,A,B,C)$ is decomposable if there are overlap ambiguities $(sigma,tau,A',A''B,C')$ and $(tau,nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''.$
I claim that in Bergman Theorem 1.2 one can replace "all ambiguities" by "all indecomposable ambiguities".



The modification needed is in "Case 1". If the overlap ambiguity is indecomposable, the proof is the same: one uses the resolvability to show $L(f_sigma C-Af_tau)Min I_D.$. For the case of a decomposable overlap ambiguity $(sigma,nu,A,B,C)$ with word $D=LABCM$ we similarly need to show that $L(f_sigma C-Af_tau)Min I_D.$ By decomposability there are overlap ambiguities $(sigma,tau,A',A''B,C')$ and $(tau,nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''$ (so $D=LA'A''BC'C''M$).



We can prove this by induction on the length of $ABC.$
If $(sigma,tau,A',A''B,C')$ is indecomposable then there are reductions proving that $L(f_sigma C'-A'f_tau)C''Min I_D.$ Otherwise we get the same result by the induction hypothesis. By the same argument for $(tau,nu,A'',BC',C'')$ we get $LA'(f_{tau} C''-A''f_nu)Min I_D.$ Adding these, the $f_tau$ terms cancel and, since $A=A'A''$ and $C=C'C'',$ we are left with $L(f_sigma C-Af_nu)Min I_D$ as required.



By this variant of Bergman Theorem 1.2(c), and noting that there are no inclusion ambiguities, $N_nV$ is spanned by the $S$-irreducible monomials.
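As a cross-check of this spanning statement, here is a brute-force count in Python (my own sketch, independent of the Singular script below): it counts the words over a two-letter alphabet with no non-decreasing factor of length $n=3$, for $dim V=2$.

```python
from itertools import product

def irreducible(word, n=3):
    """True iff no factor of length n of `word` is non-decreasing."""
    return not any(all(word[j] <= word[j + 1] for j in range(i, i + n - 1))
                   for i in range(len(word) - n + 1))

# Number of S-irreducible words of each length over {1, 2} (the case dim V = 2):
dims = [sum(irreducible(w) for w in product((1, 2), repeat=k)) for k in range(9)]
print(dims)  # [1, 2, 4, 4, 5, 4, 5, 4, 5]
```

The counts agree with the dimensions computed by the Singular script at the end, settling into the periodic $4,5$ pattern.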





This part isn't entirely serious, but I have seen a quotient of $N_3^4V$ in the wild. In Riemannian normal co-ordinates near a point $0,$ the Riemannian metric has the form $g_{ac}(x)=g_{ac}(0)+sum_{b,d}tfrac12g_{ac,bd}x^bx^d+O(|x|^3).$ The Hessian $g_{ac,bd}$ has various symmetries; in particular, summing over permutations of $a,c,b$ or of $c,b,d$ gives zero. In this situation, the Hessian is equivalent to the Riemannian curvature tensor - they can be defined in terms of each other. See http://users.monash.edu.au/~leo/research/papers/files/lcb96-01.pdf for example for more details.





Singular script to generate the dimensions directly:




LIB "fpadim.lib";



// d must be 2 or 3
int d=2;
//int d=3;



ring r2 = 0,(a,b),dp;
def R2 = makeLetterplaceRing(10);
ring r3 = 0,(a,b,c),dp;
def R3 = makeLetterplaceRing(10);
if (d==2) {
setring R2;
list monoms=list(a(1),b(1));
} else {
setring R3;
list monoms=list(a(1),b(1),c(1));
}



// Construct ideal of all monomial nilcube relations
ideal G=0;
int m1,m2,m3;
poly p;
for (m1=1; m1<=d; m1=m1+1) {
for (m2=1; m2<=d; m2=m2+1) {
for (m3=1; m3<=d; m3=m3+1)
{
p=0;
p = p + lpMult(lpMult(monoms[m1],monoms[m2]),monoms[m3]);
p = p + lpMult(lpMult(monoms[m2],monoms[m3]),monoms[m1]);
p = p + lpMult(lpMult(monoms[m3],monoms[m1]),monoms[m2]);
p = p + lpMult(lpMult(monoms[m3],monoms[m2]),monoms[m1]);
p = p + lpMult(lpMult(monoms[m2],monoms[m1]),monoms[m3]);
p = p + lpMult(lpMult(monoms[m1],monoms[m3]),monoms[m2]);
G=G+p;
}
}
}



// Hilbert series: prints dim N_3^k V for k up to the degree bound
lpDHilbert(G);

















$endgroup$













  • $begingroup$
    I think that as of now, what you have shown is that your reduction procedure is "sufficient", i.e. it defines an algebra which is a quotient of the original algebra of the OP. You have not shown yet that the procedure is "necessary", i.e. that your identities are necessarily satisfied in the original algebra.
    $endgroup$
    – Ewan Delanoy
    Jan 21 at 13:47










  • $begingroup$
    @EwanDelanoy: good point, I had forgotten to mention that step
    $endgroup$
    – Dap
    Jan 21 at 15:32










  • $begingroup$
    Note: Your argument requires that the base ring is a commutative $mathbb{Q}$-algebra, or perhaps even a field of characteristic $0$.
    $endgroup$
    – darij grinberg
    Jan 22 at 21:12










  • $begingroup$
    By "subword", you mean "factor" (i.e., "contiguous subword").
    $endgroup$
    – darij grinberg
    Jan 22 at 21:13










  • $begingroup$
    You are proving that overlap ambiguities with overlap size $n-1$ are resolvable. What about smaller overlap sizes? Or is there a result somewhere in Bergman that says the maximum proper overlap size is sufficient?
    $endgroup$
    – darij grinberg
    Jan 22 at 21:14

































I assume $V$ is a vector space over a field of characteristic zero. For $kgeq 0$ let $N_n^kV$ be the degree-$k$ part of $N_nV.$ Then:




$dim N_n^kV$ is the number of length $k$ words on the alphabet ${1,dots,dim V}$ containing no non-decreasing subword of length $n.$




In particular, $N_nV$ is infinite dimensional for $ngeq 3$ and $dim Vgeq 2.$
The sequence $dim N_n^kV$ for $k=0,1,2,dots$ is called the Hilbert series and can be computed directly from the algebraic description of $N_nV$ using Singular - I've included a script at the end that gives the following values for $dim N_3^kV$:



begin{align*}
(dim V=2)&&&1,2,4,4,5,4,5,4,5,4,5,dots\
(dim V=3)&&&1,3,9,17,36,63,126,225,441,801,1548,dots
end{align*}





To prove this formula I will use a variant of the result from "The diamond lemma for ring theory" mentioned in darij grinberg's comment, and you'll need that paper to understand what I'm writing. I won't use the language of Gröbner bases, but this does show that the obvious basis for the identities defining $N_nV$ in the free associative algebra are in fact a (noncommutative) Gröbner basis - see for example https://arxiv.org/abs/1303.0920 in particular the "Reduce" algorithm. A computer algebra system such as Singular one can easily verify directly that a particular basis is a Gröbner basis.



A convenient basis for the ideal



Let $V$ have basis $x_1,dots,x_d$ and, to avoid excessive subscripting, let $[i_1,dots,i_m]$ denote the monomial $x_{i_1}dots x_{i_m}.$



First I will argue that $N_nV$ can be defined by the two-sided ideal generated by the noncommutative polynomials of the form $$sum_{pi}[pi(i_1,dots,i_n)]tag{*}$$ where the sum runs over all permutation of $n$ elements. These identities are sufficient because any identity $(sum_{j=1}^d t_jx_j)^n,$ with scalars $t_j,$ can be grouped by monomials in $t,$ and each term is a scalar multiple of (*). For the converse let $(c_{i,j})$ with $0leq i,jleq d$ be an inverse Vandermonde matrix such that $sum_{t=0}^{d} c_{i,t}p(t)$ is the $x^i$ coefficient of $p$ for any degree-$d$ univariate polynomial $p(x).$ Then $sum_{tin{0,dots,k}^d}c_{i_1,t_1}dots c_{i_d,t_d}(sum_{j=1}^d t_jx_j)^n$ is the coefficient of $t_1^{i_1}dots t_d^{i_d}$ in $(sum_{j=1}^d t_jx_j)^n,$ which is a scalar multiple of (*).



Reductions



So we can work with (*) instead of the nilpotence condition. The set $S$ of reductions we will use consists of



$$[i_1,dots,i_n]mapsto -sum_{pineq id} [pi(i_1,dots,i_n)]$$



for any $i_1leq i_2leq dots leq i_n,$ where the sum is over the set of permutations of $(i_1,dots,i_n)$; permutations that give the same result are now only counted once. For example monomials $x_1^n$ are reduced to zero.



A monomial $[i_1,dots,i_k]$ is "$S$-irreducible" if this rule cannot be applied, or equivalently $i_1dots i_k$ doesn't have a non-decreasing subword of length $n.$
We want to use Bergman Theorem 1.2 (c) to show that these monomials give a basis of $N_n^kV.$



Example: for $n=3$ and $kgeq 3,$ writing $a,b$ instead of $x_1,x_2,$ the normalized words are of the form
"$(a|b|epsilon)(ba)^n(a|b|epsilon)$", by which I mean: some number of repetitions of $(ba)^n,$ optionally with an $a$ or $b$ before and optionally with an $a$ or $b$ after. This explains the dimension counts and shows that the $4,5$ periodic pattern shown above does continue.



We need to show these reductions have only resolvable ambiguities in the sense of Bergman's paper. Consider an overlap ambiguity of length $n+1$: given $i_1leqdotsleq i_{n+1},$ there are two reductions that can be applied to an input $[i_1,dots,i_{n+1}].$



Lemma. For $i_1leq dotsleq i_{n+1},$ the two ways of reducing $[i_1,dots,i_{n+1}]$ give the same result under some sequence of reductions.



Proof. I will use a table notation where $[j,dots,k]$ is represented in a row for $j$ and a column for $k,$ with "$1$" if
the middle elements are in non-decreasing order, and "$pi$" for the sum over all other orders which may be an empty sum.
A star in the row header means to sum over the remaining options (for the first case, ${i_1,dots,i_{n+1}}setminus{i_1,i_2,i_{n+1}}$),
and similarly for the columns.



Case $i_1<i_2leq i_n<i_{n+1}$



$[i_1,dots,i_{n+1}]$ is represented by:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&1\ hline
i_2 &&&&\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}



Applying the reduction rule to the first $n$ letters is the same as subtracting these terms:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&1+pi&1+pi&color{red}{1}+pi\ hline
i_2 &&&&\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}

I have drawn the left-hand-side of the reduction rule in red.



We can then reduce the the last $n$ letters of some of these monomials, which is the same as adding these terms:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&color{red}{1}+pi&color{red}{1}+pi&\ hline
i_2 &&1+pi&1+pi&\ hline
* &&1+pi&1+pi&\ hline
i_{n+1}&&1+pi&1+pi&\ hline
end{array}



Finally we subtract
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
i_2 &&&&\ hline
* &&&&\ hline
i_{n+1}&1+pi&1+pi&color{red}{1}+pi&\ hline
end{array}



The final result is
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&-pi\ hline
i_2 &&1+pi&1+pi&\ hline
* &&1+pi&1+pi&\ hline
i_{n+1}&-pi&&&\ hline
end{array}



If we started by applying the reduction rule to the last $n$ letters we would get the same result - just flip the table contents so the top-left becomes the bottom-right.



Case $i_1=i_2< i_n<i_{n+1}$



$[i_1,dots,i_{n+1}]$ is represented by:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&1\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}



Reducing the first $n$ letters is represented by subtracting
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &1+pi&1+pi&1+pi&color{red}{1}+pi\ hline
* &&&&\ hline
i_{n+1}&&&&\ hline
end{array}



Add:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &color{red}{1}+pi&color{red}{1}+pi&color{red}{1}+pi&\ hline
* &1+pi&1+pi&1+pi&\ hline
i_{n+1}&1+pi&1+pi&1+pi&\ hline
end{array}



Subtract:
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
* &&&&\ hline
i_{n+1}&1+pi&1+pi&color{red}{1}+pi&\ hline
end{array}



The result is
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
* &1+pi&1+pi&1+pi&\ hline
i_{n+1}&&&&\ hline
end{array}



If we started by reducing the last $n$ letters we would instead subtract
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&color{red}{1}+pi\ hline
* &&&&1+pi\ hline
i_{n+1}&&&&\ hline
end{array}



Then add
begin{array}
{|c|c|c|c|c|}hline
&i_1&*&i_n&i_{n+1}\hline
i_1 &&&&\ hline
* &1+pi&1+pi&1+pi&color{red}{1}+pi\ hline
i_{n+1}&&&&\ hline
end{array}



which is the same result.



Case $i_1<i_2< i_n=i_{n+1}$



This is dual to the previous case.



Case $i_1=i_2<i_n=i_{n+1}$



I'll omit the steps, but the result is
\begin{array}{|c|c|c|c|}\hline
&i_1&*&i_{n+1}\\\hline
i_1 &&&-\pi\\ \hline
* &1+\pi&1+\pi&\\ \hline
i_{n+1}&1+\pi&1+\pi&\\ \hline
\end{array}



Cases $i_1=i_n<i_{n+1}$ and $i_1<i_2<i_{n+1}$ and $i_1=i_{n+1}$



Left to the reader for now.



$\Box$



Diamond lemma variant



We have shown that overlap ambiguities of $[i_1,\dots,i_{n+1}]$ are resolvable, but longer overlaps are possible.
Say that an overlap ambiguity $(\sigma,\nu,A,B,C)$ is decomposable if there are overlap ambiguities $(\sigma,\tau,A',A''B,C')$ and $(\tau,\nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''.$
I claim that in Bergman Theorem 1.2 one can replace "all ambiguities" by "all indecomposable ambiguities".



The modification needed is in "Case 1". If the overlap ambiguity is indecomposable, the proof is the same: one uses resolvability to show $L(f_\sigma C-Af_\tau)M\in I_D.$ For a decomposable overlap ambiguity $(\sigma,\nu,A,B,C)$ with word $D=LABCM$ we similarly need to show that $L(f_\sigma C-Af_\nu)M\in I_D.$ By decomposability there are overlap ambiguities $(\sigma,\tau,A',A''B,C')$ and $(\tau,\nu,A'',BC',C'')$ with $A=A'A''$ and $C=C'C''$ (so $D=LA'A''BC'C''M$).



We can prove this by induction on the length of $ABC.$
If $(\sigma,\tau,A',A''B,C')$ is indecomposable then there are reductions proving that $L(f_\sigma C'-A'f_\tau)C''M\in I_D$; otherwise we get the same result by the induction hypothesis. By the same argument applied to $(\tau,\nu,A'',BC',C'')$ we get $LA'(f_\tau C''-A''f_\nu)M\in I_D.$ Adding these gives $L(f_\sigma C-Af_\nu)M\in I_D$ as required.
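Written out, the two memberships simply telescope (the $LA'f_\tau C''M$ terms cancel, using $A=A'A''$ and $C=C'C''$):

$$L(f_\sigma C'-A'f_\tau)C''M + LA'(f_\tau C''-A''f_\nu)M = Lf_\sigma C'C''M - LA'A''f_\nu M = L(f_\sigma C-Af_\nu)M.$$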



By this variant of Bergman Theorem 1.2(c), and noting that there are no inclusion ambiguities, $N_nV$ is spanned by the $S$-irreducible monomials.





This part isn't entirely serious, but I have seen a quotient of $N_3^4V$ in the wild. In Riemannian normal co-ordinates near a point $0,$ the Riemannian metric has the form $g_{ac}(x)=g_{ac}(0)+\sum_{b,d}\tfrac12 g_{ac,bd}x^bx^d+O(|x|^3).$ The Hessian $g_{ac,bd}$ has various symmetries; in particular, summing over permutations of $a,c,b$ or of $c,b,d$ gives zero. In this situation the Hessian is equivalent to the Riemannian curvature tensor: each can be defined in terms of the other. See http://users.monash.edu.au/~leo/research/papers/files/lcb96-01.pdf for example for more details.





Singular script to generate the dimensions directly:




LIB "fpadim.lib";

// d must be 2 or 3
int d=2;
//int d=3;

ring r2 = 0,(a,b),dp;
def R2 = makeLetterplaceRing(10);
ring r3 = 0,(a,b,c),dp;
def R3 = makeLetterplaceRing(10);
if (d==2) {
  setring R2;
  list monoms=list(a(1),b(1));
} else {
  setring R3;
  list monoms=list(a(1),b(1),c(1));
}

// Construct the ideal of polarized nilcube relations:
// for each triple of generators, sum the product over all six orderings
ideal G=0;
int m1,m2,m3;
poly p;
for (m1=1; m1<=d; m1=m1+1) {
  for (m2=1; m2<=d; m2=m2+1) {
    for (m3=1; m3<=d; m3=m3+1) {
      p=0;
      p = p + lpMult(lpMult(monoms[m1],monoms[m2]),monoms[m3]);
      p = p + lpMult(lpMult(monoms[m2],monoms[m3]),monoms[m1]);
      p = p + lpMult(lpMult(monoms[m3],monoms[m1]),monoms[m2]);
      p = p + lpMult(lpMult(monoms[m3],monoms[m2]),monoms[m1]);
      p = p + lpMult(lpMult(monoms[m2],monoms[m1]),monoms[m3]);
      p = p + lpMult(lpMult(monoms[m1],monoms[m3]),monoms[m2]);
      G=G+p;
    }
  }
}

// Graded dimensions of the quotient, up to the degree bound
lpDHilbert(G);
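For readers without Singular, here is an independent cross-check in plain Python (a sketch, not tied to any library). It uses the same characteristic-$0$ fact the script above relies on: the degree-$3$ part of the ideal is spanned by the polarized relations $\sum_{\pi\in S_3}x_{\pi(1)}x_{\pi(2)}x_{\pi(3)}$, and each higher degree is spanned by their two-sided shifts $u\cdot r\cdot v$. The graded dimension is then a rank computation over $\mathbb Q$:

```python
from fractions import Fraction
from itertools import permutations, product

def rank(rows):
    """Row rank via exact Gaussian elimination over Q."""
    rows = [list(r) for r in rows]
    rk = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        pivot = rows[rk]
        for i in range(rk + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / pivot[col]
                rows[i] = [a - f * b for a, b in zip(rows[i], pivot)]
        rk += 1
    return rk

def graded_dims(d, max_deg):
    """Dimension of each graded piece of N_3 V for dim V = d, char 0.
    Degree k: span of all words of length k over the d-letter alphabet,
    modulo every shift u*r*v of a polarized cube relation r
    (r = sum over the six orderings of a triple of generators)."""
    alphabet = range(d)
    dims = []
    for k in range(max_deg + 1):
        words = list(product(alphabet, repeat=k))
        index = {w: i for i, w in enumerate(words)}
        rows = []
        for pos in range(k - 2):  # position of the relation inside the word
            for triple in product(alphabet, repeat=3):
                for rest in product(alphabet, repeat=k - 3):
                    u, v = rest[:pos], rest[pos:]
                    row = [Fraction(0)] * len(words)
                    for perm in permutations(triple):
                        row[index[u + perm + v]] += 1
                    rows.append(row)
        dims.append(len(words) - rank(rows))
    return dims

print(graded_dims(2, 4))  # graded dimensions of N_3 V in degrees 0..4 for d = 2
```

In degree $3$ with $d=2$ this recovers exactly the four relations $a^3,\,b^3,\,aab+aba+baa,\,abb+bab+bba$ from the question, so the degree-$3$ dimension is $8-4=4$.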







edited Jan 24 at 20:24

























answered Jan 20 at 21:24









– Dap
  • I think that as of now, what you have shown is that your reduction procedure is "sufficient", i.e. it defines an algebra which is a quotient of the original algebra of the OP. You have not shown yet that the procedure is "necessary", i.e. that your identities are necessarily satisfied in the original algebra. – Ewan Delanoy, Jan 21 at 13:47

  • @EwanDelanoy: good point, I had forgotten to mention that step – Dap, Jan 21 at 15:32

  • Note: Your argument requires that the base ring is a commutative $\mathbb{Q}$-algebra, or perhaps even a field of characteristic $0$. – darij grinberg, Jan 22 at 21:12

  • By "subword", you mean "factor" (i.e., "contiguous subword"). – darij grinberg, Jan 22 at 21:13

  • You are proving that overlap ambiguities with overlap size $n-1$ are resolvable. What about smaller overlap sizes? Or is there a result somewhere in Bergman that says the maximum proper overlap size is sufficient? – darij grinberg, Jan 22 at 21:14