Algebraic relation between polynomials
























The problem statement is "Let $F \in \mathbb{C}[t]$ have degree at most $D \geq 1$, and let $G \in \mathbb{C}[t]$ have degree $E \geq 1$.

Show that there is a $P \neq 0$ in $\mathbb{C}[X,Y]$ with degree at most $E$ in $X$ and $D$ in $Y$ such that $P(F,G) = 0$."




I've tried to work this out by supposing $$P = \sum_{i=0}^{E} \sum_{j=0}^{D}c_{ij}X^iY^j,$$ and noticing that I have control over the choice of $(1+E)(1+D)$ coefficients. I was hoping to be able to use this information to create a homogeneous system of linear equations to give a non-trivial solution for the $c_{ij}$ that would force $P(F,G) = 0$.



Is this a viable approach? If so, what would my next step be? I asked my professor, and the hint he gave me was to think about the resultant of $F$ and $G$. I know how to construct the matrix whose determinant is the resultant of $F$ and $G$, and I know the resultant is $0$ if $F$ and $G$ share a common factor, but I don't know how that helps us with this problem.



Thanks in advance for any comments, hints, or solutions!






























  • Your approach, I fear, only gives weaker bounds (degrees at most $2E$ and $2D$, not $E$ and $D$).
    – darij grinberg
    Sep 30 at 21:52






  • See mathoverflow.net/a/189344 for the proof using resultants. The idea is to take the resultant of the two polynomials $F\left(T\right) - X$ and $G\left(T\right) - Y$ in the indeterminate $T$ over the ring $\mathbb{C}\left[X,Y\right]$. This resultant is nonzero as a polynomial, but will become $0$ when $F\left(U\right)$ and $G\left(U\right)$ (with $U$ being yet another indeterminate) are substituted for $X$ and $Y$, since then the two polynomials will have the common root $U = T$.
    – darij grinberg
    Sep 30 at 21:58
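The recipe from this comment is easy to sanity-check with SymPy; the polynomials $F$ and $G$ below are my own small example (the thread itself contains no code):

```python
# Sanity-checking the resultant recipe with SymPy; the polynomials F and G
# below are my own small example (not from the thread).
from sympy import symbols, resultant, expand, degree

T, X, Y = symbols('T X Y')
F = T**2 + 1        # plays the role of F(t), with D = 2
G = T**3 + T        # plays the role of G(t), with E = 3

# Resultant of F(T) - X and G(T) - Y with respect to T, over C[X, Y]:
P = resultant(F - X, G - Y, T)

assert P != 0                                # nonzero as a polynomial in X, Y
assert degree(P, X) <= 3                     # deg_X P <= E
assert degree(P, Y) <= 2                     # deg_Y P <= D
assert expand(P.subs({X: F, Y: G})) == 0     # and P(F, G) = 0
```

The assertions confirm exactly the degree bounds asked for in the problem ($\deg_X P \leq E$ and $\deg_Y P \leq D$).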












  • Would you like to post this as an answer so you can receive the bounty on the problem? This answered everything I had in mind.
    – JonHales
    Oct 8 at 0:19







































































abstract-algebra polynomials resultant














edited Sep 28 at 16:57 by amWhy

asked Sep 28 at 16:48 by JonHales








1 Answer



























Here is a proof using resultants, taken mostly from https://mathoverflow.net/questions/189181//189344#189344 . (For a short summary, see one of my comments to the OP.)



$\newcommand{\KK}{\mathbb{K}}
\newcommand{\LL}{\mathbb{L}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ww}{\mathbf{w}}
\newcommand{\eps}{\varepsilon}
\newcommand{\Res}{\operatorname{Res}}
\newcommand{\Syl}{\operatorname{Syl}}
\newcommand{\adj}{\operatorname{adj}}
\newcommand{\id}{\operatorname{id}}
\newcommand{\tilF}{\widetilde{F}}
\newcommand{\tilG}{\widetilde{G}}
\newcommand{\ive}[1]{\left[ #1 \right]}
\newcommand{\tup}[1]{\left( #1 \right)}
\newcommand{\zeroes}[1]{\underbrace{0,0,\ldots,0}_{#1 \text{ zeroes}}}$

We shall prove a more general statement:




Theorem 1. Let $\KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
polynomials in the polynomial ring $\KK\ive{T}$. Let
$d$ and $e$ be nonnegative integers such that $d+e > 0$ and $\deg F \leq d$ and $\deg G \leq e$.
Then, there exists a nonzero polynomial $P\in\KK\ive{X, Y}$ in two indeterminates $X$ and $Y$ such that $\deg_X P\leq e$
and $\deg_Y P\leq d$ and $P\tup{F, G} =0$.




Here and in the following, we are using the following notations:




  • "Ring" always means "associative ring with unity".

  • A ring $R$ is said to be nontrivial if $0 \neq 1$ in $R$.

  • If $R$ is any polynomial in the polynomial ring $\KK\ive{X, Y}$, then $\deg_X R$ denotes the degree of $R$ with respect to the variable $X$ (that is, it denotes the degree of $R$ when $R$ is considered as a polynomial in $\tup{\KK\ive{Y}} \ive{X}$), whereas $\deg_Y R$ denotes the degree of the polynomial $R$ with respect to the variable $Y$.



To prove Theorem 1, we recall the notion of the resultant of two polynomials over a
commutative ring:




Definition. Let $\KK$ be a commutative ring.
Let $P\in \KK\ive{T}$ and $Q\in\KK\ive{T}$ be two polynomials in the polynomial ring $\KK\ive{T}$.
Let $d\in\NN$ and $e\in\NN$ be such that $\deg P\leq d$ and $\deg Q\leq e$.
Thus, write the polynomials $P$ and $Q$ in the forms
\begin{align*}
P & =p_0 +p_1 T+p_2 T^2 +\cdots+p_d T^d \qquad\text{and}\\
Q & =q_0 +q_1 T+q_2 T^2 +\cdots+q_e T^e ,
\end{align*}

where $p_0 ,p_1 ,\ldots,p_d ,q_0 ,q_1 ,\ldots,q_e $ belong to $\KK$.
Then, we let $\Syl_{d,e} \tup{P, Q}$ be the matrix
\begin{equation}
\left(
\begin{array}[c]{c}
\begin{array}[c]{ccccccccc}
p_0 & 0 & 0 & \cdots & 0 & q_0 & 0 & \cdots & 0\\
p_1 & p_0 & 0 & \cdots & 0 & q_1 & q_0 & \cdots & 0\\
\vdots & p_1 & p_0 & \cdots & 0 & \vdots & q_1 & \ddots & \vdots\\
\vdots & \vdots & p_1 & \ddots & \vdots & \vdots & \vdots & \ddots & q_0 \\
p_d & \vdots & \vdots & \ddots & p_0 & \vdots & \vdots & \ddots & q_1 \\
0 & p_d & \vdots & \ddots & p_1 & q_e & \vdots & \ddots & \vdots\\
\vdots & \vdots & \ddots & \ddots & \vdots & 0 & q_e & \ddots & \vdots\\
0 & 0 & 0 & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & p_d & 0 & 0 & \cdots & q_e
\end{array}
\\
\underbrace{\qquad\qquad\qquad\qquad\qquad\qquad\qquad}_{e\text{ columns}}
\underbrace{\qquad\qquad\qquad\qquad\qquad}_{d\text{ columns}}
\end{array}
\right) \in\KK^{\tup{d+e} \times\tup{d+e}};
\end{equation}

this is the $\tup{d+e} \times\tup{d+e}$-matrix whose first $e$ columns have the form
\begin{equation}
\left( \zeroes{k},p_0 ,p_1 ,\ldots ,p_d ,\zeroes{e-1-k}\right) ^{T}
\qquad\text{for }k\in\left\{ 0,1,\ldots,e-1\right\} ,
\end{equation}

and whose last $d$ columns have the form
\begin{equation}
\left( \zeroes{\ell},q_0 ,q_1 ,\ldots,q_e ,\zeroes{d-1-\ell}\right) ^{T}
\qquad\text{for }\ell\in\left\{ 0,1,\ldots,d-1\right\} .
\end{equation}

Furthermore, we define $\Res_{d,e}\tup{P, Q}$ to be the element
\begin{equation}
\det \tup{ \Syl_{d,e}\tup{P, Q} } \in \KK .
\end{equation}

The matrix $\Syl_{d,e}\tup{P, Q}$ is called the Sylvester matrix of $P$ and $Q$ in degrees $d$ and $e$.
Its determinant $\Res_{d,e}\tup{P, Q}$ is called the resultant of $P$ and $Q$ in degrees $d$ and $e$.



It is common to apply this definition to the case when $d=\deg P$ and $e=\deg Q$; in this case, we simply call $\Res_{d,e}\tup{P, Q}$ the resultant of $P$ and $Q$, and denote it by $\Res \tup{P, Q}$.




Here, we take $\NN$ to mean the set $\left\{0,1,2,\ldots\right\}$ of all nonnegative integers.
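As a quick sanity check of this definition (the example polynomials below are my own, not from the answer), one can build $\Syl_{d,e}\tup{P, Q}$ column by column in SymPy and verify that its determinant vanishes precisely when the two polynomials share a common factor:

```python
# Building Syl_{d,e}(P, Q) exactly as in the definition above (columns are
# shifted ascending coefficient vectors); example polynomials are my own.
from sympy import Matrix, Poly, symbols

T = symbols('T')

def sylvester(P, Q):
    """Syl_{d,e}(P, Q) with d = deg P, e = deg Q, built column by column."""
    p = P.all_coeffs()[::-1]          # ascending: [p0, p1, ..., pd]
    q = Q.all_coeffs()[::-1]          # ascending: [q0, q1, ..., qe]
    d, e = len(p) - 1, len(q) - 1
    cols = [[0]*k + p + [0]*(e - 1 - k) for k in range(e)]   # e columns of p's
    cols += [[0]*l + q + [0]*(d - 1 - l) for l in range(d)]  # d columns of q's
    return Matrix(cols).T             # (d+e) x (d+e) matrix

# Common factor (T - 2): the resultant must vanish.
S0 = sylvester(Poly((T - 2)*(T + 1), T), Poly((T - 2)*(T**2 + 3), T))
assert S0.shape == (5, 5)
assert S0.det() == 0

# Coprime polynomials: the resultant is nonzero.
S1 = sylvester(Poly(T**2 + 1, T), Poly(T**3 + 2, T))
assert S1.det() != 0
```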



One of the main properties of resultants is the following:




Theorem 2. Let $\KK$ be a commutative ring.
Let $P\in \KK\ive{T}$ and $Q\in\KK\ive{T}$ be two polynomials in the polynomial ring $\KK\ive{T}$.
Let $d\in\NN$ and $e\in\NN$ be such that $d+e > 0$ and $\deg P\leq d$ and $\deg Q\leq e$.
Let $\LL$ be a commutative $\KK$-algebra, and let $w\in\LL$ satisfy $P\tup{w} =0$ and $Q\tup{w} = 0$.
Then, $\Res_{d,e}\tup{P, Q} =0$ in $\LL$.




Proof of Theorem 2 (sketched). Recall that $\Res_{d,e}\tup{P, Q} =\det \tup{ \Syl_{d,e}\tup{P, Q} }$ (by the definition of $\Res_{d,e}\tup{P, Q}$).



Write the polynomials $P$ and $Q$ in the forms
\begin{align*}
P & =p_0 +p_1 T+p_2 T^2 +\cdots+p_d T^d \qquad\text{and}\\
Q & =q_0 +q_1 T+q_2 T^2 +\cdots+q_e T^e ,
\end{align*}

where $p_0 ,p_1 ,\ldots,p_d ,q_0 ,q_1 ,\ldots,q_e $ belong to
$\KK$.
(We can do this, since $\deg P \leq d$ and $\deg Q \leq e$.)
From $p_0 +p_1 T+p_2 T^2 +\cdots+p_d T^d = P$,
we obtain
$p_0 + p_1 w + p_2 w^2 + \cdots + p_d w^d = P\left(w\right) = 0$.
Similarly,
$q_0 + q_1 w + q_2 w^2 + \cdots + q_e w^e = 0$.



Let $A$ be the matrix $\Syl_{d,e}\tup{P, Q} \in\KK^{\tup{d+e} \times\tup{d+e} }$, regarded as a
matrix in $\LL^{\tup{d+e} \times\tup{d+e} }$ (by
applying the canonical $\KK$-algebra homomorphism $\KK\rightarrow\LL$
to all its entries).

Let $\ww$ be the row vector $\left( w^{0},w^{1},\ldots,w^{d+e-1}\right) \in\LL^{1\times\tup{d+e} }$. Let $\mathbf{0}$ denote
the zero vector in $\LL^{1\times\tup{d+e} }$.



Now, it is easy to see that $\ww A=\mathbf{0}$. (Indeed, for each
$k\in\left\{ 1,2,\ldots,d+e\right\} $, we have
\begin{align*}
& \ww\left( \text{the }k\text{-th column of }A\right) \\
& =
\begin{cases}
p_0 w^{k-1}+p_1 w^k +p_2 w^{k+1}+\cdots+p_d w^{k-1+d}, & \text{if }k\leq e;\\
q_0 w^{k-e-1}+q_1 w^{k-e}+q_2 w^{k-e+1}+\cdots+q_e w^{k-1}, & \text{if }k>e
\end{cases}
\\
& =
\begin{cases}
w^{k-1}\left( p_0 +p_1 w+p_2 w^2 +\cdots+p_d w^d \right) , & \text{if }k\leq e;\\
w^{k-e-1}\left( q_0 +q_1 w+q_2 w^2 +\cdots+q_e w^e\right) , & \text{if }k>e
\end{cases}
\\
& =
\begin{cases}
w^{k-1}\cdot 0, & \text{if }k\leq e;\\
w^{k-e-1}\cdot 0, & \text{if }k>e
\end{cases}
\\
& \qquad\left(
\begin{array}[c]{c}
\text{since }p_0 +p_1 w+p_2 w^2 +\cdots+p_d w^d =0\\
\text{and }q_0 +q_1 w+q_2 w^2 +\cdots+q_e w^e =0
\end{array}
\right) \\
& =0.
\end{align*}

But this means precisely that $\ww A=\mathbf{0}$.)



But $A$ is a square matrix over a commutative ring; thus, the
adjugate $\adj A$ of $A$ satisfies $A\cdot\adj A=\det A\cdot I_{d+e}$
(where $I_{d+e}$ denotes the identity matrix of size $d+e$).
Hence, $\ww\underbrace{A\cdot\adj A}_{=\det A\cdot I_{d+e}}=\ww\det A\cdot I_{d+e}=\det A\cdot\ww$. Comparing this
with $\underbrace{\ww A}_{=\mathbf{0}}\cdot\adj A=\mathbf{0}\cdot\adj A=\mathbf{0}$, we obtain
$\det A\cdot\ww=\mathbf{0}$.



But $d+e > 0$; thus, the row vector $\ww$ has a well-defined first entry.
This first entry is $w^0 = 1$.
Hence, the first entry of the row vector $\det A\cdot\ww$ is $\det A \cdot 1 = \det A$.
Hence, from $\det A\cdot\ww=\mathbf{0}$, we conclude that $\det A=0$.
Comparing this with
\begin{equation}
\det\underbrace{A}_{=\Syl_{d,e}\tup{P, Q}} =\det \tup{ \Syl_{d,e}\tup{P, Q} }
=\Res_{d,e}\tup{P, Q} ,
\end{equation}

we obtain $\Res_{d,e}\tup{P, Q} =0$ (in $\LL$). This proves Theorem 2. $\blacksquare$
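The pivotal identity $\ww A = \mathbf{0}$ from this proof can also be checked numerically; here is a SymPy sketch with my own example polynomials sharing the root $w = 3$:

```python
# Checking the row-vector identity w A = 0 from the proof of Theorem 2,
# where w = (w^0, ..., w^{d+e-1}); example polynomials are my own.
from sympy import Matrix, Poly, symbols, zeros

T = symbols('T')
w = 3                                # common root of P and Q below
P = Poly((T - 3) * (T + 2), T)       # d = 2, P(3) = 0
Q = Poly((T - 3) * (T**2 + 1), T)    # e = 3, Q(3) = 0
p = P.all_coeffs()[::-1]             # ascending: [p0, p1, p2]
q = Q.all_coeffs()[::-1]             # ascending: [q0, q1, q2, q3]
d, e = len(p) - 1, len(q) - 1

# Columns of Syl_{d,e}(P, Q), exactly as in the definition:
cols = [[0]*k + p + [0]*(e - 1 - k) for k in range(e)]
cols += [[0]*l + q + [0]*(d - 1 - l) for l in range(d)]
A = Matrix(cols).T                   # the (d+e) x (d+e) Sylvester matrix

ww = Matrix([[w**i for i in range(d + e)]])  # row vector (w^0, ..., w^{d+e-1})
assert ww * A == zeros(1, d + e)             # the identity w A = 0
assert A.det() == 0                          # hence Res_{d,e}(P, Q) = 0
```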



Theorem 2 (which I have proven in detail to stress how the proof uses nothing
about $\LL$ other than its commutativity) was just the meek tip of the
resultant iceberg. Here are some further sources with deeper results:




  • Antoine Chambert-Loir, Résultants (minor errata).


  • Svante Janson, Resultant and discriminant of polynomials.


  • Gerald Myerson, On resultants, Proc. Amer. Math. Soc. 89 (1983), 419--420.



Some of these sources use the matrix $\left( \Syl_{d,e}\tup{P, Q} \right) ^{T}$ instead of our
$\Syl_{d,e}\tup{P, Q}$, but of course this
matrix has the same determinant as $\Syl_{d,e}\tup{P, Q}$, so that their definition of a resultant is the same as mine.



We are not yet ready to prove Theorem 1 directly. Instead, let us prove a
weaker version of Theorem 1:




Lemma 3. Let $\KK$ be a commutative ring. Let $F$ and $G$ be two
polynomials in the polynomial ring $\KK\ive{T}$. Let
$d$ and $e$ be nonnegative integers such that $d+e > 0$ and $\deg F \leq d$ and $\deg G \leq e$.
Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +\cdots+g_e T^e $, where $g_0 ,g_1 ,\ldots,g_e \in\KK$.
Assume that $g_e^d \neq 0$.
Then, there exists a nonzero polynomial $P\in\KK\ive{X, Y}$
in two indeterminates $X$ and $Y$ such that $\deg_X P\leq e$
and $\deg_Y P\leq d$ and $P\tup{F, G} =0$.




Proof of Lemma 3 (sketched). Let $\widetilde{\KK}$ be the
commutative ring $\KK\ive{X, Y}$. Define two polynomials
$\tilF \in\widetilde{\KK}\ive{T}$ and $\tilG \in\widetilde{\KK}\ive{T}$ by
\begin{equation}
\tilF =F-X=F\tup{T} -X\qquad\text{and}\qquad\tilG =G-Y=G\tup{T} -Y.
\end{equation}

Note that $X$ and $Y$ have degree $0$ when considered as polynomials in
$\widetilde{\KK}\ive{T}$ (since $X$ and $Y$ belong to the
ring $\widetilde{\KK}$). Thus, these new polynomials $\tilF = F - X$
and $\tilG = G - Y$ have degrees $\deg\tilF \leq d$ (because
$\deg X = 0 \leq d$ and $\deg F \leq d$) and
$\deg\tilG \leq e$ (similarly).
Hence, the resultant $\Res_{d,e}\tup{\tilF, \tilG} \in\widetilde{\KK}$ of these polynomials $\tilF$ and
$\tilG$ in degrees $d$ and $e$ is well-defined. Let us denote this
resultant $\Res_{d,e}\tup{\tilF, \tilG}$ by $P$. Hence,
\begin{equation}
P=\Res_{d,e}\tup{\tilF, \tilG} \in\widetilde{\KK}=\KK\ive{X, Y} .
\end{equation}

Our next goal is to show that $P$ is a nonzero polynomial and satisfies
$\deg_X P\leq e$ and $\deg_Y P\leq d$ and $P\tup{F, G} =0$. Once
this is shown, Lemma 3 will obviously follow.

We have
\begin{equation}
P=\Res_{d,e}\tup{\tilF, \tilG} =\det\left( \Syl_{d,e}\tup{\tilF, \tilG} \right)
\label{darij1.pf.t1.P=det}
\tag{1}
\end{equation}

(by the definition of $\Res_{d,e}\tup{\tilF, \tilG}$).



Write the polynomial $F$ in the form $F=f_0 +f_1 T+f_2 T^2 +\cdots+f_d T^d $, where $f_0 ,f_1 ,\ldots,f_d \in\KK$. (This can be
done, since $\deg F \leq d$.)

Recall that $g_e^d \neq 0$. Thus, $\left( -1\right) ^e g_e^d \neq 0$.



For each $p\in\NN$, we let $S_{p}$ be the group of all permutations of
the set $\left\{ 1,2,\ldots,p\right\} $.



Now,
\begin{align*}
\tilF & =F-X=\left( f_0 +f_1 T+f_2 T^2 +\cdots+f_d T^d \right) -X\\
& \qquad\left( \text{since }F=f_0 +f_1 T+f_2 T^2 +\cdots+f_d T^d \right) \\
& =\tup{f_0 - X} +f_1 T+f_2 T^2 +\cdots+f_d T^d .
\end{align*}

Thus, $f_0 -X,f_1 ,f_2 ,\ldots,f_d $ are the coefficients of the
polynomial $\tilF \in\widetilde{\KK}\ive{T}$ (since
$f_0 -X\in\widetilde{\KK}$). Similarly, $g_0 -Y,g_1 ,g_2 ,\ldots,g_e $
are the coefficients of the polynomial $\tilG \in\widetilde{\KK}\ive{T}$. Hence, the definition of the
matrix $\Syl_{d,e}\tup{\tilF, \tilG}$ yields
\begin{align}
&\Syl_{d,e}\tup{\tilF, \tilG} \\
&=\left(
\begin{array}[c]{c}
\begin{array}[c]{ccccccccc}
f_0 -X & 0 & 0 & \cdots & 0 & g_0 -Y & 0 & \cdots & 0\\
f_1 & f_0 -X & 0 & \cdots & 0 & g_1 & g_0 -Y & \cdots & 0\\
\vdots & f_1 & f_0-X & \cdots & 0 & \vdots & g_1 & \ddots & \vdots\\
\vdots & \vdots & f_1 & \ddots & \vdots & \vdots & \vdots & \ddots & g_0 -Y\\
f_d & \vdots & \vdots & \ddots & f_0 -X & \vdots & \vdots & \ddots & g_1 \\
0 & f_d & \vdots & \ddots & f_1 & g_e & \vdots & \ddots & \vdots\\
\vdots & \vdots & \ddots & \ddots & \vdots & 0 & g_e & \ddots & \vdots\\
0 & 0 & 0 & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & f_d & 0 & 0 & \cdots & g_e
\end{array}
\\
\underbrace{\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad}_{e\text{ columns}}
\underbrace{\qquad \qquad \qquad \qquad \qquad \qquad \qquad}_{d\text{ columns}}
\end{array}
\right) \\
&\in\widetilde{\KK}^{\tup{d+e} \times\tup{d+e} }.
\end{align}

Now, let us use this explicit form of $\Syl_{d,e} \tup{\tilF, \tilG}$
to compute $\det\left( \Syl_{d,e}\tup{\tilF, \tilG} \right)$ using
the Leibniz formula.
The Leibniz formula yields
\begin{equation}
\det\left( \Syl_{d,e} \tup{ \tilF, \tilG } \right)
=\sum_{\sigma\in S_{d+e}}a_{\sigma},
\label{darij1.pf.t1.det=sum}
\tag{2}
\end{equation}

where for each permutation $\sigma\in S_{d+e}$, the addend $a_{\sigma}$ is a
product of entries of $\Syl_{d,e} \tup{\tilF, \tilG}$, possibly with a minus sign. More
precisely,
\begin{equation}
a_{\sigma}=\tup{-1}^{\sigma}\prod_{i=1}^{d+e}\left( \text{the }\left( i,\sigma\left( i\right) \right) \text{-th entry of }\Syl_{d,e}\tup{\tilF, \tilG}\right)
\end{equation}

for each $\sigma\in S_{d+e}$ (where $\tup{-1}^{\sigma}$ denotes the
sign of the permutation $\sigma$).

Now, \eqref{darij1.pf.t1.P=det} becomes
\begin{equation}
P=\det\left( \Syl_{d,e}\tup{\tilF, \tilG} \right)
=\sum_{\sigma\in S_{d+e}}a_{\sigma}
\label{darij1.pf.t1.P=sum}
\tag{3}
\end{equation}

(by \eqref{darij1.pf.t1.det=sum}).



All entries of the matrix $\Syl_{d,e}\tup{\tilF, \tilG}$ are polynomials in the two
indeterminates $X$ and $Y$; but only $d+e$ of these entries are non-constant
polynomials (since all of $f_0 ,f_1 ,\ldots,f_d ,g_0 ,g_1 ,\ldots,g_e $
belong to $\KK$). More precisely, only $e$ entries of
$\Syl_{d,e}\tup{\tilF, \tilG}$
have non-zero degree with respect to the variable $X$ (namely, the first $e$
entries of the diagonal of $\Syl_{d,e}\tup{\tilF, \tilG}$), and these $e$ entries have degree $1$
with respect to this variable. Thus, for each $\sigma\in S_{d+e}$, the product
$a_{\sigma}$ contains at most $e$ many factors that have degree $1$ with
respect to the variable $X$, while all its remaining factors have degree $0$
with respect to this variable. Therefore, for each $\sigma\in S_{d+e}$, the
product $a_{\sigma}$ has degree $\leq e\cdot1=e$ with respect to the variable
$X$. Hence, the sum $\sum_{\sigma\in S_{d+e}}a_{\sigma}$ of all these products
$a_{\sigma}$ also has degree $\leq e$ with respect to the variable $X$. In
other words, $\deg_X \left( \sum_{\sigma\in S_{d+e}}a_{\sigma}\right) \leq e$. In view of \eqref{darij1.pf.t1.P=sum}, this rewrites as $\deg_X P\leq e$.
Similarly, $\deg_Y P\leq d$ (since only $d$ entries of the matrix
$\Syl_{d,e}\tup{\tilF, \tilG}$
have non-zero degree with respect to the variable $Y$, and these $d$ entries
have degree $1$ with respect to this variable).



Next, we shall show that the polynomial $P$ is nonzero. Indeed, let us
consider all elements of $\widetilde{\KK}$ as polynomials in the
variable $X$ over the ring $\KK\ive{Y} $. For each
permutation $\sigma\in S_{d+e}$, the product $a_{\sigma}$ (thus considered)
has degree $\leq e$ (as we have previously shown). Let us now compute the
coefficient of $X^e $ in this product $a_{\sigma}$. There are three possible cases:




  • Case 1: The permutation $\sigma\in S_{d+e}$ does not satisfy $\left(\sigma\left( i\right) =i\text{ for each }i\in\left\{ 1,2,\ldots,e\right\}\right)$. Thus, the product $a_{\sigma}$ has strictly fewer than $e$
    factors that have degree $1$ with respect to the variable $X$, while all its
    remaining factors have degree $0$ with respect to this variable. Thus, the
    whole product $a_{\sigma}$ has degree $<e$ with respect to the variable $X$.
    Hence, the coefficient of $X^e $ in this product $a_{\sigma}$ is $0$.

  • Case 2: The permutation $\sigma\in S_{d+e}$ satisfies $\left(\sigma\left( i\right) =i\text{ for each }i\in\left\{ 1,2,\ldots,e\right\}\right)$, but is not the identity map $\id\in S_{d+e}$. Thus,
    there must exist at least one $i\in\left\{ 1,2,\ldots,d+e\right\} $ such
    that $\sigma\left( i\right) <i$. Consider such an $i$, and notice that it
    must satisfy $i>e$ and $\sigma\left( i\right) >e$; hence, the $\left(i,\sigma\left( i\right) \right)$-th entry of
    $\Syl_{d,e}\tup{\tilF, \tilG}$ is $0$. Thus, the
    whole product $a_{\sigma}$ is $0$ (since the latter entry is a factor in this
    product). Thus, the coefficient of $X^e $ in this product $a_{\sigma}$ is $0$.

  • Case 3: The permutation $\sigma\in S_{d+e}$ is the identity map
    $\id\in S_{d+e}$. Thus, the product $a_{\sigma}$ is $\left(f_0 -X\right) ^e g_e^d $ (since $\tup{-1}^{\id}=1$). Hence, the coefficient of $X^e $ in this product $a_{\sigma}$ is
    $\tup{-1}^e g_e^d $.



Summarizing, we thus conclude that the coefficient of $X^e $ in the product
$a_{\sigma}$ is $0$ unless $\sigma=\id$, in which case it is
$\tup{-1}^e g_e^d $. Hence, the coefficient of $X^e $ in the
sum $\sum_{\sigma\in S_{d+e}}a_{\sigma}$ is $\tup{-1}^e g_e^d \neq0$. Therefore, $\sum_{\sigma\in S_{d+e}}a_{\sigma}\neq0$. In view of
\eqref{darij1.pf.t1.P=sum}, this rewrites as $P\neq0$. In other words, the
polynomial $P$ is nonzero.



Finally, it remains to prove that $P\tup{F, G} =0$. In order to do
this, we let $\LL$ be the polynomial ring $\KK\ive{U}$
in a new indeterminate $U$. We let $\varphi:\KK\ive{X, Y}\rightarrow\LL$ be the unique $\KK$-algebra homomorphism that
sends $X$ to $F\tup{U}$ and sends $Y$ to $G\tup{U}$.
(This is well-defined by the universal property of the polynomial ring
$\KK\ive{X, Y}$.)
Note that $\varphi$ is a $\KK$-algebra homomorphism from
$\widetilde{\KK}$ to $\LL$ (since $\KK\ive{X, Y}=\widetilde{\KK}$). Thus, $\LL$ becomes a $\widetilde{\KK}$-algebra via this homomorphism $\varphi$.



Now, recall that the polynomial $\tilF \in\widetilde{\KK}\ive{T}$
was defined by $\tilF =F-X$. Hence, $\tilF \left(U\right) =F\tup{U} -\varphi\tup{X}$. (Indeed, when we
regard $X$ as an element of $\widetilde{\KK}\ive{T}$, the
polynomial $X$ is simply a constant, and thus evaluating it at $U$ yields the
canonical image of $X$ in $\LL$, which is $\varphi\tup{X}$.)
But $\varphi\tup{X} =F\tup{U}$ (by the definition of
$\varphi$).
Hence, $\tilF \tup{U} =F\tup{U} -\varphi\tup{X} =0$ (since $\varphi\tup{X} =F\tup{U}$). Similarly, $\tilG \tup{U} =0$.



Thus, the element $U\in\LL$ satisfies $\tilF \tup{U}=0$ and $\tilG \tup{U} =0$. Hence, Theorem 2 (applied to
$\widetilde{\KK}$, $\tilF$, $\tilG$ and $U$ instead of $\KK$, $P$, $Q$ and $w$) yields that
$\Res_{d,e}\tup{\tilF, \tilG} = 0$ in $\LL$.
In other words, $\varphi\tup{ \Res_{d,e}\tup{\tilF, \tilG} } =0$. In view of
$\Res_{d,e}\tup{\tilF, \tilG} =P$, this rewrites as $\varphi\tup{P} =0$.



But recall that $\varphi$ is the $\KK$-algebra homomorphism that sends
$X$ to $F\tup{U}$ and sends $Y$ to $G\tup{U}$. Hence, it
sends any polynomial $Q\in\KK\ive{X, Y}$ to $Q \tup{ F\tup{U}, G\tup{U} }$. Applying this to $Q=P$, we
conclude that it sends $P$ to $P \tup{ F\tup{U}, G\tup{U} }$.
In other words, $\varphi\tup{P} =P \tup{ F\tup{U}, G\tup{U} }$;
hence, $P \tup{ F\tup{U}, G\tup{U} } =\varphi\tup{P} =0$.



Now, $F\tup{U}$ and $G\tup{U}$ are polynomials in the
indeterminate $U$ over $\KK$. If we rename the indeterminate $U$ as
$T$, then these polynomials $F\tup{U}$ and $G\tup{U}$
become $F\tup{T}$ and $G\tup{T}$, and therefore the
polynomial $P\tup{ F\tup{U}, G\tup{U} }$ becomes
$P\tup{ F\tup{T}, G\tup{T} }$.
Hence, $P\tup{ F\tup{T}, G\tup{T} } =0$ (since $P\tup{ F\tup{U}, G\tup{U} } =0$).
In other words, $P \tup{F, G} =0$ (since $F\tup{T} =F$ and $G\tup{T} =G$).
This completes the proof of Lemma 3. $\blacksquare$




Lemma 4. (a) Theorem 1 holds when $d = 0$.



(b) Theorem 1 holds when $e = 0$.




Proof of Lemma 4. (a) Assume that $d = 0$.
Thus, $d + e > 0$ rewrites as $e > 0$. Hence, $e \geq 1$.
But the polynomial $F$ is constant (since $\deg F \leq d = 0$).
In other words, $F = f$ for some $f \in \KK$. Consider this $f$.
Now, let $Q$ be the polynomial $X - f \in \KK\ive{X, Y}$.
Then, $Q$ is nonzero and satisfies
$\deg_X Q = 1 \leq e$ (since $e \geq 1$) and
$\deg_Y Q = 0 \leq d$ and
$Q\left(F, G\right) = F - f = 0$ (since $F = f$).
Hence, there exists a nonzero polynomial $P\in\KK\ive{X, Y}$
in two indeterminates $X$ and $Y$ such that $\deg_X P\leq e$
and $\deg_Y P\leq d$ and $P\tup{F, G} =0$
(namely, $P = Q$). In other words, Theorem 1 holds (under
our assumption that $d = 0$). This proves Lemma 4 (a).



(b) The proof of Lemma 4 (b) is analogous to
our above proof of Lemma 4 (a). $blacksquare$



Now, we can prove Theorem 1 at last:



Proof of Theorem 1. We shall prove Theorem 1 by induction on $e$.



The induction base is the case when $e = 0$; this case follows
from Lemma 4 (b).



For the induction step, we fix a positive integer $\eps$.
Assume (as the induction hypothesis) that Theorem 1 holds for $e = \eps - 1$.
We must now prove that Theorem 1 holds for $e = \eps$.



Let $\KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
polynomials in the polynomial ring $\KK\ive{T}$. Let
$d$ be a nonnegative integer such that $d+\eps > 0$ and $\deg F \leq d$ and $\deg G \leq \eps$.
Our goal is now to prove that the claim of Theorem 1 holds for $e = \eps$.
In other words, our goal is to prove that there exists a nonzero polynomial $P\in\KK\ive{X, Y}$
in two indeterminates $X$ and $Y$ such that $\deg_X P\leq \eps$
and $\deg_Y P\leq d$ and $P\tup{F, G} =0$.



Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +\cdots+g_{\eps}T^{\eps}$, where $g_0 ,g_1 ,\ldots,g_{\eps}\in\KK$.
(This can be done, since $\deg G \leq \eps$.)
If $g_{\eps}^d \neq 0$, then our goal follows immediately
by applying Lemma 3 to $e = \eps$.
Thus, for the rest of this induction step, we WLOG assume that $g_{\eps}^d = 0$.
Hence, there exists a positive integer $m$ such that $g_{\eps}^m = 0$ (namely, $m = d$; indeed, $d > 0$ here, since $d = 0$ would yield $g_{\eps}^d = 1 \neq 0$ in the nontrivial ring $\KK$, contradicting $g_{\eps}^d = 0$).
Thus, there exists a smallest such $m$.
Consider this smallest $m$.
Then, $g_{\eps}^m = 0$, but
\begin{align}
\text{every positive integer $\ell < m$ satisfies $g_{\eps}^{\ell} \neq 0$.}
\label{darij1.pf.t1.epsilon-ell}
\tag{4}
\end{align}



We claim that $g_{\eps}^{m-1} \neq 0$. Indeed, if $m-1$ is a
positive integer, then this follows from \eqref{darij1.pf.t1.epsilon-ell} (applied to $\ell = m-1$);
otherwise, it follows from the fact that $g_{\eps}^0 = 1 \neq 0$
(since the ring $\KK$ is nontrivial).



Now recall again that our goal is to prove that the claim of Theorem 1 holds for $e = \eps$.
If $d = 0$, then this goal follows from Lemma 4 (a).
Hence, for the rest of this induction step, we WLOG assume that $d \neq 0$.
Hence, $d > 0$ (since $d$ is a nonnegative integer).



We have $\eps \geq 1$ (since $\eps$ is a positive integer), thus
$\eps - 1 \geq 0$. Hence, $d + \tup{\eps-1} \geq d > 0$.



Let $I$ be the subset $\left\{x \in \KK \mid g_{\eps}^{m-1} x = 0 \right\}$ of $\KK$.
Then, $I$ is an ideal of $\KK$ (namely, it is the
annihilator of the subset $\left\{g_{\eps}^{m-1}\right\}$ of $\KK$);
thus, $\KK / I$ is a commutative $\KK$-algebra.
Denote this commutative $\KK$-algebra $\KK / I$ by $\LL$.
Let $\pi$ be the canonical projection $\KK \to \LL$.
Of course, $\pi$ is a surjective $\KK$-algebra homomorphism.



For any $a in KK$, we will denote the image of $a$ under $pi$ by $overline{a}$.



The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{T} to LLive{T}$ (sending $T$ to $T$).
For any $a in KKive{T}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$ (sending $X$ and $Y$ to $X$ and $Y$).
For any $a in KKive{X, Y}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



We have $g_{eps}^{m-1} g_{eps} = g_{eps}^m = 0$, so that $g_{eps} in I$ (by the definition of $I$);
hence, the residue class $overline{g_{eps}}$ of $g_{eps}$ modulo the ideal $I$ is $0$.



We have $g_{eps}^{m-1} cdot 1 = g_{eps}^{m-1} neq 0$ in $KK$,
and thus $1 notin I$ (by the definition of $I$).
Hence, the ideal $I$ is not the whole ring $KK$.
Thus, the quotient ring $KK / I = LL$ is nontrivial.
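The quotient construction above can be made concrete. Here is a minimal Python sketch, where the ring $mathbb{Z}/4$ and the element $g = 2$ are our own illustrative choices (not part of the proof): $2$ is nilpotent but nonzero, its annihilator ideal is $I = left{0, 2right}$, and the quotient $LL$ is the nontrivial ring $mathbb{Z}/2$, in which the image of $g$ vanishes.

```python
# Illustrative sanity check (our own choice of ring, not part of the proof):
# KK = Z/4Z with g = 2, a nonzero nilpotent leading coefficient.
n, g = 4, 2

# Find the smallest positive m with g^m = 0 in Z/nZ (here 2^1 != 0, 2^2 = 0).
m = 1
while pow(g, m, n) != 0:
    m += 1
assert m == 2 and pow(g, m - 1, n) != 0   # g^(m-1) = 2 is nonzero

# I = {x in KK : g^(m-1) * x = 0} is the annihilator ideal of g^(m-1).
I = sorted(x for x in range(n) if (pow(g, m - 1, n) * x) % n == 0)
assert I == [0, 2]
assert 1 not in I      # I is a proper ideal, so LL = KK/I is nontrivial
assert g in I          # g lies in I, so its image in LL is 0

# LL = KK/I has n / |I| = 2 elements; it is (isomorphic to) Z/2Z.
cosets = {frozenset((x + i) % n for i in I) for x in range(n)}
assert len(cosets) == 2
```

Passing from $mathbb{Z}/4$ to $mathbb{Z}/2$ is exactly what lets the induction drop the top coefficient of $G$.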



But $G=g_0 +g_1 T+g_2 T^2 +cdots +g_{eps}T^{eps}$
and thus
begin{align}
overline{G}
&= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps}} T^{eps} \
&= left( overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} right)
+ underbrace{overline{g_{eps}}}_{= 0} T^{eps} \
&= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} ,
end{align}

so that $deg overline{G} leq eps-1$.
Also, $deg overline{F} leq deg F leq d$.
But the induction hypothesis tells us that Theorem 1 holds for $e = eps - 1$.
Hence, we can apply Theorem 1 to $LL$, $overline{F}$, $overline{G}$ and $eps - 1$
instead of $KK$, $F$, $G$ and $e$.
We thus conclude that there exists a nonzero polynomial $Pin LLive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X P leq eps - 1$ and $deg_Y P leq d$ and $Pleft( overline{F}, overline{G} right) =0$.
Consider this polynomial $P$, and denote it by $R$.
Thus, $R in LL ive{X, Y}$ is a nonzero polynomial in two indeterminates $X$ and $Y$ and satisfies $deg_X R leq eps - 1$ and $deg_Y R leq d$ and $R left( overline{F}, overline{G} right) =0$.



Clearly, there exists a polynomial $Q in KKive{X, Y}$ in two
indeterminates $X$ and $Y$ that satisfies $deg_X Q = deg_X R$ and
$deg_Y Q = deg_Y R$ and $overline{Q} = R$.
(Indeed, we can construct such a $Q$ as follows: Write
$R$ in the form
$R = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} r_{i, j} X^i Y^j$
for some coefficients $r_{i, j} in LL$.
For each pair $left(i, jright)$, pick some
$p_{i, j} in KK$ such that $overline{p_{i, j}} = r_{i, j}$
(this can be done, since the homomorphism $pi : KK to LL$ is surjective).
Then, set $Q = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} p_{i, j} X^i Y^j$.
It is clear that this polynomial $Q$ satisfies $deg_X Q = deg_X R$ and
$deg_Y Q = deg_Y R$ and $overline{Q} = R$.)



We have $overline{Q left(F, Gright)} = underbrace{overline{Q}}_{=R} left( overline{F}, overline{G} right) = R left( overline{F}, overline{G} right) = 0$.
In other words, the polynomial $Q left(F, Gright) in KKive{T}$
lies in the kernel of the canonical
$KK$-algebra homomorphism $KKive{T} to LLive{T}$.
This means that each coefficient of this
polynomial $Q left(F, Gright) in KKive{T}$
lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
In other words, each coefficient of this
polynomial $Q left(F, Gright) in KKive{T}$ lies in $I$
(since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
is $I$).
Hence, each coefficient $c$ of this
polynomial $Q left(F, Gright) in KKive{T}$
satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
Therefore, $g_{eps}^{m-1} Q left(F, Gright) = 0$.
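The step from "every coefficient lies in $I$" to "$g_{eps}^{m-1} Q tup{F, G} = 0$" is just coefficientwise multiplication. A tiny Python sketch, again using the illustrative ring $mathbb{Z}/4$ (our own choice), where $I = left{0, 2right}$ is the annihilator of $2$:

```python
# In Z/4Z, the annihilator of 2 is I = {0, 2}: multiplying any polynomial
# whose coefficients all lie in I by 2 kills the whole polynomial.
n = 4
I = {0, 2}
poly = [2, 0, 2, 2]                  # example coefficients, all in I
assert all(c in I for c in poly)
scaled = [(2 * c) % n for c in poly]
assert scaled == [0, 0, 0, 0]        # 2 * poly = 0 in (Z/4Z)[T]
```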



On the other hand, $overline{Q} = R$ is nonzero.
In other words, the polynomial $Q in KKive{X, Y}$ does not lie
in the kernel of the canonical
$KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$.
This means that not every coefficient of this
polynomial $Q in KKive{X, Y}$
lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
In other words, not every coefficient of this
polynomial $Q in KKive{X, Y}$ lies in $I$
(since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
is $I$).
Hence, not every coefficient $c$ of this
polynomial $Q in KKive{X, Y}$
satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
Therefore, $g_{eps}^{m-1} Q neq 0$.
So $g_{eps}^{m-1} Q in KKive{X, Y}$ is a nonzero polynomial
in two indeterminates $X$ and $Y$ and satisfies
$deg_X left( g_{eps}^{m-1} Q right) leq deg_X Q = deg_X R leq eps - 1 leq eps$
and
$deg_Y left( g_{eps}^{m-1} Q right) leq deg_Y Q = deg_Y R leq d$
and $left(g_{eps}^{m-1} Q right) left(F, Gright) = g_{eps}^{m-1} Q left(F, Gright) = 0$.
Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
and $deg_Y Pleq d$ and $Ptup{F, G} =0$
(namely, $P = g_{eps}^{m-1} Q$).
We have thus reached our goal.



So we have proven that Theorem 1 holds for $e = eps$.
This completes the induction step. Thus, Theorem 1 is proven by induction. $blacksquare$
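The construction from Lemma 3 (invoked above for the case $g_{eps}^d neq 0$) can also be checked mechanically. Below is a self-contained Python sketch using only the standard library; the concrete polynomials $F = T^2 + 1$ and $G = T^2 + T$ over $mathbb{Q}$ are our own illustrative choices. It builds the Sylvester matrix of $F - X$ and $G - Y$, expands its determinant $P = Res_{d,e}tup{F - X, G - Y}$ by the Leibniz formula, and verifies that $P$ is nonzero, that $deg_X P leq e$ and $deg_Y P leq d$, and that $Ptup{F, G} = 0$.

```python
from fractions import Fraction
from itertools import permutations

# Bivariate polynomials over Q as dicts {(deg_X, deg_Y): Fraction}.
def badd(p, q):
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, Fraction(0)) + v
    return {k: v for k, v in r.items() if v != 0}

def bmul(p, q):
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), Fraction(0)) + a * b
    return {k: v for k, v in r.items() if v != 0}

def perm_sign(perm):
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(M):
    # Leibniz expansion: fine for the tiny Sylvester matrix below.
    total = {}
    for perm in permutations(range(len(M))):
        term = {(0, 0): Fraction(perm_sign(perm))}
        for i in range(len(M)):
            term = bmul(term, M[i][perm[i]])
        total = badd(total, term)
    return total

# Example polynomials (our own choice): F = T^2 + 1, G = T^2 + T.
f = [Fraction(1), Fraction(0), Fraction(1)]   # f_0, ..., f_d
g = [Fraction(0), Fraction(1), Fraction(1)]   # g_0, ..., g_e
d, e = 2, 2

# Coefficients of F~ = F - X and G~ = G - Y, viewed in Q[X, Y].
fc = [{(0, 0): f[0], (1, 0): Fraction(-1)}] + [{(0, 0): c} for c in f[1:]]
gc = [{(0, 0): g[0], (0, 1): Fraction(-1)}] + [{(0, 0): c} for c in g[1:]]

# Sylvester matrix Syl_{d,e}(F~, G~): the first e columns carry shifted
# copies of F~'s coefficients, the last d columns those of G~'s.
M = [[{} for _ in range(d + e)] for _ in range(d + e)]
for k in range(e):
    for i, c in enumerate(fc):
        M[k + i][k] = c
for l in range(d):
    for i, c in enumerate(gc):
        M[l + i][e + l] = c

P = det(M)   # P = Res_{d,e}(F - X, G - Y) in Q[X, Y]
assert P != {}                          # P is nonzero
assert max(i for i, j in P) <= e        # deg_X P <= e
assert max(j for i, j in P) <= d        # deg_Y P <= d

# Verify P(F, G) = 0 in Q[T] via univariate polynomial arithmetic.
def umul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def upow(p, k):
    r = [Fraction(1)]
    for _ in range(k):
        r = umul(r, p)
    return r

comp = [Fraction(0)]
for (i, j), c in P.items():
    term = [c * x for x in umul(upow(f, i), upow(g, j))]
    longer = max(len(comp), len(term))
    comp = [(comp[t] if t < len(comp) else Fraction(0))
            + (term[t] if t < len(term) else Fraction(0))
            for t in range(longer)]
assert all(c == 0 for c in comp)        # indeed P(F, G) = 0
```

(Any overall sign convention for the Sylvester determinant is immaterial here: the degree bounds and the vanishing of $Ptup{F, G}$ hold either way.)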






    Here is a proof using resultants, taken mostly from https://mathoverflow.net/questions/189181//189344#189344 . (For a short summary, see one of my comments to the OP.)



    $newcommand{KK}{mathbb{K}}
    newcommand{LL}{mathbb{L}}
    newcommand{NN}{mathbb{N}}
    newcommand{ww}{mathbf{w}}
    newcommand{eps}{varepsilon}
    newcommand{Res}{operatorname{Res}}
    newcommand{Syl}{operatorname{Syl}}
    newcommand{adj}{operatorname{adj}}
    newcommand{id}{operatorname{id}}
    newcommand{tilF}{widetilde{F}}
    newcommand{tilG}{widetilde{G}}
    newcommand{ive}[1]{left[ #1 right]}
    newcommand{tup}[1]{left( #1 right)}
    newcommand{zeroes}[1]{underbrace{0,0,ldots,0}_{#1 text{ zeroes}}}$

    We shall prove a more general statement:




    Theorem 1. Let $KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
    polynomials in the polynomial ring $KK ive{T}$. Let
    $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
    Then, there exists a nonzero polynomial $PinKK ive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
    and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




    Here and in the following, we are using the following notations:




    • "Ring" always means "associative ring with unity".


    • A ring $R$ is said to be nontrivial if $0 neq 1$ in $R$.


    • If $R$ is any polynomial in the polynomial ring $KK ive{X, Y}$, then $deg_X R$ denotes the degree of $R$ with respect to the variable $X$ (that is, it denotes the degree of $R$ when $R$ is considered as a polynomial in $tup{KK ive{Y}} ive{X} $), whereas $deg_Y R$ denotes the degree of the polynomial $R$ with respect to the variable $Y$.



    To prove Theorem 1, we recall the notion of the resultant of two polynomials over a
    commutative ring:




    Definition. Let $KK$ be a commutative ring.
    Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
    Let $dinNN$ and $einNN$ be such that $deg Pleq d$ and $deg Qleq e$.
    Thus, write the polynomials $P$ and $Q$ in the forms
    begin{align*}
    P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
    Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
    end{align*}

    where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to $KK$.
    Then, we let $Syl_{d,e} tup{P, Q}$ be the matrix
    begin{equation}
    left(
    begin{array}[c]{c}
    begin{array}[c]{ccccccccc}
    p_0 & 0 & 0 & cdots & 0 & q_0 & 0 & cdots & 0\
    p_1 & p_0 & 0 & cdots & 0 & q_1 & q_0 & cdots & 0\
    vdots & p_1 & p_0 & cdots & 0 & vdots & q_1 & ddots & vdots\
    vdots & vdots & p_1 & ddots & vdots & vdots & vdots & ddots &
    q_0 \
    p_d & vdots & vdots & ddots & p_0 & vdots & vdots & ddots & q_1 \
    0 & p_d & vdots & ddots & p_1 & q_e & vdots & ddots & vdots\
    vdots & vdots & ddots & ddots & vdots & 0 & q_e & ddots & vdots\
    0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
    0 & 0 & 0 & cdots & p_d & 0 & 0 & cdots & q_e
    end{array}
    \
    underbrace{ }_{etext{ columns}}
    underbrace{ }_{dtext{ columns}}
    end{array}
    right) inKK^{tup{d+e} timestup{d+e}};
    end{equation}

    this is the $tup{d+e} timestup{d+e}$-matrix whose first $e$ columns have the form
    begin{equation}
    left( zeroes{k},p_0 ,p_1 ,ldots ,p_d ,zeroes{e-1-k}right) ^{T}
    qquadtext{for }kinleft{ 0,1,ldots,e-1right} ,
    end{equation}

    and whose last $d$ columns have the form
    begin{equation}
    left( zeroes{ell},q_0 ,q_1 ,ldots,q_e ,zeroes{d-1-ell}right) ^{T}
    qquadtext{for }ellinleft{ 0,1,ldots,d-1right} .
    end{equation}

    Furthermore, we define $Res_{d,e}tup{P, Q}$ to be the element
    begin{equation}
    det tup{ Syl_{d,e}tup{P, Q} } in KK .
    end{equation}

    The matrix $Syl_{d,e}tup{P, Q}$ is called the Sylvester matrix of $P$ and $Q$ in degrees $d$ and $e$.
    Its determinant $Res_{d,e}tup{P, Q}$ is called the resultant of $P$ and $Q$ in degrees $d$ and $e$.



    It is common to apply this definition to the case when $d=deg P$ and $e=deg Q$; in this case, we simply call $Res_{d,e}tup{P, Q}$ the resultant of $P$ and $Q$, and denote it by $Res tup{P, Q}$.




    Here, we take $NN$ to mean the set $left{0,1,2,ldotsright}$ of all nonnegative integers.



    One of the main properties of resultants is the following:




    Theorem 2. Let $KK$ be a commutative ring.
    Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
    Let $dinNN$ and $einNN$ be such that $d+e > 0$ and $deg Pleq d$ and $deg Qleq e$.
    Let $LL$ be a commutative $KK$-algebra, and let $winLL$ satisfy $Ptup{w} =0$ and $Qtup{w} = 0$.
    Then, $Res_{d,e}tup{P, Q} =0$ in $LL$.




    Proof of Theorem 2 (sketched). Recall that $Res_{d,e}tup{P, Q} =det tup{ Syl_{d,e}tup{P, Q} }$ (by the definition of $Res_{d,e}tup{P, Q}$).



    Write the polynomials $P$ and $Q$ in the forms
    begin{align*}
    P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
    Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
    end{align*}

    where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to
    $KK$.
    (We can do this, since $deg P leq d$ and $deg Q leq e$.)
    From $p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d = P$,
    we obtain
    $p_0 + p_1 w + p_2 w^2 + cdots + p_d w^d = Pleft(wright) = 0$.
    Similarly,
    $q_0 + q_1 w + q_2 w^2 + cdots + q_e w^e = 0$.



    Let $A$ be the matrix $Syl_{d,e}tup{P, Q}
    inKK^{tup{d+e} timestup{d+e} }$
    , regarded as a
    matrix in $LL ^{tup{d+e} timestup{d+e} }$ (by
    applying the canonical $KK$-algebra homomorphism $KK
    rightarrowLL$
    to all its entries).



    Let $ww$ be the row vector $left( w^{0},w^{1},ldots,w^{d+e-1}
    right) inLL ^{1timestup{d+e} }$
    . Let $mathbf{0}$ denote
    the zero vector in $LL ^{1timestup{d+e} }$.



    Now, it is easy to see that $ww A=mathbf{0}$. (Indeed, for each
    $kinleft{ 1,2,ldots,d+eright} $, we have
    begin{align*}
    & wwleft( text{the }ktext{-th column of }Aright) \
    & =
    begin{cases}
    p_0 w^{k-1}+p_1 w^k +p_2 w^{k+1}+cdots+p_d w^{k-1+d}, & text{if }kleq e;\
    q_0 w^{k-e-1}+q_1 w^{k-e}+q_2 w^{k-e+1}+cdots+q_e w^{k-1}, & text{if }k>e
    end{cases}
    \
    & =
    begin{cases}
    w^{k-1}left( p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d right) , & text{if }kleq e;\
    w^{k-e-1}left( q_0 +q_1 w+q_2 w^2 +cdots+q_e w^eright) , & text{if }k>e
    end{cases}
    \
    & =
    begin{cases}
    w^{k-1}0, & text{if }kleq e;\
    w^{k-e-1}0, & text{if }k>e
    end{cases}
    \
    & qquadleft(
    begin{array}[c]{c}
    text{since }p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d =0\
    text{and }q_0 +q_1 w+q_2 w^2 +cdots+q_e w^e =0
    end{array}
    right) \
    & =0.
    end{align*}

    But this means precisely that $ww A=mathbf{0}$.)



    But $A$ is a square matrix over a commutative ring; thus, the
    adjugate
    $adj A$ of $A$ satisfies $Acdotadj A=det
    Acdot I_{d+e}$
    (where $I_{d+e}$ denotes the identity matrix of size $d+e$).
    Hence, $wwunderbrace{Acdotadj A}_{=det Acdot
    I_{d+e}}=wwdet Acdot I_{d+e}=det Acdotww$
    . Comparing this
    with $underbrace{ww A}_{=mathbf{0}}cdotadj A
    =mathbf{0}cdotadj A=mathbf{0}$
    , we obtain
    $det Acdotww=mathbf{0}$.



    But $d+e > 0$; thus, the row vector $ww$ has a well-defined first entry.
    This first entry is $w^0 = 1$.
    Hence, the first entry of the row vector $det Acdotww$ is $det A cdot 1 = det A$.
    Hence, from $det Acdotww=mathbf{0}$, we conclude that $det A=0$.
    Comparing this with
    begin{equation}
    detunderbrace{A}_{=Syl_{d,e}tup{P, Q}} =det tup{ Syl_{d,e}tup{P, Q} }
    =Res_{d,e}tup{P, Q} ,
    end{equation}

    we obtain $Res_{d,e}tup{P, Q} =0$ (in $LL$). This proves Theorem 2. $blacksquare$



    Theorem 2 (which I have proven in detail to stress how the proof uses nothing
    about $LL$ other than its commutativity) was just the meek tip of the
    resultant iceberg. Here are some further sources with deeper results:




    • Antoine Chambert-Loir, Résultants (minor errata).


    • Svante Janson, Resultant and discriminant of polynomials.


    • Gerald Myerson, On resultants, Proc. Amer. Math. Soc. 89 (1983), 419--420.



    Some of these sources use the matrix $left( Syl_{d,e}tup{P, Q} right) ^{T}$ instead of our
    $Syl_{d,e}tup{P, Q}$, but of course this
    matrix has the same determinant as $Syl_{d,e}tup{P, Q}$, so that their definition of a resultant is the same as mine.



    We are not yet ready to prove Theorem 1 directly. Instead, let us prove a
    weaker version of Theorem 1:




    Lemma 3. Let $KK$ be a commutative ring. Let $F$ and $G$ be two
    polynomials in the polynomial ring $KK ive{T}$. Let
    $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
    Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
    +g_e T^e $
    , where $g_0 ,g_1 ,ldots,g_e inKK$.
    Assume that $g_e^d neq 0$.
    Then, there exists a nonzero polynomial $PinKK ive{X, Y}$
    in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
    and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




    Proof of Lemma 3 (sketched). Let $widetilde{KK}$ be the
    commutative ring $KK ive{X, Y}$. Define two polynomials
    $tilF inwidetilde{KK}ive{T}$ and $tilG inwidetilde{KK}ive{T}$ by
    begin{equation}
    tilF =F-X=Ftup{T} -Xqquadtext{and}qquadtilG
    =G-Y=Gtup{T} -Y.
    end{equation}

    Note that $X$ and $Y$ have degree $0$ when considered as polynomials in
    $widetilde{KK}ive{T}$ (since $X$ and $Y$ belong to the
    ring $widetilde{KK}$). Thus, these new polynomials $tilF = F - X$
    and $tilG = G - Y$ have degrees $degtilF leq d$ (because
    $deg X = 0 leq d$ and $deg F leq d$) and
    $degtilG leq e$ (similarly).
    Hence, the resultant $Res_{d,e}tup{tilF, tilG} in
    widetilde{KK}$
    of these polynomials $tilF$ and
    $tilG$ in degrees $d$ and $e$ is well-defined. Let us denote this
    resultant $Res_{d,e}left( tilF
    ,tilG right)$
    by $P$. Hence,
    begin{equation}
    P=Res_{d,e}left( tilF ,tilG
    right) inwidetilde{KK}=KK ive{X, Y} .
    end{equation}



    Our next goal is to show that $P$ is a nonzero polynomial and satisfies
    $deg_X Pleq e$ and $deg_Y Pleq d$ and $Ptup{F, G} =0$. Once
    this is shown, Lemma 3 will obviously follow.



    We have
    begin{equation}
    P=Res_{d,e}left( tilF ,tilG
    right) =detleft( Syl_{d,e}left( tilF
    ,tilG right) right)
    label{darij1.pf.t1.P=det}
    tag{1}
    end{equation}

    (by the definition of $Res_{d,e}left(
    tilF ,tilG right)$
    ).



    Write the polynomial $F$ in the form $F=f_0 +f_1 T+f_2 T^2 +cdots
    +f_d T^d $
    , where $f_0 ,f_1 ,ldots,f_d inKK$. (This can be
    done, since $deg F leq d$.)



    Recall that $g_e ^d neq 0$. Thus, $left( -1right) ^e g_e^d neq 0$.



    For each $pinNN$, we let $S_{p}$ be the group of all permutations of
    the set $left{ 1,2,ldots,pright} $.



    Now,
    begin{align*}
    tilF & =F-X=left( f_0 +f_1 T+f_2 T^2 +cdots+f_d
    T^d right) -X\
    & qquadleft( text{since }F=f_0 +f_1 T+f_2 T^2 +cdots+f_d
    T^d right) \
    & =tup{f_0 - X} +f_1 T+f_2 T^2 +cdots+f_d T^d .
    end{align*}

    Thus, $f_0 -X,f_1 ,f_2 ,ldots,f_d $ are the coefficients of the
    polynomial $tilF inwidetilde{KK}ive{T}$ (since
    $f_0 -Xinwidetilde{KK}$). Similarly, $g_0 -Y,g_1 ,g_2
    ,ldots,g_e $
    are the coefficients of the polynomial $tilG
    inwidetilde{KK}ive{T}$
    . Hence, the definition of the
    matrix $Syl_{d,e}left( tilF ,tilG
    right)$
    yields
    begin{align}
    &Syl_{d,e}tup{tilF, tilG} \
    &=left(
    begin{array}[c]{c}
    begin{array}[c]{ccccccccc}
    f_0 -X & 0 & 0 & cdots & 0 & g_0 -Y & 0 & cdots & 0\
    f_1 & f_0 -X & 0 & cdots & 0 & g_1 & g_0 -Y & cdots & 0\
    vdots & f_1 & f_0-X & cdots & 0 & vdots & g_1 & ddots & vdots\
    vdots & vdots & f_1 & ddots & vdots & vdots & vdots & ddots &
    g_0 -Y\
    f_d & vdots & vdots & ddots & f_0 -X & vdots & vdots & ddots &
    g_1 \
    0 & f_d & vdots & ddots & f_1 & g_e & vdots & ddots & vdots\
    vdots & vdots & ddots & ddots & vdots & 0 & g_e & ddots & vdots\
    0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
    0 & 0 & 0 & cdots & f_d & 0 & 0 & cdots & g_e
    end{array}
    \
    underbrace{qquad qquad qquad qquad qquad qquad qquad qquad qquad qquad}
    _{etext{ columns}}
    underbrace{qquad qquad qquad qquad qquad qquad qquad}
    _{dtext{ columns}}
    end{array}
    right) \
    &inwidetilde{KK}^{tup{d+e} timesleft(
    d+eright) }.
    end{align}

    Now, let us use this explicit form of $Syl_{d,e} tup{tilF, tilG}$
    to compute $detleft( Syl_{d,e}tup{tilF, tilG} right)$ using
    the Leibniz formula.
    The Leibniz formula yields
    begin{equation}
    detleft( Syl_{d,e} tup{ tilF, tilG } right)
    =sum_{sigmain S_{d+e}}a_{sigma},
    label{darij1.pf.t1.det=sum}
    tag{2}
    end{equation}

    where for each permutation $sigmain S_{d+e}$, the addend $a_{sigma}$ is a
    product of entries of $Syl_{d,e} tup{tilF, tilG}$, possibly with a minus sign. More
    precisely,
    begin{equation}
    a_{sigma}=tup{-1}^{sigma}prod_{i=1}^{d+e}left( text{the }
    left( i,sigmaleft( iright) right) text{-th entry of }
    Syl_{d,e}tup{tilF, tilG}
    right)
    end{equation}

    for each $sigmain S_{d+e}$ (where $tup{-1}^{sigma}$ denotes the
    sign of the permutation $sigma$).



    Now, eqref{darij1.pf.t1.P=det} becomes
    begin{equation}
    P=detleft( Syl_{d,e}tup{tilF, tilG} right)
    =sum_{sigmain S_{d+e}}a_{sigma}
    label{darij1.pf.t1.P=sum}
    tag{3}
    end{equation}

    (by eqref{darij1.pf.t1.det=sum}).



    All entries of the matrix $Syl_{d,e}tup{tilF, tilG}$ are polynomials in the two
    indeterminates $X$ and $Y$; but only $d+e$ of these entries are non-constant
    polynomials (since all of $f_0 ,f_1 ,ldots,f_d ,g_0 ,g_1 ,ldots,g_e $
    belong to $KK$). More precisely, only $e$ entries of
    $Syl_{d,e}tup{tilF, tilG}$
    have non-zero degree with respect to the variable $X$ (namely, the first $e$
    entries of the diagonal of $Syl_{d,e}tup{tilF, tilG}$), and these $e$ entries have degree $1$
    with respect to this variable. Thus, for each $sigmain S_{d+e}$, the product
    $a_{sigma}$ contains at most $e$ many factors that have degree $1$ with
    respect to the variable $X$, while all its remaining factors have degree $0$
    with respect to this variable. Therefore, for each $sigmain S_{d+e}$, the
    product $a_{sigma}$ has degree $leq ecdot1=e$ with respect to the variable
    $X$. Hence, the sum $sum_{sigmain S_{d+e}}a_{sigma}$ of all these products
    $a_{sigma}$ also has degree $leq e$ with respect to the variable $X$. In
    other words, $deg_X left( sum_{sigmain S_{d+e}}a_{sigma}right) leq
    e$
    . In view of eqref{darij1.pf.t1.P=sum}, this rewrites as $deg_X Pleq e$.
    Similarly, $deg_Y Pleq d$ (since only $d$ entries of the matrix
    $Syl_{d,e}tup{tilF, tilG}
    $
    have non-zero degree with respect to the variable $Y$, and these $d$ entries
    have degree $1$ with respect to this variable).



    Next, we shall show that the polynomial $P$ is nonzero. Indeed, let us
    consider all elements of $widetilde{KK}$ as polynomials in the
    variable $X$ over the ring $KK ive{Y} $. For each
    permutation $sigmain S_{d+e}$, the product $a_{sigma}$ (thus considered)
    has degree $leq e$ (as we have previously shown). Let us now compute the
    coefficient of $X^e $ in this product $a_{sigma}$. There are three possible cases:




    • Case 1: The permutation $sigmain S_{d+e}$ does not satisfy $left(
      sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
      right)$
      . Thus, the product $a_{sigma}$ has strictly fewer than $e$
      factors that have degree $1$ with respect to the variable $X$, while all its
      remaining factors have degree $0$ with respect to this variable. Thus, the
      whole product $a_{sigma}$ has degree $<e$ with respect to the variable $X$.
      Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


    • Case 2: The permutation $sigmain S_{d+e}$ satisfies $left(
      sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
      right)$
      , but is not the identity map $idin S_{d+e}$. Thus,
      there must exist at least one $iinleft{ 1,2,ldots,d+eright} $ such
      that $sigmaleft( iright) <i$. Consider such an $i$, and notice that it
      must satisfy $i>e$ and $sigmaleft( iright) >e$; hence, the $left(
      i,sigmaleft( iright) right)$
      -th entry of
      $Syl_{d,e}tup{tilF, tilG}$ is $0$. Thus, the
      whole product $a_{sigma}$ is $0$ (since the latter entry is a factor in this
      product). Thus, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


    • Case 3: The permutation $sigmain S_{d+e}$ is the identity map
      $idin S_{d+e}$. Thus, the product $a_{sigma}$ is $left(
      f_0 -Xright) ^e g_e^d $
      (since $tup{-1}^{id
      }=1$
      ). Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is
      $tup{-1}^e g_e^d $.



    Summarizing, we thus conclude that the coefficient of $X^e $ in the product
    $a_{sigma}$ is $0$ unless $sigma=id$, in which case it is
    $tup{-1}^e g_e^d $. Hence, the coefficient of $X^e $ in the
    sum $sum_{sigmain S_{d+e}}a_{sigma}$ is $tup{-1}^e g_e^d
    neq0$
    . Therefore, $sum_{sigmain S_{d+e}}a_{sigma}neq0$. In view of
    eqref{darij1.pf.t1.P=sum}, this rewrites as $Pneq0$. In other words, the
    polynomial $P$ is nonzero.



    Finally, it remains to prove that $Ptup{F, G} =0$. In order to do
    this, we let $LL$ be the polynomial ring $KK ive{U}
    $
    in a new indeterminate $U$. We let $varphi:KK ive{X, Y}
    rightarrowLL$
    be the unique $KK$-algebra homomorphism that
    sends $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$.
    (This is well-defined by the universal property of the polynomial ring
    $KK ive{X, Y}$.)
    Note that $varphi$ is a $KK$-algebra homomorphism from
    $widetilde{KK}$ to $LL$ (since $KK ive{X, Y}
    =widetilde{KK}$
    ). Thus, $LL$ becomes a $widetilde{KK
    }$
    -algebra via this homomorphism $varphi$.



    Now, recall that the polynomial $tilF inwidetilde{KK}ive{T}$
    was defined by $tilF =F-X$. Hence, $tilF left(
    Uright) =Ftup{U} -varphitup{X}$
    . (Indeed, when we
    regard $X$ as an element of $widetilde{KK}ive{T}$, the
    polynomial $X$ is simply a constant, and thus evaluating it at $U$ yields the
    canonical image of $X$ in $LL$, which is $varphitup{X}$.)
    But $varphitup{X} =Ftup{U}$ (by the definition of
    $varphi$).
    Hence, $tilF tup{U} =Ftup{U} -varphitup{X} =0$ (since $varphitup{X} =Ftup{U}$). Similarly, $tilG tup{U} =0$.



    Thus, the element $UinLL$ satisfies $tilF tup{U}
    =0$
    and $tilG tup{U} =0$. Hence, Theorem 2 (applied to
    $widetilde{KK}$, $tilF$, $tilG$ and $U$ instead of $KK$, $P$, $Q$ and $w$) yields that
    $Res_{d,e}tup{tilF, tilG} = 0$ in $LL$.
    In other words, $varphitup{ Res_{d,e}tup{tilF, tilG} } =0$. In view of
    $Res_{d,e}tup{tilF, tilG} =P$, this rewrites as $varphitup{P} =0$.



    But recall that $varphi$ is the $KK$-algebra homomorphism that sends
    $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$. Hence, it
    sends any polynomial $QinKK ive{X, Y}$ to $Q tup{ Ftup{U}, Gtup{U} }$. Applying this to $Q=P$, we
    conclude that it sends $P$ to $P tup{ Ftup{U}, Gtup{U} }$.
    In other words, $varphitup{P} =P tup{ Ftup{U}, Gtup{U} }$;
    hence, $P tup{ Ftup{U}, Gtup{U} } =varphitup{P} =0$.



    Now, $Ftup{U}$ and $Gtup{U}$ are polynomials in the
    indeterminate $U$ over $KK$. If we rename the indeterminate $U$ as
    $T$, then these polynomials $Ftup{U}$ and $Gtup{U}$
    become $Ftup{T}$ and $Gtup{T}$, and therefore the
    polynomial $Pleft( Ftup{U} ,Gtup{U} right)$ becomes
    $Ptup{ Ftup{T}, Gtup{T} }$.
    Hence, $Ptup{ Ftup{T}, Gtup{T} } =0$ (since $Ptup{ Ftup{U}, Gtup{U} } =0$).
    In other words, $P tup{F, G} =0$ (since $Ftup{T} =F$ and $Gtup{T} =G$).
    This completes the proof of Lemma 3. $blacksquare$




    Lemma 4. (a) Theorem 1 holds when $d = 0$.



    (b) Theorem 1 holds when $e = 0$.




    Proof of Lemma 4. (a) Assume that $d = 0$.
    Thus, $d + e > 0$ rewrites as $e > 0$. Hence, $e geq 1$.
    But the polynomial $F$ is constant (since $deg F leq d = 0$).
    In other words, $F = f$ for some $f in KK$. Consider this $f$.
    Now, let $Q$ be the polynomial $X - f in KKive{X, Y}$.
    Then, $Q$ is nonzero and satisfies
    $deg_X Q = 1 leq e$ (since $e geq 1$) and
    $deg_Y Q = 0 leq d$ and
    $Qleft(F, Gright) = F - f = 0$ (since $F = f$).
    Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
    in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
    and $deg_Y Pleq d$ and $Ptup{F, G} =0$
    (namely, $P = Q$). In other words, Theorem 1 holds (under
    our assumption that $d = 0$). This proves Lemma 4 (a).



    (b) The proof of Lemma 4 (b) is analogous to
    our above proof of Lemma 4 (a). $blacksquare$



    Now, we can prove Theorem 1 at last:



    Proof of Theorem 1. We shall prove Theorem 1 by induction on $e$.



    The induction base is the case when $e = 0$; this case follows
    from Lemma 4 (b).



    For the induction step, we fix a positive integer $eps$.
    Assume (as the induction hypothesis) that Theorem 1 holds for $e = eps - 1$.
    We must now prove that Theorem 1 holds for $e = eps$.



    Let $KK$ be a commutative ring. Let $F$ and $G$ be two
    polynomials in the polynomial ring $KK ive{T}$. Let
    $d$ be a nonnegative integer such that $d+eps > 0$ and $deg F leq d$ and $deg G leq eps$.
    Our goal is now to prove that the claim of Theorem 1 holds for $e = eps$.
    In other words, our goal is to prove that there exists a nonzero polynomial $PinKK ive{X, Y}$
    in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
    and $deg_Y Pleq d$ and $Ptup{F, G} =0$.



    Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
    +g_{eps}T^{eps}$
    , where $g_0 ,g_1 ,ldots,g_{eps}inKK$.
    (This can be done, since $deg G leq eps$.)
    If $g_{eps}^d neq 0$, then our goal follows immediately
    by applying Lemma 3 to $e = eps$.
    Thus, for the rest of this induction step, we WLOG assume that $g_{eps}^d = 0$.
    Hence, there exists a positive integer $m$ such that $g_{eps}^m = 0$ (namely, $m = eps$).
    Thus, there exists a smallest such $m$.
    Consider this smallest $m$.
    Then, $g_{eps}^m = 0$, but
    begin{align}
    text{every positive integer $ell < m$ satisfies $g_{eps}^{ell} neq 0$.}
    label{darij1.pf.t1.epsilon-ell}
    tag{4}
    end{align}



    We claim that $g_{eps}^{m-1} neq 0$. Indeed, if $m-1$ is a
    positive integer, then this follows from eqref{darij1.pf.t1.epsilon-ell} (applied to $ell = m-1$);
    otherwise, it follows from the fact that $g_{eps}^0 = 1 neq 0$
    (since the ring $KK$ is nontrivial).



    Now recall again that our goal is to prove that the claim of Theorem 1 holds for $e = eps$.
    If $d = 0$, then this goal follows from Lemma 4 (a).
    Hence, for the rest of this induction step, we WLOG assume that $d neq 0$.
    Hence, $d > 0$ (since $d$ is a nonnegative integer).



    We have $e geq 1$ (since $e$ is a positive integer), thus
    $e - 1 geq 0$. Hence, $d + left(e-1right) geq d > 0$.



    Let $I$ be the subset $left{x in KK mid g_{eps}^{m-1} x = 0 right}$ of $KK$.
    Then, $I$ is an ideal of $KK$ (namely, it is the
    annihilator of the subset
    $left{g_{eps}^{m-1}right}$ of $KK$);
    thus, $KK / I$ is a commutative $KK$-algebra.
    Denote this commutative $KK$-algebra $KK / I$ by $LL$.
    Let $pi$ be the canonical projection $KK to LL$.
    Of course, $pi$ is a surjective $KK$-algebra homomorphism.



    For any $a in KK$, we will denote the image of $a$ under $pi$ by $overline{a}$.



    The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{T} to LLive{T}$ (sending $T$ to $T$).
    For any $a in KKive{T}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



    The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$ (sending $X$ and $Y$ to $X$ and $Y$).
    For any $a in KKive{X, Y}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



    We have $g_{eps}^{m-1} g_{eps} = g_{eps}^m = 0$, so that $g_{eps} in I$ (by the definition of $I$);
    hence, the residue class $overline{g_{eps}}$ of $g_{eps}$ modulo the ideal $I$ is $0$.



    We have $g_{eps}^{m-1} cdot 1 = g_{eps}^{m-1} neq 0$ in $KK$,
    and thus $1 notin I$ (by the definition of $I$).
    Hence, the ideal $I$ is not the whole ring $KK$.
    Thus, the quotient ring $KK / I = LL$ is nontrivial.



    But $G=g_0 +g_1 T+g_2 T^2 +cdots +g_{eps}T^{eps}$
    and thus
    begin{align}
    overline{G}
    &= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps}} T^{eps} \
    &= left( overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} right)
    + underbrace{overline{g_{eps}}}_{= 0} T^{eps} \
    &= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} ,
    end{align}

    so that $deg overline{G} leq eps-1$.
    Also, $deg overline{F} leq deg F leq d$.
    But the induction hypothesis tells us that Theorem 1 holds for $e = eps - 1$.
    Hence, we can apply Theorem 1 to $LL$, $overline{F}$, $overline{G}$ and $eps - 1$
    instead of $KK$, $F$, $G$ and $e$.
    We thus conclude that there exists a nonzero polynomial $Pin LLive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X P leq eps - 1$ and $deg_Y P leq d$ and $Pleft( overline{F}, overline{G} right) =0$.
    Consider this polynomial $P$, and denote it by $R$.
    Thus, $R in LL ive{X, Y}$ is a nonzero polynomial in two indeterminates $X$ and $Y$ and satisfies $deg_X R leq eps - 1$ and $deg_Y R leq d$ and $R left( overline{F}, overline{G} right) =0$.



    Clearly, there exists a polynomial $Q in KKive{X, Y}$ in two
    indeterminates $X$ and $Y$ that satisfies $deg_X Q = deg_X R$ and
    $deg_Y Q = deg_Y R$ and $overline{Q} = R$.
    (Indeed, we can construct such a $Q$ as follows: Write
    $R$ in the form
    $R = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} r_{i, j} X^i Y^j$
    for some coefficients $r_{i, j} in LL$.
    For each pair $left(i, jright)$, pick some
    $p_{i, j} in KK$ such that $overline{p_{i, j}} = r_{i, j}$
    (this can be done, since the homomorphism $pi : KK to LL$ is surjective).
    Then, set $Q = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} p_{i, j} X^i Y^j$.
    It is clear that this polynomial $Q$ satisfies $deg_X Q = deg_X R$ and
    $deg_Y Q = deg_Y R$ and $overline{Q} = R$.)



    We have $overline{Q left(F, Gright)}
    = underbrace{overline{Q}}_{=R} left( overline{F}, overline{G} right)
    = R left( overline{F}, overline{G} right) = 0$.
    In other words, the polynomial $Q left(F, Gright) in KKive{T}$
    lies in the kernel of the canonical
    $KK$-algebra homomorphism $KKive{T} to LLive{T}$.
    This means that each coefficient of this
    polynomial $Q left(F, Gright) in KKive{T}$
    lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
    In other words, each coefficient of this
    polynomial $Q left(F, Gright) in KKive{T}$ lies in $I$
    (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
    is $I$).
    Hence, each coefficient $c$ of this
    polynomial $Q left(F, Gright) in KKive{T}$
    satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
    Therefore, $g_{eps}^{m-1} Q left(F, Gright) = 0$.



    On the other hand, $overline{Q} = R$ is nonzero.
    In other words, the polynomial $Q in KKive{X, Y}$ does not lie
    in the kernel of the canonical
    $KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$.
    This means that not every coefficient of this
    polynomial $Q in KKive{X, Y}$
    lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
    In other words, not every coefficient of this
    polynomial $Q in KKive{X, Y}$ lies in $I$
    (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
    is $I$).
    Hence, not every coefficient $c$ of this
    polynomial $Q in KKive{X, Y}$
    satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
    Therefore, $g_{eps}^{m-1} Q neq 0$.
    So $g_{eps}^{m-1} Q in KKive{X, Y}$ is a nonzero polynomial
    in two indeterminates $X$ and $Y$ and satisfies
    $deg_X left( g_{eps}^{m-1} Q right) leq deg_X Q = deg_X R leq eps - 1 leq eps$
    and
    $deg_Y left( g_{eps}^{m-1} Q right) leq deg_Y Q = deg_Y R leq d$
    and $left(g_{eps}^{m-1} Q right) left(F, Gright) = g_{eps}^{m-1} Q left(F, Gright) = 0$.
    Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
    in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
    and $deg_Y Pleq d$ and $Ptup{F, G} =0$
    (namely, $P = g_{eps}^{m-1} Q$).
    We have thus reached our goal.



    So we have proven that Theorem 1 holds for $e = eps$.
    This completes the induction step. Thus, Theorem 1 is proven by induction. $blacksquare$
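    To make the induction step above concrete, here is a small sanity check (an illustration with data of my own choosing, not part of the proof): over $KK = mathbb{Z}/4$, take $F = T^2$ and $G = 2T$, so that $d = 2$, $eps = 1$, and $g_1 = 2$ is nilpotent with $g_1^d = 4 = 0$ and minimal nilpotency index $m = 2$. Here $I$ is the annihilator of $2$, so $LL = KK / I cong mathbb{Z}/2$ and $overline{G} = 0$; the induction hypothesis (via the $e = 0$ base case) supplies $R = Y$, a lift is $Q = Y$, and the resulting witness is $P = g_1^{m-1} Q = 2Y$, which indeed satisfies $deg_X P leq eps$ and $deg_Y P leq d$ and $Ptup{F, G} = 2G = 4T = 0$ in $KK ive{T}$. A short Python verification:

```python
MOD = 4  # the illustrative choice KK = Z/4

def poly_mul(a, b):
    """Multiply polynomials in T over Z/MOD (coefficient lists, constant term first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % MOD
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % MOD for x, y in zip(a, b)]

def poly_pow(a, k):
    out = [1]
    for _ in range(k):
        out = poly_mul(out, a)
    return out

def evaluate(P, F, G):
    """Evaluate a bivariate polynomial P (a dict {(i, j): coeff}) at X = F, Y = G."""
    total = [0]
    for (i, j), c in P.items():
        total = poly_add(total, poly_mul([c], poly_mul(poly_pow(F, i), poly_pow(G, j))))
    return total

F = [0, 0, 1]    # F = T^2, so d = 2
G = [0, 2]       # G = 2T, so eps = 1; the leading coefficient 2 is nilpotent mod 4
P = {(0, 1): 2}  # P = 2Y, the witness produced by the induction step
print(evaluate(P, F, G))  # [0, 0], i.e. P(F, G) = 0 in (Z/4)[T]
```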






      Here is a proof using resultants, taken mostly from https://mathoverflow.net/questions/189181//189344#189344 . (For a short summary, see one of my comments to the OP.)



      $newcommand{KK}{mathbb{K}}
      newcommand{LL}{mathbb{L}}
      newcommand{NN}{mathbb{N}}
      newcommand{ww}{mathbf{w}}
      newcommand{eps}{varepsilon}
      newcommand{Res}{operatorname{Res}}
      newcommand{Syl}{operatorname{Syl}}
      newcommand{adj}{operatorname{adj}}
      newcommand{id}{operatorname{id}}
      newcommand{tilF}{widetilde{F}}
      newcommand{tilG}{widetilde{G}}
      newcommand{ive}[1]{left[ #1 right]}
      newcommand{tup}[1]{left( #1 right)}
      newcommand{zeroes}[1]{underbrace{0,0,ldots,0}_{#1 text{ zeroes}}}$

      We shall prove a more general statement:




      Theorem 1. Let $KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
      polynomials in the polynomial ring $KK ive{T}$. Let
      $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
      Then, there exists a nonzero polynomial $PinKK ive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
      and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




      Here and in the following, we are using the following notations:




      • "Ring" always means "associative ring with unity".


      • A ring $R$ is said to be nontrivial if $0 neq 1$ in $R$.


      • If $R$ is any polynomial in the polynomial ring $KK ive{X, Y}$, then $deg_X R$ denotes the degree of $R$ with respect to the variable $X$ (that is, it denotes the degree of $R$ when $R$ is considered as a polynomial in $tup{KK ive{Y}} ive{X} $), whereas $deg_Y R$ denotes the degree of the polynomial $R$ with respect to the variable $Y$.



      To prove Theorem 1, we recall the notion of the resultant of two polynomials over a
      commutative ring:




      Definition. Let $KK$ be a commutative ring.
      Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
      Let $dinNN$ and $einNN$ be such that $deg Pleq d$ and $deg Qleq e$.
      Thus, write the polynomials $P$ and $Q$ in the forms
      begin{align*}
      P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
      Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
      end{align*}

      where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to $KK$.
      Then, we let $Syl_{d,e} tup{P, Q}$ be the matrix
      begin{equation}
      left(
      begin{array}[c]{c}
      begin{array}[c]{ccccccccc}
      p_0 & 0 & 0 & cdots & 0 & q_0 & 0 & cdots & 0\
      p_1 & p_0 & 0 & cdots & 0 & q_1 & q_0 & cdots & 0\
      vdots & p_1 & p_0 & cdots & 0 & vdots & q_1 & ddots & vdots\
      vdots & vdots & p_1 & ddots & vdots & vdots & vdots & ddots &
      q_0 \
      p_d & vdots & vdots & ddots & p_0 & vdots & vdots & ddots & q_1 \
      0 & p_d & vdots & ddots & p_1 & q_e & vdots & ddots & vdots\
      vdots & vdots & ddots & ddots & vdots & 0 & q_e & ddots & vdots\
      0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
      0 & 0 & 0 & cdots & p_d & 0 & 0 & cdots & q_e
      end{array}
      \
      underbrace{ }_{etext{ columns}}
      underbrace{ }_{dtext{ columns}}
      end{array}
      right) inKK^{tup{d+e} timestup{d+e}};
      end{equation}

      this is the $tup{d+e} timestup{d+e}$-matrix whose first $e$ columns have the form
      begin{equation}
      left( zeroes{k},p_0 ,p_1 ,ldots ,p_d ,zeroes{e-1-k}right) ^{T}
      qquadtext{for }kinleft{ 0,1,ldots,e-1right} ,
      end{equation}

      and whose last $d$ columns have the form
      begin{equation}
      left( zeroes{ell},q_0 ,q_1 ,ldots,q_e ,zeroes{d-1-ell}right) ^{T}
      qquadtext{for }ellinleft{ 0,1,ldots,d-1right} .
      end{equation}

      Furthermore, we define $Res_{d,e}tup{P, Q}$ to be the element
      begin{equation}
      det tup{ Syl_{d,e}tup{P, Q} } in KK .
      end{equation}

      The matrix $Syl_{d,e}tup{P, Q}$ is called the Sylvester matrix of $P$ and $Q$ in degrees $d$ and $e$.
      Its determinant $Res_{d,e}tup{P, Q}$ is called the resultant of $P$ and $Q$ in degrees $d$ and $e$.



      It is common to apply this definition to the case when $d=deg P$ and $e=deg Q$; in this case, we simply call $Res_{d,e}tup{P, Q}$ the resultant of $P$ and $Q$, and denote it by $Res tup{P, Q}$.




      Here, we take $NN$ to mean the set $left{0,1,2,ldotsright}$ of all nonnegative integers.



      One of the main properties of resultants is the following:




      Theorem 2. Let $KK$ be a commutative ring.
      Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
      Let $dinNN$ and $einNN$ be such that $d+e > 0$ and $deg Pleq d$ and $deg Qleq e$.
      Let $LL$ be a commutative $KK$-algebra, and let $winLL$ satisfy $Ptup{w} =0$ and $Qtup{w} = 0$.
      Then, $Res_{d,e}tup{P, Q} =0$ in $LL$.




      Proof of Theorem 2 (sketched). Recall that $Res_{d,e}tup{P, Q} =det tup{ Syl_{d,e}tup{P, Q} }$ (by the definition of $Res_{d,e}tup{P, Q}$).



      Write the polynomials $P$ and $Q$ in the forms
      begin{align*}
      P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
      Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
      end{align*}

      where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to
      $KK$.
      (We can do this, since $deg P leq d$ and $deg Q leq e$.)
      From $p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d = P$,
      we obtain
      $p_0 + p_1 w + p_2 w^2 + cdots + p_d w^d = Pleft(wright) = 0$.
      Similarly,
      $q_0 + q_1 w + q_2 w^2 + cdots + q_e w^e = 0$.



      Let $A$ be the matrix $Syl_{d,e}tup{P, Q}
      inKK^{tup{d+e} timestup{d+e} }$
      , regarded as a
      matrix in $LL ^{tup{d+e} timestup{d+e} }$ (by
      applying the canonical $KK$-algebra homomorphism $KK
      rightarrowLL$
      to all its entries).



      Let $ww$ be the row vector $left( w^{0},w^{1},ldots,w^{d+e-1}
      right) inLL ^{1timestup{d+e} }$
      . Let $mathbf{0}$ denote
      the zero vector in $LL ^{1timestup{d+e} }$.



      Now, it is easy to see that $ww A=mathbf{0}$. (Indeed, for each
      $kinleft{ 1,2,ldots,d+eright} $, we have
      begin{align*}
      & wwleft( text{the }ktext{-th column of }Aright) \
      & =
      begin{cases}
      p_0 w^{k-1}+p_1 w^k +p_2 w^{k+1}+cdots+p_d w^{k-1+d}, & text{if }kleq e;\
      q_0 w^{k-e-1}+q_1 w^{k-e}+q_2 w^{k-e+1}+cdots+q_e w^{k-1}, & text{if }k>e
      end{cases}
      \
      & =
      begin{cases}
      w^{k-1}left( p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d right) , & text{if }kleq e;\
      w^{k-e-1}left( q_0 +q_1 w+q_2 w^2 +cdots+q_e w^eright) , & text{if }k>e
      end{cases}
      \
      & =
      begin{cases}
      w^{k-1}0, & text{if }kleq e;\
      w^{k-e-1}0, & text{if }k>e
      end{cases}
      \
      & qquadleft(
      begin{array}[c]{c}
      text{since }p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d =0\
      text{and }q_0 +q_1 w+q_2 w^2 +cdots+q_e w^e =0
      end{array}
      right) \
      & =0.
      end{align*}

      But this means precisely that $ww A=mathbf{0}$.)



      But $A$ is a square matrix over a commutative ring; thus, the
      adjugate
      $adj A$ of $A$ satisfies $Acdotadj A=det
      Acdot I_{d+e}$
      (where $I_{d+e}$ denotes the identity matrix of size $d+e$).
      Hence, $wwunderbrace{Acdotadj A}_{=det Acdot
      I_{d+e}}=wwdet Acdot I_{d+e}=det Acdotww$
      . Comparing this
      with $underbrace{ww A}_{=mathbf{0}}cdotadj A
      =mathbf{0}cdotadj A=mathbf{0}$
      , we obtain
      $det Acdotww=mathbf{0}$.



      But $d+e > 0$; thus, the row vector $ww$ has a well-defined first entry.
      This first entry is $w^0 = 1$.
      Hence, the first entry of the row vector $det Acdotww$ is $det A cdot 1 = det A$.
      Hence, from $det Acdotww=mathbf{0}$, we conclude that $det A=0$.
      Comparing this with
      begin{equation}
      detunderbrace{A}_{=Syl_{d,e}tup{P, Q}} =det tup{ Syl_{d,e}tup{P, Q} }
      =Res_{d,e}tup{P, Q} ,
      end{equation}

      we obtain $Res_{d,e}tup{P, Q} =0$ (in $LL$). This proves Theorem 2. $blacksquare$



      Theorem 2 (which I have proven in detail to stress how the proof uses nothing
      about $LL$ other than its commutativity) was just the meek tip of the
      resultant iceberg. Here are some further sources with deeper results:




      • Antoine Chambert-Loir, Résultants (minor errata).


      • Svante Janson, Resultant and discriminant of polynomials.


      • Gerald Myerson, On resultants, Proc. Amer. Math. Soc. 89 (1983), 419--420.



      Some of these sources use the matrix $left( Syl_{d,e}tup{P, Q} right) ^{T}$ instead of our
      $Syl_{d,e}tup{P, Q}$, but of course this
      matrix has the same determinant as $Syl_{d,e}tup{P, Q}$, so that their definition of a resultant is the same as mine.



      We are not yet ready to prove Theorem 1 directly. Instead, let us prove a
      weaker version of Theorem 1:




      Lemma 3. Let $KK$ be a commutative ring. Let $F$ and $G$ be two
      polynomials in the polynomial ring $KK ive{T}$. Let
      $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
      Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
      +g_e T^e $
      , where $g_0 ,g_1 ,ldots,g_e inKK$.
      Assume that $g_e^d neq 0$.
      Then, there exists a nonzero polynomial $PinKK ive{X, Y}$
      in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
      and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




      Proof of Lemma 3 (sketched). Let $widetilde{KK}$ be the
      commutative ring $KK ive{X, Y}$. Define two polynomials
      $tilF inwidetilde{KK}ive{T}$ and $tilG inwidetilde{KK}ive{T}$ by
      begin{equation}
      tilF =F-X=Ftup{T} -Xqquadtext{and}qquadtilG
      =G-Y=Gtup{T} -Y.
      end{equation}

      Note that $X$ and $Y$ have degree $0$ when considered as polynomials in
      $widetilde{KK}ive{T}$ (since $X$ and $Y$ belong to the
      ring $widetilde{KK}$). Thus, these new polynomials $tilF = F - X$
      and $tilG = G - Y$ have degrees $degtilF leq d$ (because
      $deg X = 0 leq d$ and $deg F leq d$) and
      $degtilG leq e$ (similarly).
      Hence, the resultant $Res_{d,e}tup{tilF, tilG} in
      widetilde{KK}$
      of these polynomials $tilF$ and
      $tilG$ in degrees $d$ and $e$ is well-defined. Let us denote this
      resultant $Res_{d,e}left( tilF
      ,tilG right)$
      by $P$. Hence,
      begin{equation}
      P=Res_{d,e}left( tilF ,tilG
      right) inwidetilde{KK}=KK ive{X, Y} .
      end{equation}



      Our next goal is to show that $P$ is a nonzero polynomial and satisfies
      $deg_X Pleq e$ and $deg_Y Pleq d$ and $Ptup{F, G} =0$. Once
      this is shown, Lemma 3 will obviously follow.



      We have
      begin{equation}
      P=Res_{d,e}left( tilF ,tilG
      right) =detleft( Syl_{d,e}left( tilF
      ,tilG right) right)
      label{darij1.pf.t1.P=det}
      tag{1}
      end{equation}

      (by the definition of $Res_{d,e}left(
      tilF ,tilG right)$
      ).



      Write the polynomial $F$ in the form $F=f_0 +f_1 T+f_2 T^2 +cdots
      +f_d T^d $
      , where $f_0 ,f_1 ,ldots,f_d inKK$. (This can be
      done, since $deg F leq d$.)



      Recall that $g_e ^d neq 0$. Thus, $left( -1right) ^e g_e^d neq 0$.



      For each $pinNN$, we let $S_{p}$ be the group of all permutations of
      the set $left{ 1,2,ldots,pright} $.



      Now,
      begin{align*}
      tilF & =F-X=left( f_0 +f_1 T+f_2 T^2 +cdots+f_d
      T^d right) -X\
      & qquadleft( text{since }F=f_0 +f_1 T+f_2 T^2 +cdots+f_d
      T^d right) \
      & =tup{f_0 - X} +f_1 T+f_2 T^2 +cdots+f_d T^d .
      end{align*}

      Thus, $f_0 -X,f_1 ,f_2 ,ldots,f_d $ are the coefficients of the
      polynomial $tilF inwidetilde{KK}ive{T}$ (since
      $f_0 -Xinwidetilde{KK}$). Similarly, $g_0 -Y,g_1 ,g_2
      ,ldots,g_e $
      are the coefficients of the polynomial $tilG
      inwidetilde{KK}ive{T}$
      . Hence, the definition of the
      matrix $Syl_{d,e}left( tilF ,tilG
      right)$
      yields
      begin{align}
      &Syl_{d,e}tup{tilF, tilG} \
      &=left(
      begin{array}[c]{c}
      begin{array}[c]{ccccccccc}
      f_0 -X & 0 & 0 & cdots & 0 & g_0 -Y & 0 & cdots & 0\
      f_1 & f_0 -X & 0 & cdots & 0 & g_1 & g_0 -Y & cdots & 0\
      vdots & f_1 & f_0-X & cdots & 0 & vdots & g_1 & ddots & vdots\
      vdots & vdots & f_1 & ddots & vdots & vdots & vdots & ddots &
      g_0 -Y\
      f_d & vdots & vdots & ddots & f_0 -X & vdots & vdots & ddots &
      g_1 \
      0 & f_d & vdots & ddots & f_1 & g_e & vdots & ddots & vdots\
      vdots & vdots & ddots & ddots & vdots & 0 & g_e & ddots & vdots\
      0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
      0 & 0 & 0 & cdots & f_d & 0 & 0 & cdots & g_e
      end{array}
      \
      underbrace{qquad qquad qquad qquad qquad qquad qquad qquad qquad qquad}
      _{etext{ columns}}
      underbrace{qquad qquad qquad qquad qquad qquad qquad}
      _{dtext{ columns}}
      end{array}
      right) \
      &inwidetilde{KK}^{tup{d+e} timesleft(
      d+eright) }.
      end{align}

      Now, let us use this explicit form of $Syl_{d,e} tup{tilF, tilG}$
      to compute $detleft( Syl_{d,e}tup{tilF, tilG} right)$ using
      the Leibniz formula.
      The Leibniz formula yields
      begin{equation}
      detleft( Syl_{d,e} tup{ tilF, tilG } right)
      =sum_{sigmain S_{d+e}}a_{sigma},
      label{darij1.pf.t1.det=sum}
      tag{2}
      end{equation}

      where for each permutation $sigmain S_{d+e}$, the addend $a_{sigma}$ is a
      product of entries of $Syl_{d,e} tup{tilF, tilG}$, possibly with a minus sign. More
      precisely,
      begin{equation}
      a_{sigma}=tup{-1}^{sigma}prod_{i=1}^{d+e}left( text{the }
      left( i,sigmaleft( iright) right) text{-th entry of }
      Syl_{d,e}tup{tilF, tilG}
      right)
      end{equation}

      for each $sigmain S_{d+e}$ (where $tup{-1}^{sigma}$ denotes the
      sign of the permutation $sigma$).



      Now, eqref{darij1.pf.t1.P=det} becomes
      begin{equation}
      P=detleft( Syl_{d,e}tup{tilF, tilG} right)
      =sum_{sigmain S_{d+e}}a_{sigma}
      label{darij1.pf.t1.P=sum}
      tag{3}
      end{equation}

      (by eqref{darij1.pf.t1.det=sum}).



      All entries of the matrix $Syl_{d,e}tup{tilF, tilG}$ are polynomials in the two
      indeterminates $X$ and $Y$; but only $d+e$ of these entries are non-constant
      polynomials (since all of $f_0 ,f_1 ,ldots,f_d ,g_0 ,g_1 ,ldots,g_e $
      belong to $KK$). More precisely, only $e$ entries of
      $Syl_{d,e}tup{tilF, tilG}$
      have non-zero degree with respect to the variable $X$ (namely, the first $e$
      entries of the diagonal of $Syl_{d,e}tup{tilF, tilG}$), and these $e$ entries have degree $1$
      with respect to this variable. Thus, for each $sigmain S_{d+e}$, the product
      $a_{sigma}$ contains at most $e$ many factors that have degree $1$ with
      respect to the variable $X$, while all its remaining factors have degree $0$
      with respect to this variable. Therefore, for each $sigmain S_{d+e}$, the
      product $a_{sigma}$ has degree $leq ecdot1=e$ with respect to the variable
      $X$. Hence, the sum $sum_{sigmain S_{d+e}}a_{sigma}$ of all these products
      $a_{sigma}$ also has degree $leq e$ with respect to the variable $X$. In
      other words, $deg_X left( sum_{sigmain S_{d+e}}a_{sigma}right) leq
      e$
      . In view of eqref{darij1.pf.t1.P=sum}, this rewrites as $deg_X Pleq e$.
      Similarly, $deg_Y Pleq d$ (since only $d$ entries of the matrix
      $Syl_{d,e}tup{tilF, tilG}
      $
      have non-zero degree with respect to the variable $Y$, and these $d$ entries
      have degree $1$ with respect to this variable).



      Next, we shall show that the polynomial $P$ is nonzero. Indeed, let us
      consider all elements of $widetilde{KK}$ as polynomials in the
      variable $X$ over the ring $KK ive{Y} $. For each
      permutation $sigmain S_{d+e}$, the product $a_{sigma}$ (thus considered)
      has degree $leq e$ (as we have previously shown). Let us now compute the
      coefficient of $X^e $ in this product $a_{sigma}$. There are three possible cases:




      • Case 1: The permutation $sigmain S_{d+e}$ does not satisfy $left(
        sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
        right)$
        . Thus, the product $a_{sigma}$ has strictly fewer than $e$
        factors that have degree $1$ with respect to the variable $X$, while all its
        remaining factors have degree $0$ with respect to this variable. Thus, the
        whole product $a_{sigma}$ has degree $<e$ with respect to the variable $X$.
        Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


      • Case 2: The permutation $sigmain S_{d+e}$ satisfies $left(
        sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
        right)$
        , but is not the identity map $idin S_{d+e}$. Thus,
        there must exist at least one $iinleft{ 1,2,ldots,d+eright} $ such
        that $sigmaleft( iright) <i$. Consider such an $i$, and notice that it
        must satisfy $i>e$ and $sigmaleft( iright) >e$; hence, the $left(
        i,sigmaleft( iright) right)$
        -th entry of
        $Syl_{d,e}tup{tilF, tilG}$ is $0$. Thus, the
        whole product $a_{sigma}$ is $0$ (since the latter entry is a factor in this
        product). Thus, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


      • Case 3: The permutation $sigmain S_{d+e}$ is the identity map
        $idin S_{d+e}$. Thus, the product $a_{sigma}$ is $left(
        f_0 -Xright) ^e g_e^d $
        (since $tup{-1}^{id
        }=1$
        ). Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is
        $tup{-1}^e g_e^d $.



      Summarizing, we thus conclude that the coefficient of $X^e $ in the product
      $a_{sigma}$ is $0$ unless $sigma=id$, in which case it is
      $tup{-1}^e g_e^d $. Hence, the coefficient of $X^e $ in the
      sum $sum_{sigmain S_{d+e}}a_{sigma}$ is $tup{-1}^e g_e^d
      neq0$
      . Therefore, $sum_{sigmain S_{d+e}}a_{sigma}neq0$. In view of
      eqref{darij1.pf.t1.P=sum}, this rewrites as $Pneq0$. In other words, the
      polynomial $P$ is nonzero.



      Finally, it remains to prove that $Ptup{F, G} =0$. In order to do
      this, we let $LL$ be the polynomial ring $KK ive{U}
      $
      in a new indeterminate $U$. We let $varphi:KK ive{X, Y}
      rightarrowLL$
      be the unique $KK$-algebra homomorphism that
      sends $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$.
      (This is well-defined by the universal property of the polynomial ring
      $KK ive{X, Y}$.)
      Note that $varphi$ is a $KK$-algebra homomorphism from
      $widetilde{KK}$ to $LL$ (since $KK ive{X, Y}
      =widetilde{KK}$
      ). Thus, $LL$ becomes a $widetilde{KK
      }$
      -algebra via this homomorphism $varphi$.



      Now, recall that the polynomial $tilF inwidetilde{KK}ive{T}$
      was defined by $tilF =F-X$. Hence, $tilF left(
      Uright) =Ftup{U} -varphitup{X}$
      . (Indeed, when we
      regard $X$ as an element of $widetilde{KK}ive{T}$, the
      polynomial $X$ is simply a constant, and thus evaluating it at $U$ yields the
      canonical image of $X$ in $LL$, which is $varphitup{X}$.)
      But $varphitup{X} =Ftup{U}$ (by the definition of
      $varphi$).
      Hence, $tilF tup{U} =Ftup{U} -varphitup{X} =0$ (since $varphitup{X} =Ftup{U}$). Similarly, $tilG tup{U} =0$.



      Thus, the element $UinLL$ satisfies $tilF tup{U}
      =0$
      and $tilG tup{U} =0$. Hence, Theorem 2 (applied to
      $widetilde{KK}$, $tilF$, $tilG$ and $U$ instead of $KK$, $P$, $Q$ and $w$) yields that
      $Res_{d,e}tup{tilF, tilG} = 0$ in $LL$.
      In other words, $varphitup{ Res_{d,e}tup{tilF, tilG} } =0$. In view of
      $Res_{d,e}tup{tilF, tilG} =P$, this rewrites as $varphitup{P} =0$.



      But recall that $varphi$ is the $KK$-algebra homomorphism that sends
      $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$. Hence, it
      sends any polynomial $QinKK ive{X, Y}$ to $Q tup{ Ftup{U}, Gtup{U} }$. Applying this to $Q=P$, we
      conclude that it sends $P$ to $P tup{ Ftup{U}, Gtup{U} }$.
      In other words, $varphitup{P} =P tup{ Ftup{U}, Gtup{U} }$;
      hence, $P tup{ Ftup{U}, Gtup{U} } =varphitup{P} =0$.



      Now, $Ftup{U}$ and $Gtup{U}$ are polynomials in the
      indeterminate $U$ over $KK$. If we rename the indeterminate $U$ as
      $T$, then these polynomials $Ftup{U}$ and $Gtup{U}$
      become $Ftup{T}$ and $Gtup{T}$, and therefore the
      polynomial $Pleft( Ftup{U} ,Gtup{U} right)$ becomes
      $Ptup{ Ftup{T}, Gtup{T} }$.
      Hence, $Ptup{ Ftup{T}, Gtup{T} } =0$ (since $Ptup{ Ftup{U}, Gtup{U} } =0$).
      In other words, $P tup{F, G} =0$ (since $Ftup{T} =F$ and $Gtup{T} =G$).
      This completes the proof of Lemma 3. $blacksquare$




      Lemma 4. (a) Theorem 1 holds when $d = 0$.



      (b) Theorem 1 holds when $e = 0$.




      Proof of Lemma 4. (a) Assume that $d = 0$.
      Thus, $d + e > 0$ rewrites as $e > 0$. Hence, $e geq 1$.
      But the polynomial $F$ is constant (since $deg F leq d = 0$).
      In other words, $F = f$ for some $f in KK$. Consider this $f$.
      Now, let $Q$ be the polynomial $X - f in KKive{X, Y}$.
      Then, $Q$ is nonzero and satisfies
      $deg_X Q = 1 leq e$ (since $e geq 1$) and
      $deg_Y Q = 0 leq d$ and
      $Qleft(F, Gright) = F - f = 0$ (since $F = f$).
      Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
      in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
      and $deg_Y Pleq d$ and $Ptup{F, G} =0$
      (namely, $P = Q$). In other words, Theorem 1 holds (under
      our assumption that $d = 0$). This proves Lemma 4 (a).



      (b) The proof of Lemma 4 (b) is analogous to
      our above proof of Lemma 4 (a). $blacksquare$



      Now, we can prove Theorem 1 at last:



      Proof of Theorem 1. We shall prove Theorem 1 by induction on $e$.



      The induction base is the case when $e = 0$; this case follows
      from Lemma 4 (b).



      For the induction step, we fix a positive integer $eps$.
      Assume (as the induction hypothesis) that Theorem 1 holds for $e = eps - 1$.
      We must now prove that Theorem 1 holds for $e = eps$.



      Let $KK$ be a commutative ring. Let $F$ and $G$ be two
      polynomials in the polynomial ring $KK ive{T}$. Let
      $d$ be a nonnegative integer such that $d+eps > 0$ and $deg F leq d$ and $deg G leq eps$.
      Our goal is now to prove that the claim of Theorem 1 holds for $e = eps$.
      In other words, our goal is to prove that there exists a nonzero polynomial $PinKK ive{X, Y}$
      in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
      and $deg_Y Pleq d$ and $Ptup{F, G} =0$.



      Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
      +g_{eps}T^{eps}$
      , where $g_0 ,g_1 ,ldots,g_{eps}inKK$.
      (This can be done, since $deg G leq eps$.)
      If $g_{eps}^d neq 0$, then our goal follows immediately
      by applying Lemma 3 to $e = eps$.
      Thus, for the rest of this induction step, we WLOG assume that $g_{eps}^d = 0$.
      Hence, there exists a positive integer $m$ such that $g_{eps}^m = 0$ (namely, $m = eps$).
      Thus, there exists a smallest such $m$.
      Consider this smallest $m$.
      Then, $g_{eps}^m = 0$, but
      begin{align}
      text{every positive integer $ell < m$ satisfies $g_{eps}^{ell} neq 0$.}
      label{darij1.pf.t1.epsilon-ell}
      tag{4}
      end{align}



      We claim that $g_{eps}^{m-1} neq 0$. Indeed, if $m-1$ is a
      positive integer, then this follows from eqref{darij1.pf.t1.epsilon-ell} (applied to $ell = m-1$);
      otherwise, it follows from the fact that $g_{eps}^0 = 1 neq 0$
      (since the ring $KK$ is nontrivial).



      Now recall again that our goal is to prove that the claim of Theorem 1 holds for $e = eps$.
      If $d = 0$, then this goal follows from Lemma 4 (a).
      Hence, for the rest of this induction step, we WLOG assume that $d neq 0$.
      Hence, $d > 0$ (since $d$ is a nonnegative integer).



      We have $e geq 1$ (since $e$ is a positive integer), thus
      $e - 1 geq 0$. Hence, $d + left(e-1right) geq d > 0$.



      Let $I$ be the subset $left{x in KK mid g_{eps}^{m-1} x = 0 right}$ of $KK$.
      Then, $I$ is an ideal of $KK$ (namely, it is the
      annihilator of the subset
      $left{g_{eps}^{m-1}right}$ of $KK$);
      thus, $KK / I$ is a commutative $KK$-algebra.
      Denote this commutative $KK$-algebra $KK / I$ by $LL$.
      Let $pi$ be the canonical projection $KK to LL$.
      Of course, $pi$ is a surjective $KK$-algebra homomorphism.



      For any $a in KK$, we will denote the image of $a$ under $pi$ by $overline{a}$.



      The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{T} to LLive{T}$ (sending $T$ to $T$).
      For any $a in KKive{T}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



      The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$ (sending $X$ and $Y$ to $X$ and $Y$).
      For any $a in KKive{X, Y}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



      We have $g_{eps}^{m-1} g_{eps} = g_{eps}^m = 0$, so that $g_{eps} in I$ (by the definition of $I$);
      hence, the residue class $overline{g_{eps}}$ of $g_{eps}$ modulo the ideal $I$ is $0$.



      We have $g_{eps}^{m-1} cdot 1 = g_{eps}^{m-1} neq 0$ in $KK$,
      and thus $1 notin I$ (by the definition of $I$).
      Hence, the ideal $I$ is not the whole ring $KK$.
      Thus, the quotient ring $KK / I = LL$ is nontrivial.



      But $G=g_0 +g_1 T+g_2 T^2 +cdots +g_{eps}T^{eps}$
      and thus
      begin{align}
      overline{G}
      &= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps}} T^{eps} \
      &= left( overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} right)
      + underbrace{overline{g_{eps}}}_{= 0} T^{eps} \
      &= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} ,
      end{align}

      so that $deg overline{G} leq e-1$.
      Also, $deg overline{F} leq deg F leq d$.
      But the induction hypothesis tells us that Theorem 1 holds for $e = eps - 1$.
      Hence, we can apply Theorem 1 to $LL$, $overline{F}$, $overline{G}$ and $eps - 1$
      instead of $KK$, $F$, $G$ and $e$.
      We thus conclude that there exists a nonzero polynomial $Pin LLive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X P leq eps - 1$ and $deg_Y P leq d$ and $Pleft( overline{F}, overline{G} right) =0$.
      Consider this polynomial $P$, and denote it by $R$.
      Thus, $R in LL ive{X, Y}$ is a nonzero polynomial in two indeterminates $X$ and $Y$ and satisfies $deg_X R leq eps - 1$ and $deg_Y R leq d$ and $R left( overline{F}, overline{G} right) =0$.



      Clearly, there exists a polynomial $Q in KKive{X, Y}$ in two
      indeterminates $X$ and $Y$ that satisfies $deg_X Q = deg_X R$ and
      $deg_Y Q = deg_Y R$ and $overline{Q} = R$.
      (Indeed, we can construct such a $Q$ as follows: Write
      $R$ in the form
      $R = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} r_{i, j} X^i Y^j$
      for some coefficients $r_{i, j} in LL$.
      For each pair $left(i, jright)$, pick some
      $p_{i, j} in KK$ such that $overline{p_{i, j}} = r_{i, j}$
      (this can be done, since the homomorphism $pi : KK to LL$ is surjective).
      Then, set $Q = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} p_{i, j} X^i Y^j$.
      It is clear that this polynomial $Q$ satisfies $deg_X Q = deg_X R$ and
      $deg_Y Q = deg_Y R$ and $overline{Q} = R$.)



      We have $overline{Q left(F, Gright)}
      = underbrace{overline{Q}}_{=R} left( overline{F}, overline{G} right)
      = R left( overline{F}, overline{G} right) = 0$.
      In other words, the polynomial $Q left(F, Gright) in KKive{T}$
      lies in the kernel of the canonical
      $KK$-algebra homomorphism $KKive{T} to LLive{T}$.
      This means that each coefficient of this
      polynomial $Q left(F, Gright) in KKive{T}$
      lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
      In other words, each coefficient of this
      polynomial $Q left(F, Gright) in KKive{T}$ lies in $I$
      (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
      is $I$).
      Hence, each coefficient $c$ of this
      polynomial $Q left(F, Gright) in KKive{T}$
      satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
      Therefore, $g_{eps}^{m-1} Q left(F, Gright) = 0$.



      On the other hand, $overline{Q} = R$ is nonzero.
      In other words, the polynomial $Q in KKive{X, Y}$ does not lie
      in the kernel of the canonical
      $KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$.
      This means that not every coefficient of this
      polynomial $Q in KKive{X, Y}$
      lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
      In other words, not every coefficient of this
      polynomial $Q in KKive{X, Y}$ lies in $I$
      (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
      is $I$).
      Hence, not every coefficient $c$ of this
      polynomial $Q in KKive{X, Y}$
      satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
      Therefore, $g_{eps}^{m-1} Q neq 0$.
      So $g_{eps}^{m-1} Q in KKive{X, Y}$ is a nonzero polynomial
      in two indeterminates $X$ and $Y$ and satisfies
      $deg_X left( g_{eps}^{m-1} Q right) leq deg_X Q = deg_X R leq eps - 1 leq eps$
      and
      $deg_Y left( g_{eps}^{m-1} Q right) leq deg_Y Q = deg_Y R leq d$
      and $left(g_{eps}^{m-1} Q right) left(F, Gright) = g_{eps}^{m-1} Q left(F, Gright) = 0$.
      Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
      in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
      and $deg_Y Pleq d$ and $Ptup{F, G} =0$
      (namely, $P = g_{eps}^{m-1} Q$).
      We have thus reached our goal.



      So we have proven that Theorem 1 holds for $e = eps$.
      This completes the induction step. Thus, Theorem 1 is proven by induction. $blacksquare$
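      As a sanity check on Theorem 1 in a ring that actually has nilpotents (the situation that forces the induction above), here is a small Python sketch of mine (not part of the original argument): it brute-forces a nonzero $P$ with $deg_X P leq e$ and $deg_Y P leq d$ and $Ptup{F, G} = 0$ for $KK = mathbb{Z}/4$, $F = T^2$ (so $d = 2$) and $G = 2T$ (so $e = 1$), where the leading coefficient $g_1 = 2$ satisfies $g_1^d = 2^2 = 0$, so Lemma 3 alone does not apply. The exhaustive search is only feasible because the ring is finite and $d, e$ are tiny.

```python
from itertools import product

MOD = 4  # the ring ZZ/4, which has the nilpotent element 2

def poly_mul(a, b):
    # multiply two polynomials in T over ZZ/MOD (coefficient lists, low degree first)
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % MOD
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [(x + y) % MOD for x, y in zip(a, b)]

def poly_pow(a, k):
    out = [1]
    for _ in range(k):
        out = poly_mul(out, a)
    return out

F = [0, 0, 1]   # F = T^2, so d = 2
G = [0, 2]      # G = 2T, so e = 1; the leading coefficient 2 satisfies 2^d = 0 in ZZ/4
d, e = 2, 1

# Search for a nonzero P = sum of c_{ij} X^i Y^j with deg_X P <= e and
# deg_Y P <= d and P(F, G) = 0, as Theorem 1 guarantees.
monomials = [(i, j) for i in range(e + 1) for j in range(d + 1)]
witness = None
for coeffs in product(range(MOD), repeat=len(monomials)):
    if all(c == 0 for c in coeffs):
        continue
    value = [0]
    for (i, j), c in zip(monomials, coeffs):
        term = poly_mul([c], poly_mul(poly_pow(F, i), poly_pow(G, j)))
        value = poly_add(value, term)
    if all(v == 0 for v in value):
        witness = {m: c for m, c in zip(monomials, coeffs) if c}
        break

print(witness)  # first witness found: {(1, 2): 1}, i.e. P = X * Y^2
```

      The first witness found is $P = X Y^2$, which vanishes at $tup{F, G}$ simply because $G^2 = 4T^2 = 0$ in $left(mathbb{Z}/4right) ive{T}$.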































        up vote
        3
        down vote



        accepted













        Here is a proof using resultants, taken mostly from https://mathoverflow.net/questions/189181//189344#189344 . (For a short summary, see one of my comments to the OP.)



        $newcommand{KK}{mathbb{K}}
        newcommand{LL}{mathbb{L}}
        newcommand{NN}{mathbb{N}}
        newcommand{ww}{mathbf{w}}
        newcommand{eps}{varepsilon}
        newcommand{Res}{operatorname{Res}}
        newcommand{Syl}{operatorname{Syl}}
        newcommand{adj}{operatorname{adj}}
        newcommand{id}{operatorname{id}}
        newcommand{tilF}{widetilde{F}}
        newcommand{tilG}{widetilde{G}}
        newcommand{ive}[1]{left[ #1 right]}
        newcommand{tup}[1]{left( #1 right)}
        newcommand{zeroes}[1]{underbrace{0,0,ldots,0}_{#1 text{ zeroes}}}$

        We shall prove a more general statement:




        Theorem 1. Let $KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
        polynomials in the polynomial ring $KK ive{T}$. Let
        $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
        Then, there exists a nonzero polynomial $PinKK ive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




        Here and in the following, we are using the following notations:




        • "Ring" always means "associative ring with unity".


        • A ring $R$ is said to be nontrivial if $0 neq 1$ in $R$.


        • If $R$ is any polynomial in the polynomial ring $KK ive{X, Y}$, then $deg_X R$ denotes the degree of $R$ with respect to the variable $X$ (that is, it denotes the degree of $R$ when $R$ is considered as a polynomial in $tup{KK ive{Y}} ive{X} $), whereas $deg_Y R$ denotes the degree of the polynomial $R$ with respect to the variable $Y$.



        To prove Theorem 1, we recall the notion of the resultant of two polynomials over a
        commutative ring:




        Definition. Let $KK$ be a commutative ring.
        Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
        Let $dinNN$ and $einNN$ be such that $deg Pleq d$ and $deg Qleq e$.
        Thus, write the polynomials $P$ and $Q$ in the forms
        begin{align*}
        P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
        Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
        end{align*}

        where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to $KK$.
        Then, we let $Syl_{d,e} tup{P, Q}$ be the matrix
        begin{equation}
        left(
        begin{array}[c]{c}
        begin{array}[c]{ccccccccc}
        p_0 & 0 & 0 & cdots & 0 & q_0 & 0 & cdots & 0\
        p_1 & p_0 & 0 & cdots & 0 & q_1 & q_0 & cdots & 0\
        vdots & p_1 & p_0 & cdots & 0 & vdots & q_1 & ddots & vdots\
        vdots & vdots & p_1 & ddots & vdots & vdots & vdots & ddots &
        q_0 \
        p_d & vdots & vdots & ddots & p_0 & vdots & vdots & ddots & q_1 \
        0 & p_d & vdots & ddots & p_1 & q_e & vdots & ddots & vdots\
        vdots & vdots & ddots & ddots & vdots & 0 & q_e & ddots & vdots\
        0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
        0 & 0 & 0 & cdots & p_d & 0 & 0 & cdots & q_e
        end{array}
        \
        underbrace{ }_{etext{ columns}}
        underbrace{ }_{dtext{ columns}}
        end{array}
        right) inKK^{tup{d+e} timestup{d+e}};
        end{equation}

        this is the $tup{d+e} timestup{d+e}$-matrix whose first $e$ columns have the form
        begin{equation}
        left( zeroes{k},p_0 ,p_1 ,ldots ,p_d ,zeroes{e-1-k}right) ^{T}
        qquadtext{for }kinleft{ 0,1,ldots,e-1right} ,
        end{equation}

        and whose last $d$ columns have the form
        begin{equation}
        left( zeroes{ell},q_0 ,q_1 ,ldots,q_e ,zeroes{d-1-ell}right) ^{T}
        qquadtext{for }ellinleft{ 0,1,ldots,d-1right} .
        end{equation}

        Furthermore, we define $Res_{d,e}tup{P, Q}$ to be the element
        begin{equation}
        det tup{ Syl_{d,e}tup{P, Q} } in KK .
        end{equation}

        The matrix $Syl_{d,e}tup{P, Q}$ is called the Sylvester matrix of $P$ and $Q$ in degrees $d$ and $e$.
        Its determinant $Res_{d,e}tup{P, Q}$ is called the resultant of $P$ and $Q$ in degrees $d$ and $e$.



        It is common to apply this definition to the case when $d=deg P$ and $e=deg Q$; in this case, we simply call $Res_{d,e}tup{P, Q}$ the resultant of $P$ and $Q$, and denote it by $Res tup{P, Q}$.




        Here, we take $NN$ to mean the set $left{0,1,2,ldotsright}$ of all nonnegative integers.
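        To make the definition concrete, here is a minimal Python sketch of mine (the function names are my own) that assembles $Syl_{d,e}tup{P, Q}$ column by column, exactly as above, and computes its determinant exactly over the rationals:

```python
from fractions import Fraction

def sylvester(p, q):
    # p = [p_0, ..., p_d] and q = [q_0, ..., q_e]: coefficient lists, low degree first
    d, e = len(p) - 1, len(q) - 1
    n = d + e
    cols = []
    for k in range(e):  # first e columns: (0^k, p_0, ..., p_d, 0^(e-1-k))
        cols.append([0] * k + list(p) + [0] * (e - 1 - k))
    for l in range(d):  # last d columns: (0^l, q_0, ..., q_e, 0^(d-1-l))
        cols.append([0] * l + list(q) + [0] * (d - 1 - l))
    # transpose the list of columns into a row-major matrix
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def det(m):
    # exact Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, result = len(m), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            sign = -sign
        result *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return sign * result

def resultant(p, q):
    return det(sylvester(p, q))

print(resultant([1, 2], [3, 4, 5]))  # P = 1 + 2T, Q = 3 + 4T + 5T^2  ->  9
```

        For $P = 1 + 2T$ and $Q = 3 + 4T + 5T^2$, the determinant is $9$, which matches the cross-check $p_1^e , Qtup{-1/2} = 4 cdot frac{9}{4} = 9$ (the sign agrees here because $de$ is even).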



        One of the main properties of resultants is the following:




        Theorem 2. Let $KK$ be a commutative ring.
        Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
        Let $dinNN$ and $einNN$ be such that $d+e > 0$ and $deg Pleq d$ and $deg Qleq e$.
        Let $LL$ be a commutative $KK$-algebra, and let $winLL$ satisfy $Ptup{w} =0$ and $Qtup{w} = 0$.
        Then, $Res_{d,e}tup{P, Q} =0$ in $LL$.




        Proof of Theorem 2 (sketched). Recall that $Res_{d,e}tup{P, Q} =det tup{ Syl_{d,e}tup{P, Q} }$ (by the definition of $Res_{d,e}tup{P, Q}$).



        Write the polynomials $P$ and $Q$ in the forms
        begin{align*}
        P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
        Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
        end{align*}

        where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to
        $KK$.
        (We can do this, since $deg P leq d$ and $deg Q leq e$.)
        From $p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d = P$,
        we obtain
        $p_0 + p_1 w + p_2 w^2 + cdots + p_d w^d = Pleft(wright) = 0$.
        Similarly,
        $q_0 + q_1 w + q_2 w^2 + cdots + q_e w^e = 0$.



        Let $A$ be the matrix $Syl_{d,e}tup{P, Q}
        inKK^{tup{d+e} timestup{d+e} }$
        , regarded as a
        matrix in $LL ^{tup{d+e} timestup{d+e} }$ (by
        applying the canonical $KK$-algebra homomorphism $KK
        rightarrowLL$
        to all its entries).



        Let $ww$ be the row vector $left( w^{0},w^{1},ldots,w^{d+e-1}
        right) inLL ^{1timestup{d+e} }$
        . Let $mathbf{0}$ denote
        the zero vector in $LL ^{1timestup{d+e} }$.



        Now, it is easy to see that $ww A=mathbf{0}$. (Indeed, for each
        $kinleft{ 1,2,ldots,d+eright} $, we have
        begin{align*}
        & wwleft( text{the }ktext{-th column of }Aright) \
        & =
        begin{cases}
        p_0 w^{k-1}+p_1 w^k +p_2 w^{k+1}+cdots+p_d w^{k-1+d}, & text{if }kleq e;\
        q_0 w^{k-e-1}+q_1 w^{k-e}+q_2 w^{k-e+1}+cdots+q_e w^{k-1}, & text{if }k>e
        end{cases}
        \
        & =
        begin{cases}
        w^{k-1}left( p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d right) , & text{if }kleq e;\
        w^{k-e-1}left( q_0 +q_1 w+q_2 w^2 +cdots+q_e w^eright) , & text{if }k>e
        end{cases}
        \
        & =
        begin{cases}
        w^{k-1}0, & text{if }kleq e;\
        w^{k-e-1}0, & text{if }k>e
        end{cases}
        \
        & qquadleft(
        begin{array}[c]{c}
        text{since }p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d =0\
        text{and }q_0 +q_1 w+q_2 w^2 +cdots+q_e w^e =0
        end{array}
        right) \
        & =0.
        end{align*}

        But this means precisely that $ww A=mathbf{0}$.)



        But $A$ is a square matrix over a commutative ring; thus, the
        adjugate
        $adj A$ of $A$ satisfies $Acdotadj A=det
        Acdot I_{d+e}$
        (where $I_{d+e}$ denotes the identity matrix of size $d+e$).
        Hence, $wwunderbrace{Acdotadj A}_{=det Acdot
        I_{d+e}}=wwdet Acdot I_{d+e}=det Acdotww$
        . Comparing this
        with $underbrace{ww A}_{=mathbf{0}}cdotadj A
        =mathbf{0}cdotadj A=mathbf{0}$
        , we obtain
        $det Acdotww=mathbf{0}$.



        But $d+e > 0$; thus, the row vector $ww$ has a well-defined first entry.
        This first entry is $w^0 = 1$.
        Hence, the first entry of the row vector $det Acdotww$ is $det A cdot 1 = det A$.
        Hence, from $det Acdotww=mathbf{0}$, we conclude that $det A=0$.
        Comparing this with
        begin{equation}
        detunderbrace{A}_{=Syl_{d,e}tup{P, Q}} =det tup{ Syl_{d,e}tup{P, Q} }
        =Res_{d,e}tup{P, Q} ,
        end{equation}

        we obtain $Res_{d,e}tup{P, Q} =0$ (in $LL$). This proves Theorem 2. $blacksquare$
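        The two computational steps of this proof — the vector $ww$ annihilating $A$, and the determinant consequently vanishing — can be checked directly on a toy example. The following Python sketch (mine, not from the sources cited below) uses $P = T^2 - 3T + 2$ and $Q = T^2 - T - 2$, which share the root $w = 2$:

```python
# P = T^2 - 3T + 2 and Q = T^2 - T - 2 share the root w = 2.
p = [2, -3, 1]   # coefficients of P, low degree first (d = 2)
q = [-2, -1, 1]  # coefficients of Q, low degree first (e = 2)
d, e = 2, 2
w = 2

# Sylvester matrix in degrees d and e, with columns as in the definition above.
cols = [[0] * k + p + [0] * (e - 1 - k) for k in range(e)] \
     + [[0] * l + q + [0] * (d - 1 - l) for l in range(d)]
A = [[cols[j][i] for j in range(d + e)] for i in range(d + e)]

# The row vector ww = (w^0, w^1, ..., w^(d+e-1)) from the proof annihilates A:
ww = [w ** i for i in range(d + e)]
wA = [sum(ww[i] * A[i][j] for i in range(d + e)) for j in range(d + e)]
print(wA)  # -> [0, 0, 0, 0]

# ... and consequently the resultant, i.e. the determinant of A, vanishes:
def det(m):
    # cofactor expansion along the first row (fine for a 4x4 integer matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

print(det(A))  # -> 0
```

        Each entry of $ww A$ is $w^{k-1} Ptup{w}$ or $w^{k-e-1} Qtup{w}$, exactly as in the case distinction above, so all four entries vanish.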



        Theorem 2 (which I have proven in detail to stress how the proof uses nothing
        about $LL$ other than its commutativity) was just the meek tip of the
        resultant iceberg. Here are some further sources with deeper results:




        • Antoine Chambert-Loir, Résultants (minor errata).


        • Svante Janson, Resultant and discriminant of polynomials.


        • Gerald Myerson, On resultants, Proc. Amer. Math. Soc. 89 (1983), 419--420.



        Some of these sources use the matrix $left( Syl_{d,e}tup{P, Q} right) ^{T}$ instead of our
        $Syl_{d,e}tup{P, Q}$, but of course this
        matrix has the same determinant as $Syl_{d,e}tup{P, Q}$, so that their definition of a resultant is the same as mine.



        We are not yet ready to prove Theorem 1 directly. Instead, let us prove a
        weaker version of Theorem 1:




        Lemma 3. Let $KK$ be a commutative ring. Let $F$ and $G$ be two
        polynomials in the polynomial ring $KK ive{T}$. Let
        $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
        Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
        +g_e T^e $
        , where $g_0 ,g_1 ,ldots,g_e inKK$.
        Assume that $g_e^d neq 0$.
        Then, there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




        Proof of Lemma 3 (sketched). Let $widetilde{KK}$ be the
        commutative ring $KK ive{X, Y}$. Define two polynomials
        $tilF inwidetilde{KK}ive{T}$ and $tilG inwidetilde{KK}ive{T}$ by
        begin{equation}
        tilF =F-X=Ftup{T} -Xqquadtext{and}qquadtilG
        =G-Y=Gtup{T} -Y.
        end{equation}

        Note that $X$ and $Y$ have degree $0$ when considered as polynomials in
        $widetilde{KK}ive{T}$ (since $X$ and $Y$ belong to the
        ring $widetilde{KK}$). Thus, these new polynomials $tilF = F - X$
        and $tilG = G - Y$ have degrees $degtilF leq d$ (because
        $deg X = 0 leq d$ and $deg F leq d$) and
        $degtilG leq e$ (similarly).
        Hence, the resultant $Res_{d,e}tup{tilF, tilG} in
        widetilde{KK}$
        of these polynomials $tilF$ and
        $tilG$ in degrees $d$ and $e$ is well-defined. Let us denote this
        resultant $Res_{d,e}left( tilF
        ,tilG right)$
        by $P$. Hence,
        begin{equation}
        P=Res_{d,e}left( tilF ,tilG
        right) inwidetilde{KK}=KK ive{X, Y} .
        end{equation}



        Our next goal is to show that $P$ is a nonzero polynomial and satisfies
        $deg_X Pleq e$ and $deg_Y Pleq d$ and $Ptup{F, G} =0$. Once
        this is shown, Lemma 3 will obviously follow.



        We have
        begin{equation}
        P=Res_{d,e}left( tilF ,tilG
        right) =detleft( Syl_{d,e}left( tilF
        ,tilG right) right)
        label{darij1.pf.t1.P=det}
        tag{1}
        end{equation}

        (by the definition of $Res_{d,e}left(
        tilF ,tilG right)$
        ).



        Write the polynomial $F$ in the form $F=f_0 +f_1 T+f_2 T^2 +cdots
        +f_d T^d $
        , where $f_0 ,f_1 ,ldots,f_d inKK$. (This can be
        done, since $deg F leq d$.)



        Recall that $g_e ^d neq 0$. Thus, $left( -1right) ^e g_e^d neq 0$.



        For each $pinNN$, we let $S_{p}$ be the group of all permutations of
        the set $left{ 1,2,ldots,pright} $.



        Now,
        begin{align*}
        tilF & =F-X=left( f_0 +f_1 T+f_2 T^2 +cdots+f_d
        T^d right) -X\
        & qquadleft( text{since }F=f_0 +f_1 T+f_2 T^2 +cdots+f_d
        T^d right) \
        & =tup{f_0 - X} +f_1 T+f_2 T^2 +cdots+f_d T^d .
        end{align*}

        Thus, $f_0 -X,f_1 ,f_2 ,ldots,f_d $ are the coefficients of the
        polynomial $tilF inwidetilde{KK}ive{T}$ (since
        $f_0 -Xinwidetilde{KK}$). Similarly, $g_0 -Y,g_1 ,g_2
        ,ldots,g_e $
        are the coefficients of the polynomial $tilG
        inwidetilde{KK}ive{T}$
        . Hence, the definition of the
        matrix $Syl_{d,e}left( tilF ,tilG
        right)$
        yields
        begin{align}
        &Syl_{d,e}tup{tilF, tilG} \
        &=left(
        begin{array}[c]{c}
        begin{array}[c]{ccccccccc}
        f_0 -X & 0 & 0 & cdots & 0 & g_0 -Y & 0 & cdots & 0\
        f_1 & f_0 -X & 0 & cdots & 0 & g_1 & g_0 -Y & cdots & 0\
        vdots & f_1 & f_0-X & cdots & 0 & vdots & g_1 & ddots & vdots\
        vdots & vdots & f_1 & ddots & vdots & vdots & vdots & ddots &
        g_0 -Y\
        f_d & vdots & vdots & ddots & f_0 -X & vdots & vdots & ddots &
        g_1 \
        0 & f_d & vdots & ddots & f_1 & g_e & vdots & ddots & vdots\
        vdots & vdots & ddots & ddots & vdots & 0 & g_e & ddots & vdots\
        0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
        0 & 0 & 0 & cdots & f_d & 0 & 0 & cdots & g_e
        end{array}
        \
        underbrace{qquad qquad qquad qquad qquad qquad qquad qquad qquad qquad}
        _{etext{ columns}}
        underbrace{qquad qquad qquad qquad qquad qquad qquad}
        _{dtext{ columns}}
        end{array}
        right) \
        &inwidetilde{KK}^{tup{d+e} timesleft(
        d+eright) }.
        end{align}

        Now, let us use this explicit form of $Syl_{d,e} tup{tilF, tilG}$
        to compute $detleft( Syl_{d,e}tup{tilF, tilG} right)$ using
        the Leibniz formula.
        The Leibniz formula yields
        begin{equation}
        detleft( Syl_{d,e} tup{ tilF, tilG } right)
        =sum_{sigmain S_{d+e}}a_{sigma},
        label{darij1.pf.t1.det=sum}
        tag{2}
        end{equation}

        where for each permutation $sigmain S_{d+e}$, the addend $a_{sigma}$ is a
        product of entries of $Syl_{d,e} tup{tilF, tilG}$, possibly with a minus sign. More
        precisely,
        begin{equation}
        a_{sigma}=tup{-1}^{sigma}prod_{i=1}^{d+e}left( text{the }
        left( i,sigmaleft( iright) right) text{-th entry of }
        Syl_{d,e}tup{tilF, tilG}
        right)
        end{equation}

        for each $sigmain S_{d+e}$ (where $tup{-1}^{sigma}$ denotes the
        sign of the permutation $sigma$).



        Now, eqref{darij1.pf.t1.P=det} becomes
        begin{equation}
        P=detleft( Syl_{d,e}tup{tilF, tilG} right)
        =sum_{sigmain S_{d+e}}a_{sigma}
        label{darij1.pf.t1.P=sum}
        tag{3}
        end{equation}

        (by eqref{darij1.pf.t1.det=sum}).



        All entries of the matrix $Syl_{d,e}tup{tilF, tilG}$ are polynomials in the two
        indeterminates $X$ and $Y$; but only $d+e$ of these entries are non-constant
        polynomials (since all of $f_0 ,f_1 ,ldots,f_d ,g_0 ,g_1 ,ldots,g_e $
        belong to $KK$). More precisely, only $e$ entries of
        $Syl_{d,e}tup{tilF, tilG}$
        have non-zero degree with respect to the variable $X$ (namely, the first $e$
        entries of the diagonal of $Syl_{d,e}tup{tilF, tilG}$), and these $e$ entries have degree $1$
        with respect to this variable. Thus, for each $sigmain S_{d+e}$, the product
        $a_{sigma}$ contains at most $e$ many factors that have degree $1$ with
        respect to the variable $X$, while all its remaining factors have degree $0$
        with respect to this variable. Therefore, for each $sigmain S_{d+e}$, the
        product $a_{sigma}$ has degree $leq ecdot1=e$ with respect to the variable
        $X$. Hence, the sum $sum_{sigmain S_{d+e}}a_{sigma}$ of all these products
        $a_{sigma}$ also has degree $leq e$ with respect to the variable $X$. In
        other words, $deg_X left( sum_{sigmain S_{d+e}}a_{sigma}right) leq
        e$
        . In view of eqref{darij1.pf.t1.P=sum}, this rewrites as $deg_X Pleq e$.
        Similarly, $deg_Y Pleq d$ (since only $d$ entries of the matrix
        $Syl_{d,e}tup{tilF, tilG}
        $
        have non-zero degree with respect to the variable $Y$, and these $d$ entries
        have degree $1$ with respect to this variable).



        Next, we shall show that the polynomial $P$ is nonzero. Indeed, let us
        consider all elements of $widetilde{KK}$ as polynomials in the
        variable $X$ over the ring $KK ive{Y} $. For each
        permutation $sigmain S_{d+e}$, the product $a_{sigma}$ (thus considered)
        has degree $leq e$ (as we have previously shown). Let us now compute the
        coefficient of $X^e $ in this product $a_{sigma}$. There are three possible cases:




        • Case 1: The permutation $sigmain S_{d+e}$ does not satisfy $left(
          sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
          right)$
          . Thus, the product $a_{sigma}$ has strictly fewer than $e$
          factors that have degree $1$ with respect to the variable $X$, while all its
          remaining factors have degree $0$ with respect to this variable. Thus, the
          whole product $a_{sigma}$ has degree $<e$ with respect to the variable $X$.
          Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


        • Case 2: The permutation $sigmain S_{d+e}$ satisfies $left(
          sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
          right)$
          , but is not the identity map $idin S_{d+e}$. Thus,
          there must exist at least one $iinleft{ 1,2,ldots,d+eright} $ such
          that $sigmaleft( iright) <i$. Consider such an $i$, and notice that it
          must satisfy $i>e$ and $sigmaleft( iright) >e$; hence, the $left(
          i,sigmaleft( iright) right)$
          -th entry of
          $Syl_{d,e}tup{tilF, tilG}$ is $0$. Thus, the
          whole product $a_{sigma}$ is $0$ (since the latter entry is a factor in this
          product). Thus, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


        • Case 3: The permutation $sigmain S_{d+e}$ is the identity map
          $idin S_{d+e}$. Thus, the product $a_{sigma}$ is $left(
          f_0 -Xright) ^e g_e^d $
          (since $tup{-1}^{id
          }=1$
          ). Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is
          $tup{-1}^e g_e^d $.



        Summarizing, we thus conclude that the coefficient of $X^e $ in the product
        $a_{sigma}$ is $0$ unless $sigma=id$, in which case it is
        $tup{-1}^e g_e^d $. Hence, the coefficient of $X^e $ in the
        sum $sum_{sigmain S_{d+e}}a_{sigma}$ is $tup{-1}^e g_e^d
        neq0$
        . Therefore, $sum_{sigmain S_{d+e}}a_{sigma}neq0$. In view of
        eqref{darij1.pf.t1.P=sum}, this rewrites as $Pneq0$. In other words, the
        polynomial $P$ is nonzero.



        Finally, it remains to prove that $Ptup{F, G} =0$. In order to do
        this, we let $LL$ be the polynomial ring $KK ive{U}
        $
        in a new indeterminate $U$. We let $varphi:KK ive{X, Y}
        rightarrowLL$
        be the unique $KK$-algebra homomorphism that
        sends $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$.
        (This is well-defined by the universal property of the polynomial ring
        $KK ive{X, Y}$.)
        Note that $varphi$ is a $KK$-algebra homomorphism from
        $widetilde{KK}$ to $LL$ (since $KK ive{X, Y}
        =widetilde{KK}$
        ). Thus, $LL$ becomes a $widetilde{KK
        }$
        -algebra via this homomorphism $varphi$.



        Now, recall that the polynomial $tilF inwidetilde{KK}ive{T}$
        was defined by $tilF =F-X$. Hence, $tilF left(
        Uright) =Ftup{U} -varphitup{X}$
        . (Indeed, when we
        regard $X$ as an element of $widetilde{KK}ive{T}$, the
        polynomial $X$ is simply a constant, and thus evaluating it at $U$ yields the
        canonical image of $X$ in $LL$, which is $varphitup{X}$.)
        But $varphitup{X} =Ftup{U}$ (by the definition of
        $varphi$).
        Hence, $tilF tup{U} =Ftup{U} -varphitup{X} =0$. Similarly, $tilG tup{U} =0$.



        Thus, the element $UinLL$ satisfies $tilF tup{U}
        =0$
        and $tilG tup{U} =0$. Hence, Theorem 2 (applied to
        $widetilde{KK}$, $tilF$, $tilG$ and $U$ instead of $KK$, $P$, $Q$ and $w$) yields that
        $Res_{d,e}tup{tilF, tilG} = 0$ in $LL$.
        In other words, $varphitup{ Res_{d,e}tup{tilF, tilG} } =0$. In view of
        $Res_{d,e}tup{tilF, tilG} =P$, this rewrites as $varphitup{P} =0$.



        But recall that $varphi$ is the $KK$-algebra homomorphism that sends
        $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$. Hence, it
        sends any polynomial $QinKK ive{X, Y}$ to $Q tup{ Ftup{U}, Gtup{U} }$. Applying this to $Q=P$, we
        conclude that it sends $P$ to $P tup{ Ftup{U}, Gtup{U} }$.
        In other words, $varphitup{P} =P tup{ Ftup{U}, Gtup{U} }$;
        hence, $P tup{ Ftup{U}, Gtup{U} } =varphitup{P} =0$.



        Now, $Ftup{U}$ and $Gtup{U}$ are polynomials in the
        indeterminate $U$ over $KK$. If we rename the indeterminate $U$ as
        $T$, then these polynomials $Ftup{U}$ and $Gtup{U}$
        become $Ftup{T}$ and $Gtup{T}$, and therefore the
        polynomial $Pleft( Ftup{U} ,Gtup{U} right)$ becomes
        $Ptup{ Ftup{T}, Gtup{T} }$.
        Hence, $Ptup{ Ftup{T}, Gtup{T} } =0$ (since $Ptup{ Ftup{U}, Gtup{U} } =0$).
        In other words, $P tup{F, G} =0$ (since $Ftup{T} =F$ and $Gtup{T} =G$).
        This completes the proof of Lemma 3. $blacksquare$
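        Lemma 3's construction can be carried out explicitly in a few lines of Python. The sketch below (my own, with ad-hoc data structures) represents elements of $KK ive{X, Y}$ as dictionaries mapping exponent pairs to integer coefficients, builds $Syl_{d,e}tup{F - X, G - Y}$ for $F = T^2$ and $G = T$ over the integers, and confirms that its determinant $P$ satisfies $Ptup{F, G} = 0$; the result is the expected relation $P = Y^2 - X$.

```python
from collections import defaultdict

# Bivariate polynomials in X, Y are dicts {(i, j): coeff}; univariate
# polynomials in T over that ring are lists of such dicts, low degree first.

def add(a, b):
    out = defaultdict(int)
    for m, c in list(a.items()) + list(b.items()):
        out[m] += c
    return {m: c for m, c in out.items() if c}

def mul(a, b):
    out = defaultdict(int)
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            out[(i1 + i2, j1 + j2)] += c1 * c2
    return {m: c for m, c in out.items() if c}

def det(m):
    # cofactor expansion; the entries are bivariate polynomials
    if len(m) == 1:
        return m[0][0]
    result = {}
    for j in range(len(m)):
        term = mul(m[0][j], det([row[:j] + row[j + 1:] for row in m[1:]]))
        if j % 2:
            term = {mono: -c for mono, c in term.items()}
        result = add(result, term)
    return result

def const(c):
    return {(0, 0): c} if c else {}

# F = T^2 (d = 2), G = T (e = 1); we expect the relation X = Y^2.
F = [0, 0, 1]
G = [0, 1]
d, e = 2, 1

# Coefficients of F~ = F - X and G~ = G - Y in K[X, Y][T]:
Ft = [add(const(c), {(1, 0): -1} if k == 0 else {}) for k, c in enumerate(F)]
Gt = [add(const(c), {(0, 1): -1} if k == 0 else {}) for k, c in enumerate(G)]

# Sylvester matrix of F~ and G~ in degrees d and e, as in the definition:
cols = [[const(0)] * k + Ft + [const(0)] * (e - 1 - k) for k in range(e)] \
     + [[const(0)] * l + Gt + [const(0)] * (d - 1 - l) for l in range(d)]
A = [[cols[j][i] for j in range(d + e)] for i in range(d + e)]

P = det(A)
print(P)  # -> {(1, 0): -1, (0, 2): 1}, i.e. P = Y^2 - X

# Check P(F, G) = 0 in K[T]: substitute and collect powers of T.
subs = defaultdict(int)
for (i, j), c in P.items():
    poly = [1]  # F^i * G^j as a plain univariate polynomial over the integers
    for factor in [F] * i + [G] * j:
        poly = [sum(poly[a] * factor[b] for a in range(len(poly))
                    for b in range(len(factor)) if a + b == n)
                for n in range(len(poly) + len(factor) - 1)]
    for n, v in enumerate(poly):
        subs[n] += c * v
print(all(v == 0 for v in subs.values()))  # -> True
```

        The cofactor-expansion determinant is exponential in $d + e$, which is harmless here; for larger degrees one would use fraction-free (Bareiss) elimination over the polynomial ring instead.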




        Lemma 4. (a) Theorem 1 holds when $d = 0$.



        (b) Theorem 1 holds when $e = 0$.




        Proof of Lemma 4. (a) Assume that $d = 0$.
        Thus, $d + e > 0$ rewrites as $e > 0$. Hence, $e geq 1$.
        But the polynomial $F$ is constant (since $deg F leq d = 0$).
        In other words, $F = f$ for some $f in KK$. Consider this $f$.
        Now, let $Q$ be the polynomial $X - f in KKive{X, Y}$.
        Then, $Q$ is nonzero and satisfies
        $deg_X Q = 1 leq e$ (since $e geq 1$) and
        $deg_Y Q = 0 leq d$ and
        $Qleft(F, Gright) = F - f = 0$ (since $F = f$).
        Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$
        (namely, $P = Q$). In other words, Theorem 1 holds (under
        our assumption that $d = 0$). This proves Lemma 4 (a).



        (b) The proof of Lemma 4 (b) is analogous to
        our above proof of Lemma 4 (a). $blacksquare$



        Now, we can prove Theorem 1 at last:



        Proof of Theorem 1. We shall prove Theorem 1 by induction on $e$.



        The induction base is the case when $e = 0$; this case follows
        from Lemma 4 (b).



        For the induction step, we fix a positive integer $eps$.
        Assume (as the induction hypothesis) that Theorem 1 holds for $e = eps - 1$.
        We must now prove that Theorem 1 holds for $e = eps$.



        Let $KK$ be a commutative ring. Let $F$ and $G$ be two
        polynomials in the polynomial ring $KK ive{T}$. Let
        $d$ be a nonnegative integer such that $d+eps > 0$ and $deg F leq d$ and $deg G leq eps$.
        Our goal is now to prove that the claim of Theorem 1 holds for $e = eps$.
        In other words, our goal is to prove that there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$.



        Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
        +g_{eps}T^{eps}$
        , where $g_0 ,g_1 ,ldots,g_{eps}inKK$.
        (This can be done, since $deg G leq eps$.)
        If $g_{eps}^d neq 0$, then our goal follows immediately
        by applying Lemma 3 to $e = eps$.
        Thus, for the rest of this induction step, we WLOG assume that $g_{eps}^d = 0$.
        Hence, there exists a positive integer $m$ such that $g_{eps}^m = 0$ (namely, $m = d$; note that $d$ is positive here, since $d = 0$ would yield $g_{eps}^d = g_{eps}^0 = 1 neq 0$ in the nontrivial ring $KK$).
        Thus, there exists a smallest such $m$.
        Consider this smallest $m$.
        Then, $g_{eps}^m = 0$, but
        begin{align}
        text{every positive integer $ell < m$ satisfies $g_{eps}^{ell} neq 0$.}
        label{darij1.pf.t1.epsilon-ell}
        tag{4}
        end{align}



        We claim that $g_{eps}^{m-1} neq 0$. Indeed, if $m-1$ is a
        positive integer, then this follows from eqref{darij1.pf.t1.epsilon-ell} (applied to $ell = m-1$);
        otherwise, it follows from the fact that $g_{eps}^0 = 1 neq 0$
        (since the ring $KK$ is nontrivial).



        Now recall again that our goal is to prove that the claim of Theorem 1 holds for $e = eps$.
        If $d = 0$, then this goal follows from Lemma 4 (a).
        Hence, for the rest of this induction step, we WLOG assume that $d neq 0$.
        Hence, $d > 0$ (since $d$ is a nonnegative integer).



        We have $eps geq 1$ (since $eps$ is a positive integer), thus
        $eps - 1 geq 0$. Hence, $d + left(eps-1right) geq d > 0$.



        Let $I$ be the subset $left{x in KK mid g_{eps}^{m-1} x = 0 right}$ of $KK$.
        Then, $I$ is an ideal of $KK$ (namely, it is the
        annihilator of the subset
        $left{g_{eps}^{m-1}right}$ of $KK$);
        thus, $KK / I$ is a commutative $KK$-algebra.
        Denote this commutative $KK$-algebra $KK / I$ by $LL$.
        Let $pi$ be the canonical projection $KK to LL$.
        Of course, $pi$ is a surjective $KK$-algebra homomorphism.



        For any $a in KK$, we will denote the image of $a$ under $pi$ by $overline{a}$.



        In other words, not every coefficient of this
        polynomial $Q in KKive{X, Y}$ lies in $I$
        (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
        is $I$).
        Hence, not every coefficient $c$ of this
        polynomial $Q in KKive{X, Y}$
        satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
        Therefore, $g_{eps}^{m-1} Q neq 0$.
        So $g_{eps}^{m-1} Q in KKive{X, Y}$ is a nonzero polynomial
        in two indeterminates $X$ and $Y$ and satisfies
        $deg_X left( g_{eps}^{m-1} Q right) leq deg_X Q = deg_X R leq eps - 1 leq eps$
        and
        $deg_Y left( g_{eps}^{m-1} Q right) leq deg_Y Q = deg_Y R leq d$
        and $left(g_{eps}^{m-1} Q right) left(F, Gright) = g_{eps}^{m-1} Q left(F, Gright) = 0$.
        Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$
        (namely, $P = g_{eps}^{m-1} Q$).
        We have thus reached our goal.



        So we have proven that Theorem 1 holds for $e = eps$.
        This completes the induction step. Thus, Theorem 1 is proven by induction. $blacksquare$






        share|cite|improve this answer














        Here is a proof using resultants, taken mostly from https://mathoverflow.net/questions/189181//189344#189344 . (For a short summary, see one of my comments to the OP.)



        $newcommand{KK}{mathbb{K}}
        newcommand{LL}{mathbb{L}}
        newcommand{NN}{mathbb{N}}
        newcommand{ww}{mathbf{w}}
        newcommand{eps}{varepsilon}
        newcommand{Res}{operatorname{Res}}
        newcommand{Syl}{operatorname{Syl}}
        newcommand{adj}{operatorname{adj}}
        newcommand{id}{operatorname{id}}
        newcommand{tilF}{widetilde{F}}
        newcommand{tilG}{widetilde{G}}
        newcommand{ive}[1]{left[ #1 right]}
        newcommand{tup}[1]{left( #1 right)}
        newcommand{zeroes}[1]{underbrace{0,0,ldots,0}_{#1 text{ zeroes}}}$

        We shall prove a more general statement:




        Theorem 1. Let $KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
        polynomials in the polynomial ring $KK ive{T}$. Let
        $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
        Then, there exists a nonzero polynomial $PinKK ive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




        Here and in the following, we are using the following notations:




        • "Ring" always means "associative ring with unity".


        • A ring $R$ is said to be nontrivial if $0 neq 1$ in $R$.


        • If $R$ is any polynomial in the polynomial ring $KK ive{X, Y}$, then $deg_X R$ denotes the degree of $R$ with respect to the variable $X$ (that is, it denotes the degree of $R$ when $R$ is considered as a polynomial in $tup{KK ive{Y}} ive{X} $), whereas $deg_Y R$ denotes the degree of the polynomial $R$ with respect to the variable $Y$.



        To prove Theorem 1, we recall the notion of the resultant of two polynomials over a
        commutative ring:




        Definition. Let $KK$ be a commutative ring.
        Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
        Let $dinNN$ and $einNN$ be such that $deg Pleq d$ and $deg Qleq e$.
        Thus, write the polynomials $P$ and $Q$ in the forms
        begin{align*}
        P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
        Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
        end{align*}

        where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to $KK$.
        Then, we let $Syl_{d,e} tup{P, Q}$ be the matrix
        begin{equation}
        left(
        begin{array}[c]{c}
        begin{array}[c]{ccccccccc}
        p_0 & 0 & 0 & cdots & 0 & q_0 & 0 & cdots & 0\
        p_1 & p_0 & 0 & cdots & 0 & q_1 & q_0 & cdots & 0\
        vdots & p_1 & p_0 & cdots & 0 & vdots & q_1 & ddots & vdots\
        vdots & vdots & p_1 & ddots & vdots & vdots & vdots & ddots &
        q_0 \
        p_d & vdots & vdots & ddots & p_0 & vdots & vdots & ddots & q_1 \
        0 & p_d & vdots & ddots & p_1 & q_e & vdots & ddots & vdots\
        vdots & vdots & ddots & ddots & vdots & 0 & q_e & ddots & vdots\
        0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
        0 & 0 & 0 & cdots & p_d & 0 & 0 & cdots & q_e
        end{array}
        \
        underbrace{qquad qquad qquad qquad qquad}_{etext{ columns}}
        underbrace{qquad qquad qquad qquad qquad}_{dtext{ columns}}
        end{array}
        right) inKK^{tup{d+e} timestup{d+e}};
        end{equation}

        this is the $tup{d+e} timestup{d+e}$-matrix whose first $e$ columns have the form
        begin{equation}
        left( zeroes{k},p_0 ,p_1 ,ldots ,p_d ,zeroes{e-1-k}right) ^{T}
        qquadtext{for }kinleft{ 0,1,ldots,e-1right} ,
        end{equation}

        and whose last $d$ columns have the form
        begin{equation}
        left( zeroes{ell},q_0 ,q_1 ,ldots,q_e ,zeroes{d-1-ell}right) ^{T}
        qquadtext{for }ellinleft{ 0,1,ldots,d-1right} .
        end{equation}

        Furthermore, we define $Res_{d,e}tup{P, Q}$ to be the element
        begin{equation}
        det tup{ Syl_{d,e}tup{P, Q} } in KK .
        end{equation}

        The matrix $Syl_{d,e}tup{P, Q}$ is called the Sylvester matrix of $P$ and $Q$ in degrees $d$ and $e$.
        Its determinant $Res_{d,e}tup{P, Q}$ is called the resultant of $P$ and $Q$ in degrees $d$ and $e$.



        It is common to apply this definition to the case when $d=deg P$ and $e=deg Q$; in this case, we simply call $Res_{d,e}tup{P, Q}$ the resultant of $P$ and $Q$, and denote it by $Res tup{P, Q}$.




        Here, we take $NN$ to mean the set $left{0,1,2,ldotsright}$ of all nonnegative integers.
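        Since the Sylvester matrix is defined purely in terms of the coefficient lists, resultants can be computed mechanically. Here is a small Python sketch (my own illustration, not part of the argument) that builds $Syl_{d,e}tup{P, Q}$ from coefficient lists and evaluates its determinant by the Leibniz formula:

```python
from itertools import permutations

def sylvester(p, q, d, e):
    """Build Syl_{d,e}(P, Q) for P, Q given by coefficient lists
    p = [p_0, ..., p_d] and q = [q_0, ..., q_e] (lowest degree first).
    The first e columns carry p shifted down by 0, 1, ..., e-1 rows;
    the last d columns carry q shifted down by 0, 1, ..., d-1 rows."""
    n = d + e
    M = [[0] * n for _ in range(n)]
    for k in range(e):
        for i, c in enumerate(p):
            M[k + i][k] = c
    for l in range(d):
        for i, c in enumerate(q):
            M[l + i][e + l] = c
    return M

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        sign = 1
        for i in range(n):          # sign = (-1)^{number of inversions}
            for j in range(i + 1, n):
                if sigma[i] > sigma[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += sign * prod
    return total

# Res_{2,1}(T^2 - 1, T - 1) = 0, since the polynomials share the root 1:
print(det(sylvester([-1, 0, 1], [-1, 1], 2, 1)))   # 0
```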



        One of the main properties of resultants is the following:




        Theorem 2. Let $KK$ be a commutative ring.
        Let $Pin KK ive{T}$ and $QinKK ive{T}$ be two polynomials in the polynomial ring $KK ive{T}$.
        Let $dinNN$ and $einNN$ be such that $d+e > 0$ and $deg Pleq d$ and $deg Qleq e$.
        Let $LL$ be a commutative $KK$-algebra, and let $winLL$ satisfy $Ptup{w} =0$ and $Qtup{w} = 0$.
        Then, $Res_{d,e}tup{P, Q} =0$ in $LL$.




        Proof of Theorem 2 (sketched). Recall that $Res_{d,e}tup{P, Q} =det tup{ Syl_{d,e}tup{P, Q} }$ (by the definition of $Res_{d,e}tup{P, Q}$).



        Write the polynomials $P$ and $Q$ in the forms
        begin{align*}
        P & =p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d qquadtext{and}\
        Q & =q_0 +q_1 T+q_2 T^2 +cdots+q_e T^e ,
        end{align*}

        where $p_0 ,p_1 ,ldots,p_d ,q_0 ,q_1 ,ldots,q_e $ belong to
        $KK$.
        (We can do this, since $deg P leq d$ and $deg Q leq e$.)
        From $p_0 +p_1 T+p_2 T^2 +cdots+p_d T^d = P$,
        we obtain
        $p_0 + p_1 w + p_2 w^2 + cdots + p_d w^d = Pleft(wright) = 0$.
        Similarly,
        $q_0 + q_1 w + q_2 w^2 + cdots + q_e w^e = 0$.



        Let $A$ be the matrix $Syl_{d,e}tup{P, Q}
        inKK^{tup{d+e} timestup{d+e} }$
        , regarded as a
        matrix in $LL ^{tup{d+e} timestup{d+e} }$ (by
        applying the canonical $KK$-algebra homomorphism $KK
        rightarrowLL$
        to all its entries).



        Let $ww$ be the row vector $left( w^{0},w^{1},ldots,w^{d+e-1}
        right) inLL ^{1timestup{d+e} }$
        . Let $mathbf{0}$ denote
        the zero vector in $LL ^{1timestup{d+e} }$.



        Now, it is easy to see that $ww A=mathbf{0}$. (Indeed, for each
        $kinleft{ 1,2,ldots,d+eright} $, we have
        begin{align*}
        & wwleft( text{the }ktext{-th column of }Aright) \
        & =
        begin{cases}
        p_0 w^{k-1}+p_1 w^k +p_2 w^{k+1}+cdots+p_d w^{k-1+d}, & text{if }kleq e;\
        q_0 w^{k-e-1}+q_1 w^{k-e}+q_2 w^{k-e+1}+cdots+q_e w^{k-1}, & text{if }k>e
        end{cases}
        \
        & =
        begin{cases}
        w^{k-1}left( p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d right) , & text{if }kleq e;\
        w^{k-e-1}left( q_0 +q_1 w+q_2 w^2 +cdots+q_e w^eright) , & text{if }k>e
        end{cases}
        \
        & =
        begin{cases}
        w^{k-1}0, & text{if }kleq e;\
        w^{k-e-1}0, & text{if }k>e
        end{cases}
        \
        & qquadleft(
        begin{array}[c]{c}
        text{since }p_0 +p_1 w+p_2 w^2 +cdots+p_d w^d =0\
        text{and }q_0 +q_1 w+q_2 w^2 +cdots+q_e w^e =0
        end{array}
        right) \
        & =0.
        end{align*}

        But this means precisely that $ww A=mathbf{0}$.)



        But $A$ is a square matrix over a commutative ring; thus, the
        adjugate
        $adj A$ of $A$ satisfies $Acdotadj A=det
        Acdot I_{d+e}$
        (where $I_{d+e}$ denotes the identity matrix of size $d+e$).
        Hence, $wwunderbrace{Acdotadj A}_{=det Acdot
        I_{d+e}}=wwdet Acdot I_{d+e}=det Acdotww$
        . Comparing this
        with $underbrace{ww A}_{=mathbf{0}}cdotadj A
        =mathbf{0}cdotadj A=mathbf{0}$
        , we obtain
        $det Acdotww=mathbf{0}$.



        But $d+e > 0$; thus, the row vector $ww$ has a well-defined first entry.
        This first entry is $w^0 = 1$.
        Hence, the first entry of the row vector $det Acdotww$ is $det A cdot 1 = det A$.
        Hence, from $det Acdotww=mathbf{0}$, we conclude that $det A=0$.
        Comparing this with
        begin{equation}
        detunderbrace{A}_{=Syl_{d,e}tup{P, Q}} =det tup{ Syl_{d,e}tup{P, Q} }
        =Res_{d,e}tup{P, Q} ,
        end{equation}

        we obtain $Res_{d,e}tup{P, Q} =0$ (in $LL$). This proves Theorem 2. $blacksquare$
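        To see Theorem 2 in action on a concrete example (my own sanity check, not part of the proof): take $P = T^2 - 1$ and $Q = T - 1$ over $KK = LL = mathbb{Z}$, with the common root $w = 1$. The row vector $ww = tup{w^0, w^1, w^2}$ annihilates the Sylvester matrix, which forces the determinant to vanish:

```python
# Sanity check of Theorem 2 for P = T^2 - 1 (d = 2) and Q = T - 1 (e = 1)
# over the integers; they share the root w = 1.  The 3x3 Sylvester matrix
# has one column of P-coefficients and two shifted columns of Q-coefficients.
w = 1
A = [[-1, -1,  0],
     [ 0,  1, -1],
     [ 1,  0,  1]]
ww = [w**0, w**1, w**2]          # the row vector (w^0, ..., w^{d+e-1})
wwA = [sum(ww[i] * A[i][j] for i in range(3)) for j in range(3)]
det_A = (A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
       - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
       + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]))
print(wwA, det_A)    # wwA = [0, 0, 0], and det A = Res_{2,1}(P, Q) = 0
```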



        Theorem 2 (which I have proven in detail to stress how the proof uses nothing
        about $LL$ other than its commutativity) was just the meek tip of the
        resultant iceberg. Here are some further sources with deeper results:




        • Antoine Chambert-Loir, Résultants (minor errata).


        • Svante Janson, Resultant and discriminant of polynomials.


        • Gerald Myerson, On resultants, Proc. Amer. Math. Soc. 89 (1983), 419--420.



        Some of these sources use the matrix $left( Syl_{d,e}tup{P, Q} right) ^{T}$ instead of our
        $Syl_{d,e}tup{P, Q}$, but of course this
        matrix has the same determinant as $Syl_{d,e}tup{P, Q}$, so that their definition of a resultant is the same as mine.



        We are not yet ready to prove Theorem 1 directly. Instead, let us prove a
        weaker version of Theorem 1:




        Lemma 3. Let $KK$ be a commutative ring. Let $F$ and $G$ be two
        polynomials in the polynomial ring $KK ive{T}$. Let
        $d$ and $e$ be nonnegative integers such that $d+e > 0$ and $deg F leq d$ and $deg G leq e$.
        Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
        +g_e T^e $
        , where $g_0 ,g_1 ,ldots,g_e inKK$.
        Assume that $g_e^d neq 0$.
        Then, there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$.




        Proof of Lemma 3 (sketched). Let $widetilde{KK}$ be the
        commutative ring $KK ive{X, Y}$. Define two polynomials
        $tilF inwidetilde{KK}ive{T}$ and $tilG inwidetilde{KK}ive{T}$ by
        begin{equation}
        tilF =F-X=Ftup{T} -Xqquadtext{and}qquadtilG
        =G-Y=Gtup{T} -Y.
        end{equation}

        Note that $X$ and $Y$ have degree $0$ when considered as polynomials in
        $widetilde{KK}ive{T}$ (since $X$ and $Y$ belong to the
        ring $widetilde{KK}$). Thus, these new polynomials $tilF = F - X$
        and $tilG = G - Y$ have degrees $degtilF leq d$ (because
        $deg X = 0 leq d$ and $deg F leq d$) and
        $degtilG leq e$ (similarly).
        Hence, the resultant $Res_{d,e}tup{tilF, tilG} in
        widetilde{KK}$
        of these polynomials $tilF$ and
        $tilG$ in degrees $d$ and $e$ is well-defined. Let us denote this
        resultant $Res_{d,e}left( tilF
        ,tilG right)$
        by $P$. Hence,
        begin{equation}
        P=Res_{d,e}left( tilF ,tilG
        right) inwidetilde{KK}=KK ive{X, Y} .
        end{equation}



        Our next goal is to show that $P$ is a nonzero polynomial and satisfies
        $deg_X Pleq e$ and $deg_Y Pleq d$ and $Ptup{F, G} =0$. Once
        this is shown, Lemma 3 will obviously follow.



        We have
        begin{equation}
        P=Res_{d,e}left( tilF ,tilG
        right) =detleft( Syl_{d,e}left( tilF
        ,tilG right) right)
        label{darij1.pf.t1.P=det}
        tag{1}
        end{equation}

        (by the definition of $Res_{d,e}left(
        tilF ,tilG right)$
        ).



        Write the polynomial $F$ in the form $F=f_0 +f_1 T+f_2 T^2 +cdots
        +f_d T^d $
        , where $f_0 ,f_1 ,ldots,f_d inKK$. (This can be
        done, since $deg F leq d$.)



        Recall that $g_e ^d neq 0$. Thus, $left( -1right) ^e g_e^d neq 0$.



        For each $pinNN$, we let $S_{p}$ be the group of all permutations of
        the set $left{ 1,2,ldots,pright} $.



        Now,
        begin{align*}
        tilF & =F-X=left( f_0 +f_1 T+f_2 T^2 +cdots+f_d
        T^d right) -X\
        & qquadleft( text{since }F=f_0 +f_1 T+f_2 T^2 +cdots+f_d
        T^d right) \
        & =tup{f_0 - X} +f_1 T+f_2 T^2 +cdots+f_d T^d .
        end{align*}

        Thus, $f_0 -X,f_1 ,f_2 ,ldots,f_d $ are the coefficients of the
        polynomial $tilF inwidetilde{KK}ive{T}$ (since
        $f_0 -Xinwidetilde{KK}$). Similarly, $g_0 -Y,g_1 ,g_2
        ,ldots,g_e $
        are the coefficients of the polynomial $tilG
        inwidetilde{KK}ive{T}$
        . Hence, the definition of the
        matrix $Syl_{d,e}left( tilF ,tilG
        right)$
        yields
        begin{align}
        &Syl_{d,e}tup{tilF, tilG} \
        &=left(
        begin{array}[c]{c}
        begin{array}[c]{ccccccccc}
        f_0 -X & 0 & 0 & cdots & 0 & g_0 -Y & 0 & cdots & 0\
        f_1 & f_0 -X & 0 & cdots & 0 & g_1 & g_0 -Y & cdots & 0\
        vdots & f_1 & f_0-X & cdots & 0 & vdots & g_1 & ddots & vdots\
        vdots & vdots & f_1 & ddots & vdots & vdots & vdots & ddots &
        g_0 -Y\
        f_d & vdots & vdots & ddots & f_0 -X & vdots & vdots & ddots &
        g_1 \
        0 & f_d & vdots & ddots & f_1 & g_e & vdots & ddots & vdots\
        vdots & vdots & ddots & ddots & vdots & 0 & g_e & ddots & vdots\
        0 & 0 & 0 & ddots & vdots & vdots & vdots & ddots & vdots\
        0 & 0 & 0 & cdots & f_d & 0 & 0 & cdots & g_e
        end{array}
        \
        underbrace{qquad qquad qquad qquad qquad qquad qquad qquad qquad qquad}
        _{etext{ columns}}
        underbrace{qquad qquad qquad qquad qquad qquad qquad}
        _{dtext{ columns}}
        end{array}
        right) \
        &inwidetilde{KK}^{tup{d+e} timesleft(
        d+eright) }.
        end{align}

        Now, let us use this explicit form of $Syl_{d,e} tup{tilF, tilG}$
        to compute $detleft( Syl_{d,e}tup{tilF, tilG} right)$ using
        the Leibniz formula.
        The Leibniz formula yields
        begin{equation}
        detleft( Syl_{d,e} tup{ tilF, tilG } right)
        =sum_{sigmain S_{d+e}}a_{sigma},
        label{darij1.pf.t1.det=sum}
        tag{2}
        end{equation}

        where for each permutation $sigmain S_{d+e}$, the addend $a_{sigma}$ is a
        product of entries of $Syl_{d,e} tup{tilF, tilG}$, possibly with a minus sign. More
        precisely,
        begin{equation}
        a_{sigma}=tup{-1}^{sigma}prod_{i=1}^{d+e}left( text{the }
        left( i,sigmaleft( iright) right) text{-th entry of }
        Syl_{d,e}tup{tilF, tilG}
        right)
        end{equation}

        for each $sigmain S_{d+e}$ (where $tup{-1}^{sigma}$ denotes the
        sign of the permutation $sigma$).



        Now, eqref{darij1.pf.t1.P=det} becomes
        begin{equation}
        P=detleft( Syl_{d,e}tup{tilF, tilG} right)
        =sum_{sigmain S_{d+e}}a_{sigma}
        label{darij1.pf.t1.P=sum}
        tag{3}
        end{equation}

        (by eqref{darij1.pf.t1.det=sum}).



        All entries of the matrix $Syl_{d,e}tup{tilF, tilG}$ are polynomials in the two
        indeterminates $X$ and $Y$; but only $d+e$ of these entries are non-constant
        polynomials (since all of $f_0 ,f_1 ,ldots,f_d ,g_0 ,g_1 ,ldots,g_e $
        belong to $KK$). More precisely, only $e$ entries of
        $Syl_{d,e}tup{tilF, tilG}$
        have non-zero degree with respect to the variable $X$ (namely, the first $e$
        entries of the diagonal of $Syl_{d,e}tup{tilF, tilG}$), and these $e$ entries have degree $1$
        with respect to this variable. Thus, for each $sigmain S_{d+e}$, the product
        $a_{sigma}$ contains at most $e$ many factors that have degree $1$ with
        respect to the variable $X$, while all its remaining factors have degree $0$
        with respect to this variable. Therefore, for each $sigmain S_{d+e}$, the
        product $a_{sigma}$ has degree $leq ecdot1=e$ with respect to the variable
        $X$. Hence, the sum $sum_{sigmain S_{d+e}}a_{sigma}$ of all these products
        $a_{sigma}$ also has degree $leq e$ with respect to the variable $X$. In
        other words, $deg_X left( sum_{sigmain S_{d+e}}a_{sigma}right) leq
        e$
        . In view of eqref{darij1.pf.t1.P=sum}, this rewrites as $deg_X Pleq e$.
        Similarly, $deg_Y Pleq d$ (since only $d$ entries of the matrix
        $Syl_{d,e}tup{tilF, tilG}
        $
        have non-zero degree with respect to the variable $Y$, and these $d$ entries
        have degree $1$ with respect to this variable).



        Next, we shall show that the polynomial $P$ is nonzero. Indeed, let us
        consider all elements of $widetilde{KK}$ as polynomials in the
        variable $X$ over the ring $KK ive{Y} $. For each
        permutation $sigmain S_{d+e}$, the product $a_{sigma}$ (thus considered)
        has degree $leq e$ (as we have previously shown). Let us now compute the
        coefficient of $X^e $ in this product $a_{sigma}$. There are three possible cases:




        • Case 1: The permutation $sigmain S_{d+e}$ does not satisfy $left(
          sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
          right)$
          . Thus, the product $a_{sigma}$ has strictly fewer than $e$
          factors that have degree $1$ with respect to the variable $X$, while all its
          remaining factors have degree $0$ with respect to this variable. Thus, the
          whole product $a_{sigma}$ has degree $<e$ with respect to the variable $X$.
          Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


        • Case 2: The permutation $sigmain S_{d+e}$ satisfies $left(
          sigmaleft( iright) =itext{ for each }iinleft{ 1,2,ldots,eright}
          right)$
          , but is not the identity map $idin S_{d+e}$. Thus,
          there must exist at least one $iinleft{ 1,2,ldots,d+eright} $ such
          that $sigmaleft( iright) <i$. Consider such an $i$, and notice that it
          must satisfy $i>e$ and $sigmaleft( iright) >e$; hence, the $left(
          i,sigmaleft( iright) right)$
          -th entry of
          $Syl_{d,e}tup{tilF, tilG}$ is $0$. Thus, the
          whole product $a_{sigma}$ is $0$ (since the latter entry is a factor in this
          product). Thus, the coefficient of $X^e $ in this product $a_{sigma}$ is $0$.


        • Case 3: The permutation $sigmain S_{d+e}$ is the identity map
          $idin S_{d+e}$. Thus, the product $a_{sigma}$ is $left(
          f_0 -Xright) ^e g_e^d $
          (since $tup{-1}^{id
          }=1$
          ). Hence, the coefficient of $X^e $ in this product $a_{sigma}$ is
          $tup{-1}^e g_e^d $.



        Summarizing, we thus conclude that the coefficient of $X^e $ in the product
        $a_{sigma}$ is $0$ unless $sigma=id$, in which case it is
        $tup{-1}^e g_e^d $. Hence, the coefficient of $X^e $ in the
        sum $sum_{sigmain S_{d+e}}a_{sigma}$ is $tup{-1}^e g_e^d
        neq0$
        . Therefore, $sum_{sigmain S_{d+e}}a_{sigma}neq0$. In view of
        eqref{darij1.pf.t1.P=sum}, this rewrites as $Pneq0$. In other words, the
        polynomial $P$ is nonzero.



        Finally, it remains to prove that $Ptup{F, G} =0$. In order to do
        this, we let $LL$ be the polynomial ring $KK ive{U}
        $
        in a new indeterminate $U$. We let $varphi:KK ive{X, Y}
        rightarrowLL$
        be the unique $KK$-algebra homomorphism that
        sends $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$.
        (This is well-defined by the universal property of the polynomial ring
        $KK ive{X, Y}$.)
        Note that $varphi$ is a $KK$-algebra homomorphism from
        $widetilde{KK}$ to $LL$ (since $KK ive{X, Y}
        =widetilde{KK}$
        ). Thus, $LL$ becomes a $widetilde{KK
        }$
        -algebra via this homomorphism $varphi$.



        Now, recall that the polynomial $tilF inwidetilde{KK}ive{T}$
        was defined by $tilF =F-X$. Hence, $tilF left(
        Uright) =Ftup{U} -varphitup{X}$
        . (Indeed, when we
        regard $X$ as an element of $widetilde{KK}ive{T}$, the
        polynomial $X$ is simply a constant, and thus evaluating it at $U$ yields the
        canonical image of $X$ in $LL$, which is $varphitup{X}$.)
        But $varphitup{X} =Ftup{U}$ (by the definition of
        $varphi$).
        Hence, $tilF tup{U} =Ftup{U} -varphitup{X} =0$ (since $varphitup{X} =Ftup{U}$). Similarly, $tilG tup{U} =0$.



        Thus, the element $UinLL$ satisfies $tilF tup{U}
        =0$
        and $tilG tup{U} =0$. Hence, Theorem 2 (applied to
        $widetilde{KK}$, $tilF$, $tilG$ and $U$ instead of $KK$, $P$, $Q$ and $w$) yields that
        $Res_{d,e}tup{tilF, tilG} = 0$ in $LL$.
        In other words, $varphitup{ Res_{d,e}tup{tilF, tilG} } =0$. In view of
        $Res_{d,e}tup{tilF, tilG} =P$, this rewrites as $varphitup{P} =0$.



        But recall that $varphi$ is the $KK$-algebra homomorphism that sends
        $X$ to $Ftup{U}$ and sends $Y$ to $Gtup{U}$. Hence, it
        sends any polynomial $QinKK ive{X, Y}$ to $Q tup{ Ftup{U}, Gtup{U} }$. Applying this to $Q=P$, we
        conclude that it sends $P$ to $P tup{ Ftup{U}, Gtup{U} }$.
        In other words, $varphitup{P} =P tup{ Ftup{U}, Gtup{U} }$;
        hence, $P tup{ Ftup{U}, Gtup{U} } =varphitup{P} =0$.



        Now, $Ftup{U}$ and $Gtup{U}$ are polynomials in the
        indeterminate $U$ over $KK$. If we rename the indeterminate $U$ as
        $T$, then these polynomials $Ftup{U}$ and $Gtup{U}$
        become $Ftup{T}$ and $Gtup{T}$, and therefore the
        polynomial $Pleft( Ftup{U} ,Gtup{U} right)$ becomes
        $Ptup{ Ftup{T}, Gtup{T} }$.
        Hence, $Ptup{ Ftup{T}, Gtup{T} } =0$ (since $Ptup{ Ftup{U}, Gtup{U} } =0$).
        In other words, $P tup{F, G} =0$ (since $Ftup{T} =F$ and $Gtup{T} =G$).
        This completes the proof of Lemma 3. $blacksquare$
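        A tiny worked instance of Lemma 3 (my own example): take $F = T^2$ (so $d = 2$) and $G = T$ (so $e = 1$) over $mathbb{Z}$. Hand-expanding $detleft( Syl_{2,1}tup{F - X, G - Y} right)$ gives $Ptup{X, Y} = Y^2 - X$ (up to the sign conventions above), which satisfies $deg_X P = 1 leq e$ and $deg_Y P = 2 leq d$; and $Ptup{F, G} = G^2 - F$ is indeed the zero polynomial, as the following pointwise check illustrates:

```python
# Worked instance of Lemma 3: F = T^2 (d = 2), G = T (e = 1).
# Hand-expanding det Syl_{2,1}(F - X, G - Y) yields P(X, Y) = Y^2 - X.
def F(t): return t * t
def G(t): return t
def P(x, y): return y * y - x

# P(F, G) = G^2 - F is the zero polynomial, so it vanishes everywhere:
print(all(P(F(t), G(t)) == 0 for t in range(-10, 11)))   # True
```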




        Lemma 4. (a) Theorem 1 holds when $d = 0$.



        (b) Theorem 1 holds when $e = 0$.




        Proof of Lemma 4. (a) Assume that $d = 0$.
        Thus, $d + e > 0$ rewrites as $e > 0$. Hence, $e geq 1$.
        But the polynomial $F$ is constant (since $deg F leq d = 0$).
        In other words, $F = f$ for some $f in KK$. Consider this $f$.
        Now, let $Q$ be the polynomial $X - f in KKive{X, Y}$.
        Then, $Q$ is nonzero and satisfies
        $deg_X Q = 1 leq e$ (since $e geq 1$) and
        $deg_Y Q = 0 leq d$ and
        $Qleft(F, Gright) = F - f = 0$ (since $F = f$).
        Hence, there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq e$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$
        (namely, $P = Q$). In other words, Theorem 1 holds (under
        our assumption that $d = 0$). This proves Lemma 4 (a).



        (b) The proof of Lemma 4 (b) is analogous to
        our above proof of Lemma 4 (a). $blacksquare$
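        A quick illustration of Lemma 4 (a) (my own toy example): with $d = 0$, the polynomial $F$ is a constant $f$; taking $f = 5$ and any $G$, the polynomial $P = X - f$ is nonzero and satisfies $Ptup{F, G} = 0$:

```python
# Lemma 4 (a) in miniature: d = 0, so F is the constant f = 5.
f = 5
def F(t): return f            # the constant polynomial F = f
def G(t): return 3 * t + 1    # any G with e >= 1
def P(x, y): return x - f     # nonzero, deg_X P = 1 <= e, deg_Y P = 0 = d

# P(F, G) = F - f is the zero polynomial:
print(all(P(F(t), G(t)) == 0 for t in range(-5, 6)))   # True
```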



        Now, we can prove Theorem 1 at last:



        Proof of Theorem 1. We shall prove Theorem 1 by induction on $e$.



        The induction base is the case when $e = 0$; this case follows
        from Lemma 4 (b).



        For the induction step, we fix a positive integer $eps$.
        Assume (as the induction hypothesis) that Theorem 1 holds for $e = eps - 1$.
        We must now prove that Theorem 1 holds for $e = eps$.



        Let $KK$ be a nontrivial commutative ring. Let $F$ and $G$ be two
        polynomials in the polynomial ring $KK ive{T}$. Let
        $d$ be a nonnegative integer such that $d+eps > 0$ and $deg F leq d$ and $deg G leq eps$.
        Our goal is now to prove that the claim of Theorem 1 holds for $e = eps$.
        In other words, our goal is to prove that there exists a nonzero polynomial $PinKK ive{X, Y}$
        in two indeterminates $X$ and $Y$ such that $deg_X Pleq eps$
        and $deg_Y Pleq d$ and $Ptup{F, G} =0$.



        Write the polynomial $G$ in the form $G=g_0 +g_1 T+g_2 T^2 +cdots
        +g_{eps}T^{eps}$
        , where $g_0 ,g_1 ,ldots,g_{eps}inKK$.
        (This can be done, since $deg G leq eps$.)
        If $g_{eps}^d neq 0$, then our goal follows immediately
        by applying Lemma 3 to $e = eps$.
        Thus, for the rest of this induction step, we WLOG assume that $g_{eps}^d = 0$.
        Hence, there exists a positive integer $m$ such that $g_{eps}^m = 0$ (namely, $m = d$; note that the case $d = 0$ cannot occur here, since it would yield $1 = g_{eps}^d = 0$ in the nontrivial ring $KK$).
        Thus, there exists a smallest such $m$.
        Consider this smallest $m$.
        Then, $g_{eps}^m = 0$, but
        begin{align}
        text{every positive integer $ell < m$ satisfies $g_{eps}^{ell} neq 0$.}
        label{darij1.pf.t1.epsilon-ell}
        tag{4}
        end{align}



        We claim that $g_{eps}^{m-1} neq 0$. Indeed, if $m-1$ is a
        positive integer, then this follows from eqref{darij1.pf.t1.epsilon-ell} (applied to $ell = m-1$);
        otherwise, it follows from the fact that $g_{eps}^0 = 1 neq 0$
        (since the ring $KK$ is nontrivial).
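        Before continuing, here is a toy example (my own, for orientation) of the situation just described: over the nontrivial ring $mathbb{Z}/4mathbb{Z}$, take $F = T^2$ (so $d = 2$) and $G = 2T$ (so $eps = 1$ and $g_1 = 2$). Then $g_1^d = 4 = 0$, the smallest $m$ with $g_1^m = 0$ is $m = 2$, and $g_1^{m-1} = 2 neq 0$. The construction carried out in the rest of this induction step produces $P = 2Y$ here, and indeed $Ptup{F, G} = 2G = 4T = 0$ in $left( mathbb{Z}/4mathbb{Z} right) ive{T}$:

```python
# Toy run of the induction step over Z/4Z: F = T^2, G = 2T, so g_1 = 2
# is nilpotent (g_1^2 = 0), with m = 2 and g_1^{m-1} = 2 != 0.
MOD = 4
g = [0, 2]                                  # coefficients of G = 2T
# The proof produces P = g_1^{m-1} * Q with Q = Y in this example,
# i.e. P = 2Y, so P(F, G) = 2 * G; its coefficients vanish mod 4:
coeffs = [(2 * c) % MOD for c in g]
print(coeffs)   # [0, 0] -- P(F, G) = 0, although P = 2Y is nonzero
```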



        Now recall again that our goal is to prove that the claim of Theorem 1 holds for $e = eps$.
        If $d = 0$, then this goal follows from Lemma 4 (a).
        Hence, for the rest of this induction step, we WLOG assume that $d neq 0$.
        Hence, $d > 0$ (since $d$ is a nonnegative integer).



        We have $eps geq 1$ (since $eps$ is a positive integer), thus
        $eps - 1 geq 0$. Hence, $d + left( eps - 1 right) geq d > 0$.



        Let $I$ be the subset $left{x in KK mid g_{eps}^{m-1} x = 0 right}$ of $KK$.
        Then, $I$ is an ideal of $KK$ (namely, it is the
        annihilator of the subset
        $left{g_{eps}^{m-1}right}$ of $KK$);
        thus, $KK / I$ is a commutative $KK$-algebra.
        Denote this commutative $KK$-algebra $KK / I$ by $LL$.
        Let $pi$ be the canonical projection $KK to LL$.
        Of course, $pi$ is a surjective $KK$-algebra homomorphism.



        For any $a in KK$, we will denote the image of $a$ under $pi$ by $overline{a}$.



        The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{T} to LLive{T}$ (sending $T$ to $T$).
        For any $a in KKive{T}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



        The $KK$-algebra homomorphism $pi : KK to LL$ induces a canonical $KK$-algebra homomorphism $KKive{X, Y} to LLive{X, Y}$ (sending $X$ and $Y$ to $X$ and $Y$).
        For any $a in KKive{X, Y}$, we will denote the image of $a$ under the latter homomorphism by $overline{a}$.



        We have $g_{eps}^{m-1} g_{eps} = g_{eps}^m = 0$, so that $g_{eps} in I$ (by the definition of $I$);
        hence, the residue class $overline{g_{eps}}$ of $g_{eps}$ modulo the ideal $I$ is $0$.



        We have $g_{eps}^{m-1} cdot 1 = g_{eps}^{m-1} neq 0$ in $KK$,
        and thus $1 notin I$ (by the definition of $I$).
        Hence, the ideal $I$ is not the whole ring $KK$.
        Thus, the quotient ring $KK / I = LL$ is nontrivial.



        But $G=g_0 +g_1 T+g_2 T^2 +cdots +g_{eps}T^{eps}$
        and thus
        begin{align}
        overline{G}
        &= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps}} T^{eps} \
        &= left( overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} right)
        + underbrace{overline{g_{eps}}}_{= 0} T^{eps} \
        &= overline{g_0} + overline{g_1} T + overline{g_2} T^2 + cdots + overline{g_{eps-1}} T^{eps-1} ,
        end{align}

        so that $deg overline{G} leq eps - 1$.
        Also, $deg overline{F} leq deg F leq d$.
        But the induction hypothesis tells us that Theorem 1 holds for $e = eps - 1$.
        Hence, we can apply Theorem 1 to $LL$, $overline{F}$, $overline{G}$ and $eps - 1$
        instead of $KK$, $F$, $G$ and $e$.
        We thus conclude that there exists a nonzero polynomial $Pin LLive{X, Y}$ in two indeterminates $X$ and $Y$ such that $deg_X P leq eps - 1$ and $deg_Y P leq d$ and $Pleft( overline{F}, overline{G} right) =0$.
        Consider this polynomial $P$, and denote it by $R$.
        Thus, $R in LL ive{X, Y}$ is a nonzero polynomial in two indeterminates $X$ and $Y$ and satisfies $deg_X R leq eps - 1$ and $deg_Y R leq d$ and $R left( overline{F}, overline{G} right) =0$.



        Clearly, there exists a polynomial $Q in KKive{X, Y}$ in two
        indeterminates $X$ and $Y$ that satisfies $deg_X Q = deg_X R$ and
        $deg_Y Q = deg_Y R$ and $overline{Q} = R$.
        (Indeed, we can construct such a $Q$ as follows: Write
        $R$ in the form
        $R = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} r_{i, j} X^i Y^j$
        for some coefficients $r_{i, j} in LL$.
        For each pair $left(i, jright)$, pick some
        $p_{i, j} in KK$ such that $overline{p_{i, j}} = r_{i, j}$
        (this can be done, since the homomorphism $pi : KK to LL$ is surjective).
        Then, set $Q = sumlimits_{i = 0}^{deg_X R} sumlimits_{j = 0}^{deg_Y R} p_{i, j} X^i Y^j$.
        It is clear that this polynomial $Q$ satisfies $deg_X Q = deg_X R$ and
        $deg_Y Q = deg_Y R$ and $overline{Q} = R$.)



        We have $overline{Q left(F, Gright)}
        = underbrace{overline{Q}}_{=R} left( overline{F}, overline{G} right)
        = R left( overline{F}, overline{G} right) = 0$
        .
        In other words, the polynomial $Q left(F, Gright) in KK[T]$
        lies in the kernel of the canonical
        $KK$-algebra homomorphism $KK[T] to LL[T]$.
        This means that each coefficient of this
        polynomial $Q left(F, Gright) in KK[T]$
        lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
        In other words, each coefficient of this
        polynomial $Q left(F, Gright) in KK[T]$ lies in $I$
        (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
        is $I$).
        Hence, each coefficient $c$ of this
        polynomial $Q left(F, Gright) in KK[T]$
        satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
        Therefore, $g_{eps}^{m-1} Q left(F, Gright) = 0$.



        On the other hand, $overline{Q} = R$ is nonzero.
        In other words, the polynomial $Q in KK[X, Y]$ does not lie
        in the kernel of the canonical
        $KK$-algebra homomorphism $KK[X, Y] to LL[X, Y]$.
        This means that not every coefficient of this
        polynomial $Q in KK[X, Y]$
        lies in the kernel of the $KK$-algebra homomorphism $pi : KK to LL$.
        In other words, not every coefficient of this
        polynomial $Q in KK[X, Y]$ lies in $I$
        (since the kernel of the $KK$-algebra homomorphism $pi : KK to LL$
        is $I$).
        Hence, not every coefficient $c$ of this
        polynomial $Q in KK[X, Y]$
        satisfies $g_{eps}^{m-1} c = 0$ (by the definition of $I$).
        Therefore, $g_{eps}^{m-1} Q neq 0$.
        So $g_{eps}^{m-1} Q in KK[X, Y]$ is a nonzero polynomial
        in two indeterminates $X$ and $Y$ and satisfies
        $deg_X left( g_{eps}^{m-1} Q right) leq deg_X Q = deg_X R leq eps - 1 leq eps$
        and
        $deg_Y left( g_{eps}^{m-1} Q right) leq deg_Y Q = deg_Y R leq d$
        and $left(g_{eps}^{m-1} Q right) left(F, Gright) = g_{eps}^{m-1} Q left(F, Gright) = 0$.
        Hence, there exists a nonzero polynomial $P in KK[X, Y]$
        in two indeterminates $X$ and $Y$ such that $deg_X P leq eps$
        and $deg_Y P leq d$ and $P left(F, Gright) = 0$
        (namely, $P = g_{eps}^{m-1} Q$).
        We have thus reached our goal.



        So we have proven that Theorem 1 holds for $e = eps$.
        This completes the induction step. Thus, Theorem 1 is proven by induction. $blacksquare$
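        For a concrete sanity check over $mathbb{C}$ (matching the resultant hint from the comments above), one can compute $P$ directly as the resultant of $F(T) - X$ and $G(T) - Y$ in $T$. Below is a minimal sketch using sympy; the polynomials $F = T^2 + 1$ and $G = T^3 + T$ are illustrative choices, not taken from the proof:

```python
import sympy as sp

# P(X, Y) = Res_T(F(T) - X, G(T) - Y) is a nonzero polynomial with
# deg_X P <= deg G and deg_Y P <= deg F, and it satisfies P(F, G) = 0,
# since after substituting X = F(T), Y = G(T) the two arguments of the
# resultant acquire the common root T.
T, X, Y = sp.symbols('T X Y')

F = T**2 + 1        # deg F = D = 2
G = T**3 + T        # deg G = E = 3

P = sp.resultant(F - X, G - Y, T)

# P is nonzero and within the degree bounds of the theorem ...
assert P != 0
assert sp.degree(P, X) <= 3 and sp.degree(P, Y) <= 2
# ... and substituting X = F(T), Y = G(T) yields the zero polynomial.
assert sp.expand(P.subs({X: F, Y: G})) == 0
```

        (For this particular pair one finds, up to sign, $P = Y^2 - X^2(X - 1)$, reflecting the relation $G^2 = F^2(F - 1)$.)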







        edited Nov 19 at 19:30

























        answered Oct 10 at 2:26









        darij grinberg
