Is there a list of all typos in Hoffman and Kunze, Linear Algebra?
Where can I find a list of typos for Linear Algebra, 2nd Edition, by Hoffman and Kunze? I searched on Google, but to no avail.
linear-algebra reference-request
One thing you can try is to look for a well-used university library copy and flip through the pages to see what corrections might be penciled in.
– Dave L. Renfro
Mar 27 '15 at 16:27
6 Answers
This list does not repeat the typos mentioned in the other answers.
Chapter 1
- Page 6, last paragraph.
An elementary row operation is thus a special type of function (rule) $e$ which associated with each $m \times n$ matrix . . .
It should be "associates".
- Page 10, proof of Theorem 4, second paragraph.
say it occurs in column $k_r \neq k$.
It should be $k' \neq k$.
- Page 18, last paragraph.
If $B$ is an $n \times p$ matrix, the columns of $B$ are the $1 \times n$ matrices . . .
It should be $n \times 1$.
- Page 24, statement of second corollary.
Let $\text{A} = \text{A}_1 \text{A}_2 \cdots A_k$, where $\text{A}_1 \dots,A_k$ are . . .
The formatting of $A_k$ is incorrect in both instances. Also, there should be a comma after $\text{A}_1$. So, it should be "Let $\text{A} = \text{A}_1 \text{A}_2 \cdots \text{A}_\text{k}$, where $\text{A}_1, \dots, \text{A}_\text{k}$ are . . .".
- Page 26–27, Exercise 4.
For which $X$ does there exist a scalar $c$ such that $AX=cX$?
It would make more sense if it asked: “For which $X \neq 0$ does there exist . . .”.
Chapter 2
- Page 52, below equation (2–16).
Thus from (2–16) and Theorem 7 of Chapter 1 . . .
It should be Theorem 13.
- Page 57, second last displayed equation.
$$ \beta = (0,\dots,0,\ b_{k_s},\dots,b_n), \quad b_{k_s} \neq 0$$
The formatting on the right-hand side is not correct. There is too much space before $b_{k_s}$. It should be $$\beta = (0,\dots,0,b_{k_s},\dots,b_n), \quad b_{k_s} \neq 0$$ instead.
- Page 57, last displayed equation.
$$ \beta = (0,\dots,0,\ b_t,\dots,b_n), \quad b_t \neq 0.$$
The formatting on the right-hand side is not correct. There is too much space before $b_t$. It should instead be $$\beta = (0,\dots,0,b_t,\dots,b_n), \quad b_t \neq 0.$$
- Page 62, second last paragraph.
So $\beta = (b_1,b_2,b_3,b_4)$ is in $W$ if and only if $b_3 - 2b_1$. . . .
It should be $b_3 = 2b_1$.
Chapter 3
- Page 76, first paragraph.
let $A_{ij},\dots,A_{mj}$ be the coordinates of the vector . . .
It should be $A_{1j},\dots,A_{mj}$.
- Page 80, Example 11.
For example, if $U$ is the operation 'remove the constant term and divide by $x$': $$ U(c_0 + c_1 x + \dots + c_n x^n) = c_1 + c_2 x + \dots + c_n x^{n-1}$$ then . . .
There is a subtlety in the phrase within apostrophes: what if $x = 0$? Rather than having to specify this case separately, the sentence can be worded more simply as, "For example, if $U$ is the operator defined by $$U(c_0 + c_1 x + \dots + c_n x^n) = c_1 + c_2 x + \dots + c_n x^{n-1}$$ then . . .".
- Page 81, last line.
(iv) If $\{ \alpha_1,\dots,\alpha_{\text{n}}\}$ is basis for $\text{V}$, then $\{\text{T}\alpha_1,\dots,\text{T}\alpha_{\text{n}}\}$ is a basis for $\text{W}$.
It should read "(iv) If $\{ \alpha_1,\dots,\alpha_{\text{n}}\}$ is a basis for $\text{V}$, then . . .".
- Page 90, second last paragraph.
We should also point out that we proved a special case of Theorem 13 in Example 12.
It should be "in Example 10."
- Page 91, first paragraph.
For, the identity operator $I$ is represented by the identity matrix in any order basis, and thus . . .
It should be "ordered".
- Page 92, statement of Theorem 14.
Let $\text{V}$ be a finite-dimensional vector space over the field $\text{F}$ and let
$$\mathscr{B} = \{ \alpha_1,\dots,\alpha\,\text{i} \} \quad \textit{and} \quad \mathscr{B}'=\{ \alpha'_1,\dots,\alpha'_\text{n}\}$$ be ordered bases . . .
It should be $\mathscr{B} = \{ \alpha_1,\dots,\alpha_\text{n}\}$.
- Page 100, first paragraph.
If $f$ is in $V^*$, and we let $f(\alpha_i) = \alpha_i$, then when . . .
It should be $f(\alpha_i) = a_i$.
- Page 101, paragraph following the definition.
If $S = V$, then $S^0$ is the zero subspace of $V^*$. (This is easy to see when $V$ is finite dimensional.)
It is equally easy to see this when $V$ is infinite-dimensional, so the statement in the brackets is redundant. Perhaps the authors meant to say that $\{ v \in V : f(v) = 0 \ \forall f \in V^* \}$ is the zero subspace of $V$. This question asks for details on this point.
- Page 102, proof of the second corollary.
By the previous corollaries (or the proof of Theorem 16) there is a linear functional $f$ such that $f(\beta) = 0$ for all $\beta$ in $W$, but $f(\alpha) \neq 0$. . . .
It should be "corollary", since there is only one previous corollary. Also, $W$ should be replaced by $W_1$.
- Page 112, statement of Theorem 22.
(i) rank $(T^t) = $ rank $(T)$
There should be a semi-colon at the end of the line.
Chapter 4
- Page 118, last displayed equation, third line.
$$=\sum_{i=0}^n \sum_{j=0}^i f_i g_{i-j} h_{n-i} $$
It should be $f_j$. It is also not immediately clear how to go from this line to the next line.
- Page 126, proof of Theorem 3.
By definition, the mapping is onto, and if $f$, $g$ belong to $F[x]$ it is evident that $$(cf+dg)^\sim = df^\sim + dg^\sim$$ for all scalars $c$ and $d$. . . .
It should be $(cf+dg)^\sim = cf^\sim + dg^\sim$.
- Page 126, proof of Theorem 3.
Suppose then that $f$ is a polynomial of degree $n$ such that $f' = 0$. . . .
It should be $f^\sim = 0$.
- Page 128, statement of Theorem 4.
(i) $f = dq + r$.
The full stop should be a semi-colon.
- Page 129, paragraph before statement of Theorem 5. The notation $D^0$ needs to be introduced, so the sentence, "We also use the notation $D^0 f = f$" can be added at the end of the paragraph.
- Page 131, first displayed equation, second line.
$$ = \sum_{m = 0}^{n-r} \frac{(D^m g)}{m!}(x-c)^{r+m} $$
There should be a full stop at the end of the line.
- Page 135, proof of Theorem 8.
Since $(f,p) = 1$, there are polynomials . . .
It should be $\text{g.c.d.}(f,p) = 1$.
- Page 137, first paragraph.
This decomposition is also clearly unique, and is called the primary decomposition of $f$. . . .
For the sake of clarity, the following sentence can be added after the quoted line: "Henceforth, whenever we refer to the prime factorization of a non-scalar monic polynomial we mean the primary decomposition of the polynomial."
- Page 137, proof of Theorem 11. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof.
- Page 139, Exercise 7.
Use Exercise 7 to prove the following. . . .
It should be "Use Exercise 6 to prove the following. . . ."
Chapter 5
- Page 142, second last displayed equation.
$$\begin{align} D(c\alpha_i + \alpha'_{iz}) &= [cA(i,k_i) + A'(i,k_i)]b \\ &= cD(\alpha_i) + D(\alpha'_i) \end{align}$$
The left-hand side should be $D(c\alpha_i + \alpha'_i)$.
- Page 166, first displayed equation.
$$\begin{align*}L(\alpha_1,\dots,c \alpha_i + \beta_i,\dots,\alpha_r) &= cL(\alpha_1,\dots,\alpha_i,\dots,\alpha_r {}+{} \\ &\qquad\qquad\qquad\qquad L(\alpha_1,\dots,\beta_i,\dots,\alpha_r)\end{align*}$$
The first term on the right has a missing closing bracket, so it should be $cL(\alpha_1,\dots,\alpha_i,\dots,\alpha_r)$.
- Page 167, second displayed equation, third line.
$${}={} \sum_{j=1}^n A_{1j} L\left( \epsilon_j, \sum_{j=1}^n A_{2k} \epsilon_k, \dots, \alpha_r \right) $$
The second summation should run over the index $k$ instead of $j$.
- Page 170, proof of the lemma. To show that $\pi_r L \in \Lambda^r(V)$, the authors show that $(\pi_r L)_\tau = (\operatorname{sgn}{\tau})(\pi_r L)$ for every permutation $\tau$ of $\{1,\dots,r\}$. This implies that $\pi_r L$ is alternating only when $K$ is a ring such that $1 + 1 \neq 0$. A proof over arbitrary commutative rings with identity is still needed.
- Page 170, first paragraph after proof of the lemma.
In (5–33) we showed that the determinant . . .
It should be (5–34).
- Page 171, equation (5–39).
$$\begin{align} D_J &= \sum_\sigma (\operatorname{sgn} \sigma)\, f_{j_{\sigma 1}} \otimes \dots \otimes f_{j_{\sigma r}} \tag{5–39}\\ &= \pi_r (f_{j_1} \otimes \dots \otimes f_{j_r}) \end{align}$$
The equation tag should be centered instead of being aligned at the first line.
- Page 174, below the second displayed equation.
The proof of the lemma following equation (5–36) shows that for any $r$-linear form $L$ and any permutation $\sigma$ of $\{1,\dots,r\}$
$$\pi_r(L_\sigma) = \operatorname{sgn} \sigma\, \pi_r(L)$$
The proof of the lemma actually shows $(\pi_r L)_\sigma = \operatorname{sgn} \sigma\, \pi_r(L)$. This fact still needs proof. Also, there should be a full stop at the end of the displayed equation.
- Page 174, below the third displayed equation.
Hence, $D_{ij} \cdot f_k = 2\pi_3(f_i \otimes f_j \otimes f_k)$.
This is not immediate from just the preceding equations. The authors implicitly assume the identity $(f_{j_1} \otimes \dots \otimes f_{j_r})_\sigma = f_{j_{\sigma^{-1} 1}}\! \otimes \dots \otimes f_{j_{\sigma^{-1} r}}$. This identity needs proof.
- Page 174, sixth displayed equation.
$$(D_{ij} \cdot f_k) \cdot f_l = 6 \pi_4(f_i \otimes f_j \otimes f_k \otimes f_l)$$
The factor $6$ should be replaced by $12$.
- Page 174, last displayed equation.
$$ (L \otimes M)_{(\sigma,\tau)} = L_\sigma \otimes L_\tau$$
The right-hand side should be $L_\sigma \otimes M_\tau$.
- Page 177, below the third displayed equation.
Therefore, since $(N\sigma)\tau = N\tau \sigma$ for any $(r+s)$-linear form . . .
It should be $(N_\sigma)_\tau = N_{\tau \sigma}$.
- Page 179, last displayed equation.
$$ (L \wedge M)(\alpha_1,\dots,\alpha_n) = \sum (\operatorname{sgn} \sigma)\, L(\alpha \sigma_1,\dots,\alpha_{\sigma r})\, M(\alpha_{\sigma(r+1)},\dots,\alpha_{\sigma_n}) $$
The right-hand side should have $L(\alpha_{\sigma 1},\dots,\alpha_{\sigma r})$ and $M(\alpha_{\sigma (r+1)},\dots,\alpha_{\sigma n})$.
Chapter 6
- Page 183, first paragraph.
If the underlying space $V$ is finite-dimensional, $(T-cI)$ fails to be $1 : 1$ precisely when its determinant is different from $0$.
It should instead be "precisely when its determinant is $0$."
- Page 186, proof of second lemma.
one expects that $\dim W < \dim W_1 + \dots \dim W_k$ because of linear relations . . .
It should be $\dim W \leq \dim W_1 + \dots + \dim W_k$.
- Page 194, statement of Theorem 4 (Cayley-Hamilton).
Let $\text{T}$ be a linear operator on a finite dimensional vector space $\text{V}$. . . .
It should be "finite-dimensional".
- Page 195, first displayed equation.
$$T\alpha_i = \sum_{j=1}^n A_{ji} \alpha_j,\quad 1 \leq j \leq n.$$
It should be $1 \leq i \leq n$.
- Page 195, above the last paragraph.
since $f$ is the determinant of the matrix $xI - A$ whose entries are the polynomials $$(xI - A)_{ij} = \delta_{ij} x - A_{ji}.$$
Here $xI-A$ should be replaced by $(xI-A)^t$ in both places, and it could read "since $f$ is also the determinant of" for more clarity.
- Page 203, proof of Theorem 5, last paragraph.
The diagonal entries $a_{11},\dots,a_{1n}$ are the characteristic values, . . .
It should be $a_{11},\dots,a_{nn}$.
- Page 207, proof of Theorem 7.
this theorem has the same proof as does Theorem 5, if one replaces $T$ by $\mathscr{F}$.
It would make more sense if it read "replaces $T$ by $T \in \mathscr{F}$."
- Page 207–208, proof of Theorem 8.
We could prove this theorem by adapting the lemma before Theorem 7 to the diagonalizable case, just as we adapted the lemma before Theorem 5 to the diagonalizable case in order to prove Theorem 6.
The adaptation of the lemma before Theorem 5 is not explicitly done. It is hidden in the proof of Theorem 6.
- Page 212, statement of Theorem 9.
and if we let $\text{W}_\text{i}$ be the range of $\text{E}_\text{i}$, then $\text{V} = \text{W}_\text{i} \oplus \dots \oplus \text{W}_\text{k}$.
It should be $\text{V} = \text{W}_1 \oplus \dots \oplus \text{W}_\text{k}$.
- Page 216, last paragraph.
One part of Theorem 9 says that for a diagonalizable operator . . .
It should be Theorem 11.
- Page 220, statement of Theorem 12.
Let $\text{p}$ be the minimal polynomial for $\text{T}$, $$\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_k^{r_k}$$ where the $\text{p}_\text{i}$ are distinct irreducible monic polynomials over $\text{F}$ and the $\text{r}_\text{i}$ are positive integers. Let $\text{W}_\text{i}$ be the null space of $\text{p}_\text{i}(\text{T})^{\text{r}_j}$, $\text{i} = 1,\dots,\text{k}$.
The displayed equation is improperly formatted. It should read $\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_\text{k}^{\text{r}_\text{k}}$. Also, in the second sentence it should be $\text{p}_\text{i}(\text{T})^{\text{r}_\text{i}}$.
- Page 221, below the last displayed equation.
because $p^{r_i} f_i g_i$ is divisible by the minimal polynomial $p$.
It should be $p_i^{r_i} f_i g_i$.
Chapter 7
- Page 233, proof of Theorem 3, last displayed equation in statement of Step 1. The formatting of "$\alpha$ in $V$" underneath the "$\max$" operator on the right-hand side is incorrect. It should be "$\alpha$ in $\text{V}$".
- Page 233, proof of Theorem 3, displayed equation in statement of Step 2. The formatting of "$1 \leq i < k$" underneath the $\sum$ operator on the right-hand side is incorrect. It should be "$1 \leq \text{i} < \text{k}$".
- Page 238, paragraph following corollary.
If we have the operator $T$ and the direct-sum decomposition of Theorem 3, let $\mathscr{B}_i$ be the ‘cyclic ordered basis’ . . .
It should be “of Theorem 3 with $W_0 = \{ 0 \}$, . . .”.
- Page 239, Example 2.
If $T = cI$, then for any two linear independent vectors $\alpha_1$ and $\alpha_2$ in $V$ we have . . .
It should be "linearly".
- Page 240, second last displayed equation.
$$f = (x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$$
It should just be $(x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$ because later (on page 241) the letter $f$ is again used, this time to denote an arbitrary polynomial.
- Page 244, last paragraph.
where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$. Since $N\alpha = 0$, for each $i$ we have . . .
It should be “where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$ whenever $f_i \neq 0$. Since $N\alpha = 0$, for each $i$ such that $f_i \neq 0$ we have . . .”.
- Page 245, first paragraph.
Thus $xf_i$ is divisible by $x^{k_i}$, and since $\deg (f_i) > k_i$ this means that $$f_i = c_i x^{k_i - 1}$$ where $c_i$ is some scalar.
It should be $\deg (f_i) < k_i$. Also, the following sentence should be added at the end: "If $f_j = 0$, then we can take $c_j = 0$ so that $f_j = c_j x^{k_j - 1}$ in this case as well."
- Page 245, last paragraph.
Furthermore, the sizes of these matrices will decrease as one reads from left to right.
It should be “Furthermore, the sizes of these matrices will not increase as one reads from left to right.”
- Page 246, first paragraph.
Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ decrease as $j$ increases.
It should be “Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ do not increase as $j$ increases.”
- Page 246, third paragraph.
The uniqueness we see as follows.
This part is not clearly written. What the authors want to show is the following. Suppose that the linear operator $T$ is represented in some other ordered basis by the matrix $B$ in Jordan form, where $B$ is the direct sum of the matrices $B_1,\dots,B_s$. Suppose each $B_i$ is an $e_i \times e_i$ matrix that is a direct sum of elementary Jordan matrices with characteristic value $\lambda_i$. Suppose the matrix $B$ induces the invariant direct-sum decomposition $V = U_1 \oplus \dots \oplus U_s$. Then $s = k$, and there is a permutation $\sigma$ of $\{ 1,\dots,k\}$ such that $\lambda_i = c_{\sigma i}$, $e_i = d_{\sigma i}$, $U_i = W_{\sigma i}$, and $B_i = A_{\sigma i}$ for each $1 \leq i \leq k$.
- Page 246, third paragraph.
The fact that $A$ is the direct sum of the matrices $\text{A}_i$ gives us a direct sum decomposition . . .
The formatting of $\text{A}_i$ is incorrect. It should be $A_i$.
- Page 246, third paragraph.
then the matrix $A_i$ is uniquely determined as the rational form for $(T_i - c_i I)$.
It should be "is uniquely determined by the rational form . . .".
- Page 248, Example 7.
Since $A$ is the direct sum of two $2 \times 2$ matrices, it is clear that the minimal polynomial for $A$ is $(x-2)^2$.
It should read "Since $A$ is the direct sum of two $2 \times 2$ matrices when $a \neq 0$, and of one $2 \times 2$ matrix and two $1 \times 1$ matrices when $a = 0$, it is clear that the minimal polynomial for $A$ is $(x-2)^2$ in either case."
- Page 249, first paragraph.
Then as we noted in Example 15, Chapter 6 the primary decomposition theorem tells us that . . .
It should be Example 14.
- Page 249, last displayed equation.
$$\begin{align} Ng &= (r-1)x^{r-2}h \\ \vdots &\qquad \vdots \\ N^{r-1}g &= (r-1)!\, h \end{align}$$
There should be a full stop at the end.
- Page 257, definition.
(b) on the main diagonal of $\text{N}$ there appear (in order) polynomials $\text{f}_1,\dots,\text{f}_l$ such that $\text{f}_\text{k}$ divides $\text{f}_{\text{k}+1}$, $1 \leq \text{k} \leq l - 1$.
The formatting of $l$ is incorrect in both instances. So, it should be $\text{f}_1,\dots,\text{f}_\text{l}$ and $1 \leq \text{k} \leq \text{l} - 1$.
- Page 259, paragraph following the proof of Theorem 9.
Two things we have seen provide clues as to how the polynomials $f_1,\dots,f_{\text{l}}$ in Theorem 9 are uniquely determined by $M$.
The formatting of $l$ is incorrect. It should be $f_1,\dots,f_l$.
- Page 260, third paragraph.
For the case of a type (c) operation, notice that . . .
It should be (b).
- Page 260, statement of Corollary.
The polynomials $\text{f}_1,\dots,\text{f}_l$ which occur on the main diagonal of $N$ are . . .
The formatting of $l$ is incorrect. It should be $\text{f}_1,\dots,\text{f}_\text{l}$.
- Page 265, first displayed equation, third line.
$$ = (W \cap W_1) + \dots + (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$
It should be $$ = (W \cap W_1) \oplus \dots \oplus (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$
- Page 266, proof of second lemma. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof.
Chapter 8
- Page 274, last displayed equation, first line.
$$ (\alpha \,|\, \beta) = \left( \sum_k x_n \alpha_k \,\bigg|\, \beta \right) $$
It should be $x_k$.
- Page 278, first line.
Now using (c) we find that . . .
It should be (iii).
- Page 282, second displayed equation, second last line.
$$ = (2,9,11) - 2(0,3,4) - -4,0,3) $$
The right-hand side should be $(2,9,11) - 2(0,3,4) - (-4,0,3)$.
- Page 284, first displayed equation.
$$ \alpha = \sum_k \frac{(\beta \,|\, \alpha_k)}{\|\alpha_k\|^2}\, \alpha_k $$
This equation should be labelled (8–11).
- Page 285, paragraph following the first definition.
For $S$ is non-empty, since it contains $0$; . . .
It should be $S^\perp$.
- Page 289, Exercise 7, displayed equation.
$$\| (x_1,x_2 \|^2 = (x_1 - x_2)^2 + 3x_2^2. $$
The left-hand side should be $\| (x_1,x_2) \|^2$.
- Page 316, first line.
matrix $\text{A}$ of $\text{T}$ in the basis $\mathscr{B}$ is upper triangular. . . .
It should be "upper-triangular".
- Page 316, statement of Theorem 21.
Then there is an orthonormal basis for $\text{V}$ in which the matrix of $\text{T}$ is upper triangular.
It should be "upper-triangular".
Chapter 9
- Page 344, statement of Corollary.
Under the assumptions of the theorem, let $\text{P}_\text{j}$ be the orthogonal projection of $\text{V}$ on $\text{V}(\text{r}_\text{j})$, $(1 \leq \text{j} \leq \text{k})$. . . .
The parentheses around $1 \leq \text{j} \leq \text{k}$ should be removed.
And no one can find a list of errata?
– Michael McGovern
Jan 7 '18 at 2:19
@MichaelMcGovern what do you mean? I don’t quite understand.
– Brahadeesh
Jan 7 '18 at 3:43
He means your answer is a list of errata ;)
– Math_QED
Jan 7 '18 at 18:47
I was surprised that, given how many errors people were able to find just by looking through the book, the publisher hadn't already provided a list of errata.
– Michael McGovern
Jan 7 '18 at 20:25
Chapter 8, page 282: The vector $\alpha_3=(0,9,0)$, and it is suggested that $\|\alpha_3\|^2$ is $9$. But $\|\alpha_3\|^2$ should be $81$. There are also errors stemming from this one.
– Al Jebr
Mar 31 '18 at 19:09
I'm using the second edition. I think that the definition before Theorem $9$ (Chapter $1$) should be
Definition. An $m\times m$ matrix is said to be an elementary matrix if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.
instead of
Definition. An $\color{red}{m\times n}$ matrix is said to be an elementary matrix if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation.
Check out this question for details.
It can be argued that any $m\times n$ matrix "obtained from the $m\times m$ identity matrix" by an elementary row operation will of necessity have $m=n$. So the "correction" you want to make here does not actually change the meaning of the definition.
– hardmath
Sep 28 '16 at 11:16
@hardmath Yes, of course, but I think it can help reduce confusion in the minds of people who are new to the topic.
– Aritra Das
Sep 28 '16 at 12:30
(Image: red highlight of the typo in Linear Algebra by Hoffman and Kunze, page 23.)
It should be $$A=E_1^{-1}E_2^{-1}\cdots E_k^{-1}$$
Let $A=\left[\begin{matrix}2&3\\4&5\end{matrix}\right]$.
Elementary row operations:
$R_2\leftrightarrow R_2-2R_1$, $R_1\leftrightarrow R_1+3R_2$, $R_1 \leftrightarrow R_1/2$, $R_2 \leftrightarrow R_2\cdot(-1)$ transform $A$ into $I$.
These four row operations on $I$ give
$E_1=\left[\begin{matrix}1&0\\-2&1\end{matrix}\right]$,
$E_2=\left[\begin{matrix}1&3\\0&1\end{matrix}\right]$,
$E_3=\left[\begin{matrix}1/2&0\\0&1\end{matrix}\right]$,
$E_4=\left[\begin{matrix}1&0\\0&-1\end{matrix}\right]$
$E_1^{-1}=\left[\begin{matrix}1&0\\2&1\end{matrix}\right]$,
$E_2^{-1}=\left[\begin{matrix}1&-3\\0&1\end{matrix}\right]$,
$E_3^{-1}=\left[\begin{matrix}2&0\\0&1\end{matrix}\right]$,
$E_4^{-1}=\left[\begin{matrix}1&0\\0&-1\end{matrix}\right]$
Now,
$E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1}=\left[\begin{matrix}2&3\\4&5\end{matrix}\right]$,
but $E_4^{-1}E_3^{-1}E_2^{-1}E_1^{-1}=\left[\begin{matrix}-10&-6\\-2&-1\end{matrix}\right]$.
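For anyone who wants to double-check the arithmetic, here is a small numpy sketch (not part of the original answer; the matrices are the $E_i$ defined above) confirming that the inverses multiplied in the order $E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1}$ recover $A$, while the reversed order does not:

```python
import numpy as np

A = np.array([[2, 3], [4, 5]])

# Elementary matrices obtained by applying the four row operations to I
E1 = np.array([[1, 0], [-2, 1]])    # R2 <- R2 - 2*R1
E2 = np.array([[1, 3], [0, 1]])     # R1 <- R1 + 3*R2
E3 = np.array([[0.5, 0], [0, 1]])   # R1 <- R1 / 2
E4 = np.array([[1, 0], [0, -1]])    # R2 <- -R2

inv = np.linalg.inv
print(inv(E1) @ inv(E2) @ inv(E3) @ inv(E4))  # [[ 2.  3.] [ 4.  5.]]  = A
print(inv(E4) @ inv(E3) @ inv(E2) @ inv(E1))  # [[-10. -6.] [-2. -1.]] != A
```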
On page 18, in the line "If $B$ is an $n \times p$ matrix, the columns of $B$ are the $1 \times n$ matrices $B_1, \dots, B_p$ defined by . . .", it should be rows instead of columns.
– Vikram
Jun 16 '17 at 8:19
Rather, in this line it should read "...the columns of $B$ are the $n \times 1$ matrices...".
– Brahadeesh
Sep 16 '17 at 12:08
I wanted to add two more observations which I believe are typos.
- Chapter 2, Example 16, Pg. 43
Example 16. We shall now give an example of an infinite basis. Let $F$ be a subfield of the complex numbers and let $V$ be the space of polynomial functions over $F$. ($\dots\dots$)
Let $\color{red}{f_k(x)=x_k}$, $k=0,1,2,\dots$. The infinite set $\{f_0,f_1,f_2,\dots\}$ is a basis for $V$.
It should have been $\color{red}{f_k(x)=x^k}$.
- Chapter 1, Theorem 8
$$[A(BC)_{ij}]=\sum_r A_{ir}(BC)_{rj}=\color{red}{\sum_r A_{ir}\sum_s B_{rj}C_{rj}}$$
It should have been
$$[A(BC)_{ij}]=\sum_r A_{ir}(BC)_{rj}=\color{red}{\sum_r A_{ir}\sum_s B_{rs}C_{sj}}$$
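As a quick sanity check of the corrected index formula, one can compare it numerically against matrix multiplication; this is just an illustrative sketch with random matrices, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.integers(-5, 5, size=(3, 4, 4))  # three random 4x4 integer matrices

# Corrected formula: [A(BC)]_{ij} = sum_r A_{ir} sum_s B_{rs} C_{sj}
lhs = A @ (B @ C)
rhs = np.einsum('ir,rs,sj->ij', A, B, C)
assert np.array_equal(lhs, rhs)  # the two agree, as Theorem 8 asserts
```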
The second point is in fact correctly stated in my copy of the book (I am using the second edition).
– Brahadeesh
Dec 31 '17 at 13:08
More than a typo, there's a stated Corollary on page 356 in Section 9.6 that appears to be false. The details are here:
https://mathoverflow.net/questions/306759/error-in-hoffman-kunze-normal-operators-on-finite-dimensional-inner-product-spa
Chapter 1
Page 3, definition of characteristic.
. . . least n . . .
It should be least positive n.
Chapter 2
Page 39, Exercise 3.
. . . R$^5$ . . .
It should be R$^4$.
Chapter 3
Page 96, Exercise 9.
. . . and show that S = UTU$^{-1}$.
It should be S = U$^{-1}$TU.
Chapter 4
Page 129, Theorem 5, second sentence.
If f is a polynomial over f . . .
It should be If f is a polynomial over F . . .
Page 137, Example 11.
. . . is the g.c.d. of the polynomials.
Delete the period after polynomials. I include this seemingly trivial typo—that sentence-ending period causes the polynomials to refer to the preceding polynomials x – a, x – b, x – c, making the sentence obviously false—because it took me a non-trivial amount of time to figure out that the sentence does not actually end until four lines further down.
Chapter 6
Page 191, second full paragraph.
According to Theorem 5 of Chapter 4, . . .
It should be Theorem 7.
Page 198, Exercise 11.
. . . Section 6.1, . . .
It should be Section 6.2.
Page 219, Exercise 4(b).
. . . for f is the product of the characteristic polynomials for f$_1$, . . ., f$_k.$
Replace the three occurrences of f with T. Note also that the hint applies to all three parts of the exercise, not just part (c) as suggested by the formatting.
Page 219, Exercise 6.
. . . Example 6 . . .
It should be Example 5.
Page 219, Exercise 7.
. . . is spanned . . .
It should be is not spanned.
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
This list does not repeat the typos mentioned in the other answers.
Chapter 1
- Page 6, last paragraph.
An elementary row operation is thus a special type of function (rule) $e$ which associated with each $m times n$ matrix . . .
It should be "associates".
- Page 10, proof of Theorem 4, second paragraph.
say it occurs in column $k_r neq k$.
It should be $k' neq k$.
- Page 18, last paragraph.
If $B$ is an $n times p$ matrix, the columns of $B$ are the $1 times n$ matrices . . .
It should be $n times 1$.
- Page 24, statement of second corollary.
Let $text{A} = text{A}_1 text{A}_2 cdots A_k$, where $text{A}_1 dots,A_k$ are . . .
The formatting of $A_k$ is incorrect in both instances. Also, there should be a comma after $text{A}_1$. So, it should be "Let $text{A} = text{A}_1 text{A}_2 cdots text{A}_text{k}$, where $text{A}_1, dots,text{A}_text{k}$ are . . .".
- Page 26–27, Exercise 4.
For which $X$ does there exist a scalar $c$ such that $AX=cX$?
It would make more sense if it asked: “For which $X neq 0$ does there exist . . .”.
Chapter 2
- Page 52, below equation (2–16).
Thus from (2–16) and Theorem 7 of Chapter 1 . . .
It should be Theorem 13.
- Page 57, second last displayed equation.
$$ beta = (0,dots,0, b_{k_s},dots,b_n), quad b_{k_s} neq 0$$
The formatting on the right-hand side is not correct. There is too much space before $b_{k_s}$. It should be $$beta = (0,dots,0,b_{k_s},dots,b_n), quad b_{k_s} neq 0$$
instead.
- Page 57, last displayed equation.
$$ beta = (0,dots,0, b_t,dots,b_n), quad b_t neq 0.$$
The formatting on the right-hand side is not correct. There is too much space before $b_t$. It should instead be $$beta = (0,dots,0,b_t,dots,b_n), quad b_t neq 0.$$
- Page 62, second last paragraph.
So $beta = (b_1,b_2,b_3,b_4)$ is in $W$ if and only if $b_3 - 2b_1$. . . .
It should be $b_3 = 2b_1$.
Chapter 3
- Page 76, first paragraph.
let $A_{ij},dots,A_{mj}$ be the coordinates of the vector . . .
It should be $A_{1j},dots,A_{mj}$.
- Page 80, Example 11.
For example, if $U$ is the operation 'remove the constant term and divide by $x$': $$ U(c_0 + c_1 x + dots + c_n x^n) = c_1 + c_2 x + dots + c_n x^{n-1}$$ then . . .
There is a subtlety in the phrase within apostrophes: what if $x = 0$? Rather than having to specify for this case separately, the sentence can be worded more simply as, "For example, if $U$ is the operator defined by $$U(c_0 + c_1 x + dots + c_n x^n) = c_1 + c_2 x + dots + c_n x^{n-1}$$ then . . .".
- Page 81, last line.
(iv) If ${ alpha_1,dots,alpha_{text{n}}}$ is basis for $text{V}$, then ${text{T}alpha_1,dots,text{T}alpha_{text{n}}}$ is a basis for $text{W}$.
It should read "(iv) If ${ alpha_1,dots,alpha_{text{n}}}$ is a basis for $text{V}$, then . . .".
- Page 90, second last paragraph.
We should also point out that we proved a special case of Theorem 13 in Example 12.
It should be "in Example 10."
- Page 91, first paragraph.
For, the identity operator $I$ is represented by the identity matrix in any order basis, and thus . . .
It should be "ordered".
- Page 92, statement of Theorem 14.
Let $text{V}$ be a finite-dimensional vector space over the field $text{F}$ and let
$$mathscr{B} = { alpha_1,dots,alpha text{i} } quad textit{and} quad mathscr{B}'={ alpha'_1,dots,alpha'_text{n}}$$ be ordered bases . . .
It should be $mathscr{B} = { alpha_1,dots,alpha_text{n}}$.
- Page 100, first paragraph.
If $f$ is in $V^*$, and we let $f(alpha_i) = alpha_i$, then when . . .
It should be $f(alpha_i) = a_i$.
- Page 101, paragraph following the definition.
If $S = V$, then $S^0$ is the zero subspace of $V^*$. (This is easy to see when $V$ is finite dimensional.)
It is equally easy to see this when $V$ is infinite-dimensional, so the statement in the brackets is redundant. Perhaps the authors meant to say that ${ v in V : f(v) = 0 forall f in V^* }$ is the zero subspace of $V$. This question asks for details on this point.
- Page 102, proof of the second corollary.
By the previous corollaries (or the proof of Theorem 16) there is a linear functional $f$ such that $f(beta) = 0$ for all $beta$ in $W$, but $f(alpha) neq 0$. . . .
It should be "corollary", since there is only one previous corollary. Also, $W$ should be replaced by $W_1$.
- Page 112, statement of Theorem 22.
(i) rank $(T^t) = $ rank $(T)$
There should be a semi-colon at the end of the line.
Chapter 4
- Page 118, last displayed equation, third line.
$$=sum_{i=0}^n sum_{j=0}^i f_i g_{i-j} h_{n-i} $$
It should be $f_j$. It is also not immediately clear how to go from this line to the next line.
- Page 126, proof of Theorem 3.
By definition, the mapping is onto, and if $f$, $g$ belong to $F[x]$ it is evident that $$(cf+dg)^sim = df^sim + dg^sim$$ for all scalars $c$ and $d$. . . .
It should be $(cf+dg)^sim = cf^sim + dg^sim$.
- Page 126, proof of Theorem 3.
Suppose then that $f$ is a polynomial of degree $n$ such that $f' = 0$. . . .
It should be $f^sim = 0$.
- Page 128, statement of Theorem 4.
(i) $f = dq + r$.
The full stop should be a semi-colon.
Page 129, paragraph before statement of Theorem 5. The notation $D^0$ needs to be introduced, so the sentence, "We also use the notation $D^0 f = f$" can be added at the end of the paragraph.
Page 131, first displayed equation, second line.
$$ = sum_{m = 0}^{n-r} frac{(D^m g)}{m!}(x-c)^{r+m} $$
There should be a full stop at the end of the line.
- Page 135, proof of Theorem 8.
Since $(f,p) = 1$, there are polynomials . . .
It should be $text{g.c.d.}{(f,p)} = 1$.
- Page 137, first paragraph.
This decomposition is also clearly unique, and is called the primary decomposition of $f$. . . .
For the sake of clarity, the following sentence can be added after the quoted line: "Henceforth, whenever we refer to the prime factorization of a non-scalar monic polynomial we mean the primary decomposition of the polynomial."
Page 137, proof of Theorem 11. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof.
Page 139, Exercise 7.
Use Exercise 7 to prove the following. . . .
It should be "Use Exercise 6 to prove the following. . . ."
Chapter 5
- Page 142, second last displayed equation.
$$begin{align} D(calpha_i + alpha'_{iz}) &= [cA(i,k_i) + A'(i,k_i)]b \ &= cD(alpha_i) + D(alpha'_i) end{align}$$
The left-hand side should be $D(calpha_i + alpha'_i)$.
- Page 166, first displayed equation.
$$begin{align*}L(alpha_1,dots,c alpha_i + beta_i,dots,alpha_r) &= cL(alpha_1,dots,alpha_i,dots,alpha_r {}+{} \ &qquad qquad qquad qquad L(alpha_1,dots,beta_i,dots,alpha_r)end{align*}$$
The first term on the right has a missing closing bracket, so it should be $cL(alpha_1,dots,alpha_i,dots,alpha_r)$.
- Page 167, second displayed equation, third line.
$${}={} sum_{j=1}^n A_{1j} Lleft( epsilon_j, sum_{j=1}^n A_{2k} epsilon_k, dots, alpha_r right) $$
The second summation should run over the index $k$ instead of $j$.
Page 170, proof of the lemma. To show that $pi_r L in Lambda^r(V)$, the authors show that $(pi_r L)_tau = (operatorname{sgn}{tau})(pi_rL)$ for every permutation $tau$ of ${1,dots,r}$. This implies that $pi_r L$ is alternating only when $K$ is a ring such that $1 + 1 neq 0$. A proof over arbitrary commutative rings with identity is still needed.
Page 170, first paragraph after proof of the lemma.
In (5–33) we showed that the determinant . . .
It should be (5–34).
- Page 171, equation (5–39).
$$begin{align} D_J &= sum_sigma (operatorname{sgn} sigma) f_{j_{sigma 1}} otimes dots otimes f_{j_{sigma r}} tag{5–39}\ &= pi_r (f_{j_1} otimes dots otimes f_{j_r}) end{align}$$
The equation tag should be centered instead of being aligned at the first line.
- Page 174, below the second displayed equation.
The proof of the lemma following equation (5–36) shows that for any $r$-linear form $L$ and any permutation $sigma$ of ${1,dots,r}$
$$
pi_r(L_sigma) = operatorname{sgn} sigma pi_r(L)
$$
The proof of the lemma actually shows $(pi_r L)_sigma = operatorname{sgn} sigma pi_r(L)$. This fact still needs proof. Also, there should be a full stop at the end of the displayed equation.
- Page 174, below the third displayed equation.
Hence, $D_{ij} cdot f_k = 2pi_3(f_i otimes f_j otimes f_k)$.
This is not immediate from just the preceding equations. The authors implicitly assume the identity $(f_{j_1} otimes dots otimes f_{j_r})_sigma = f_{j_{sigma^{-1} 1}}! otimes dots otimes f_{j_{sigma^{-1} r}}$. This identity needs proof.
- Page 174, sixth displayed equation.
$$(D_{ij} cdot f_k) cdot f_l = 6 pi_4(f_i otimes f_j otimes f_k otimes f_l)$$
The factor $6$ should be replaced by $12$.
- Page 174, last displayed equation.
$$ (L otimes M)_{(sigma,tau)} = L_sigma otimes L_tau$$
The right-hand side should be $L_sigma otimes M_tau$.
- Page 177, below the third displayed equation.
Therefore, since $(Nsigma)tau = Ntau sigma$ for any $(r+s)$-linear form . . .
It should be $(N_sigma)_tau = N_{tau sigma}$.
- Page 179, last displayed equation.
$$ (L wedge M)(alpha_1,dots,alpha_n) = sum (operatorname{sgn} sigma) L(alpha sigma_1,dots,alpha_{sigma r}) M(alpha_{sigma(r+1)},dots,alpha_{sigma_n}) $$
The right-hand side should have $L(alpha_{sigma 1},dots,alpha_{sigma r})$ and $M(alpha_{sigma (r+1)},dots,alpha_{sigma n})$.
Chapter 6
- Page 183, first paragraph.
If the underlying space $V$ is finite-dimensional, $(T-cI)$ fails to be $1 : 1$ precisely when its determinant is different from $0$.
It should instead be "precisely when its determinant is $0$."
- Page 186, proof of second lemma.
one expects that $dim W < dim W_1 + dots dim W_k$ because of linear relations . . .
It should be $dim W leq dim W_1 + dots + dim W_k$.
- Page 194, statement of Theorem 4 (Cayley-Hamilton).
Let $text{T}$ be a linear operator on a finite dimensional vector space $text{V}$. . . .
It should be "finite-dimensional".
- Page 195, first displayed equation.
$$Talpha_i = sum_{j=1}^n A_{ji} alpha_j,quad 1 leq j leq n.$$
It should be $1 leq i leq n$.
- Page 195, above the last paragraph.
since $f$ is the determinant of the matrix $xI - A$ whose entries are the polynomials $$(xI - A)_{ij} = delta_{ij} x - A_{ji}.$$
Here $xI-A$ should be replaced $(xI-A)^t$ in both places, and it could read "since $f$ is also the determinant of" for more clarity.
- Page 203, proof of Theorem 5, last paragraph.
The diagonal entries $a_{11},dots,a_{1n}$ are the characteristic values, . . .
It should be $a_{11},dots,a_{nn}$.
- Page 207, proof of Theorem 7.
this theorem has the same proof as does Theorem 5, if one replaces $T$ by $mathscr{F}$.
It would make more sense if it read "replaces $T$ by $T in mathscr{F}$."
- Page 207-208, proof of Theorem 8.
We could prove this theorem by adapting the lemma before Theorem 7 to the diagonalizable case, just as we adapted the lemma before Theorem 5 to the diagonalizable case in order to prove Theorem 6.
The adaptation of the lemma before Theorem 5 is not explicitly done. It is hidden in the proof of Theorem 6.
- Page 212, statement of Theorem 9.
and if we let $text{W}_text{i}$ be the range of $text{E}_text{i}$, then $text{V} = text{W}_text{i} oplus dots oplus text{W}_text{k}$.
It should be $text{V} = text{W}_1 oplus dots oplus text{W}_text{k}$.
- Page 216, last paragraph.
One part of Theorem 9 says that for a diagonalizable operator . . .
It should be Theorem 11.
- Page 220, statement of Theorem 12.
Let $text{p}$ be the minimal polynomial for $text{T}$, $$text{p} = text{p}_1^{text{r}_1} cdots text{p}_k^{r_k}$$ where the $text{p}_text{i}$ are distinct irreducible monic polynomials over $text{F}$ and the $text{r}_text{i}$ are positive integers. Let $text{W}_text{i}$ be the null space of $text{p}_text{i}(text{T})^{text{r}_j}$, $text{i} = 1,dots,text{k}$.
The displayed equation is improperly formatted. It should read $text{p} = text{p}_1^{text{r}_1} cdots text{p}_text{k}^{text{r}_text{k}}$. Also, in the second sentence it should be $text{p}_text{i}(text{T})^{text{r}_text{i}}$.
- Page 221, below the last displayed equation.
because $p^{r_i} f_i g_i$ is divisible by the minimal polynomial $p$.
It should be $p_i^{r_i} f_i g_i$.
Chapter 7
Page 233, proof of Theorem 3, last displayed equation in statement of Step 1. The formatting of "$alpha$ in $V$" underneath the "$max$" operator on the right-hand side is incorrect. It should be "$alpha$ in $text{V}$".
Page 233, proof of Theorem 3, displayed equation in statement of Step 2. The formatting of "$1 leq i < k$" underneath the $sum$ operator on the right-hand side is incorrect. It should be "$1 leq text{i} < text{k}$".
Page 238, paragraph following corollary.
If we have the operator $T$ and the direct-sum decomposition of Theorem 3, let $mathscr{B}_i$ be the ‘cyclic ordered basis’ . . .
It should be “of Theorem 3 with $W_0 = { 0 }$, . . .”.
- Page 239, Example 2.
If $T = cI$, then for any two linear independent vectors $alpha_1$ and $alpha_2$ in $V$ we have . . .
It should be "linearly".
- Page 240, second last displayed equation.
$$f = (x-c_1)^{d_1} cdots (x - c_k)^{d_k}$$
It should just be $(x-c_1)^{d_1} cdots (x - c_k)^{d_k}$ because later (on page 241) the letter $f$ is again used, this time to denote an arbitrary polynomial.
- Page 244, last paragraph.
where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$. Since $Nalpha = 0$, for each $i$ we have . . .
It should be “where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$ whenever $f_i neq 0$. Since $Nalpha = 0$, for each $i$ such that $f_i neq 0$ we have . . .”.
- Page 245, first paragraph.
Thus $xf_i$ is divisible by $x^{k_i}$, and since $deg (f_i) > k_i$ this means that $$f_i = c_i x^{k_i - 1}$$ where $c_i$ is some scalar.
It should be $deg (f_i) < k_i$. Also, the following sentence should be added at the end: "If $f_j = 0$, then we can take $c_j = 0$ so that $f_j = c_j x^{k_j - 1}$ in this case as well."
- Page 245, last paragraph.
Furthermore, the sizes of these matrices will decrease as one reads from left to right.
It should be “Furthermore, the sizes of these matrices will not increase as one reads from left to right.”
- Page 246, first paragraph.
Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ decrease as $j$ increases.
It should be “Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ do not increase as $j$ increases.”
- Page 246, third paragraph.
The uniqueness we see as follows.
This part is not clearly written. What the authors want to show is the following. Suppose that the linear operator $T$ is represented in some other ordered basis by the matrix $B$ in Jordan form, where $B$ is the direct sum of the matrices $B_1,dots,B_s$. Suppose each $B_i$ is an $e_i times e_i$ matrix that is a direct sum of elementary Jordan matrices with characteristic value $lambda_i$. Suppose the matrix $B$ induces the invariant direct-sum decomposition $V = U_1 oplus dots oplus U_s$. Then,
$s = k$, and there is a permutation $sigma$ of ${ 1,dots,k}$ such that $lambda_i = c_{sigma i}$, $e_i = d_{sigma i}$, $U_i = W_{sigma i}$, and $B_i = A_{sigma i}$ for each $1 leq i leq k$.
- Page 246, third paragraph.
The fact that $A$ is the direct sum of the matrices $text{A}_i$ gives us a direct sum decomposition . . .
The formatting of $text{A}_i$ is incorrect. It should be $A_i$.
- Page 246, third paragraph.
then the matrix $A_i$ is uniquely determined as the rational form for $(T_i - c_i I)$.
It should be "is uniquely determined by the rational form . . .".
- Page 248, Example 7.
Since $A$ is the direct sum of two $2 times 2$ matrices, it is clear that the minimal polynomial for $A$ is $(x-2)^2$.
It should read "Since $A$ is the direct sum of two $2 times 2$ matrices when $a neq 0$, and of one $2 times 2$ matrix and two $1 times 1$ matrices when $a = 0$, it is clear that the minimal polynomial for $A$ is $(x-2)^2$ in either case."
- Page 249, first paragraph.
Then as we noted in Example 15, Chapter 6 the primary decomposition theorem tells us that . . .
It should be Example 14.
- Page 249, last displayed equation
$$begin{align} Ng &= (r-1)x^{r-2}h \ vdots & qquad vdots \ N^{r-1}g &= (r-1)! h end{align}$$
There should be a full stop at the end.
- Page 257, definition.
(b) on the main diagonal of $text{N}$ there appear (in order) polynomials $text{f}_1,dots,text{f}_l$ such that $text{f}_text{k}$ divides $text{f}_{text{k}+1}$, $1 leq text{k} leq l - 1$.
The formatting of $l$ is incorrect in both instances. So, it should be $text{f}_1,dots,text{f}_text{l}$ and $1 leq text{k} leq text{l} - 1$.
- Page 259, paragraph following the proof of Theorem 9.
Two things we have seen provide clues as to how the polynomials $f_1,dots,f_{text{l}}$ in Theorem 9 are uniquely determined by $M$.
The formatting of $l$ is incorrect. It should be $f_1,dots,f_l$.
- Page 260, third paragraph.
For the case of a type (c) operation, notice that . . .
It should be (b).
- Page 260, statement of Corollary.
The polynomials $text{f}_1,dots,text{f}_l$ which occur on the main diagonal of $N$ are . . .
The formatting of $l$ is incorrect. It should be $text{f}_1,dots,text{f}_text{l}$.
- Page 265, first displayed equation, third line.
$$ = (W cap W_1) + dots + (W cap W_k) oplus V_1 oplus dots oplus V_k.$$
It should be $$ = (W cap W_1) oplus dots oplus (W cap W_k) oplus V_1 oplus dots oplus V_k.$$
- Page 266, proof of second lemma. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof.
Chapter 8
- Page 274, last displayed equation, first line.
$$ (alpha | beta) = left( sum_k x_n alpha_k bigg|, beta right) $$
It should be $x_k$.
- Page 278, first line.
Now using (c) we find that . . .
It should be (iii).
- Page 282, second displayed equation, second last line.
$$ = (2,9,11) - 2(0,3,4) - -4,0,3) $$
The right-hand side should be $(2,9,11) - 2(0,3,4) - (-4,0,3)$.
- Page 284, first displayed equation.
$$ alpha = sum_k frac{(beta | alpha_k)}{| alpha_k |^2} alpha_k $$
This equation should be labelled (8–11).
- Page 285, paragraph following the first definition.
For $S$ is non-empty, since it contains $0$; . . .
It should be $S^perp$.
- Page 289, Exercise 7, displayed equation.
$$| (x_1,x_2 |^2 = (x_1 - x_2)^2 + 3x_2^2. $$
The left-hand side should be $| (x_1,x_2) |^2$.
- Page 316, first line.
matrix $text{A}$ of $text{T}$ in the basis $mathscr{B}$ is upper triangular. . . .
It should be "upper-triangular".
- Page 316, statement of Theorem 21.
Then there is an orthonormal basis for $text{V}$ in which the matrix of $text{T}$ is upper triangular.
It should be "upper-triangular".
Chapter 9
- Page 344, statement of Corollary.
Under the assumptions of the theorem, let $text{P}_text{j}$ be the orthogonal projection of $text{V}$ on $text{V}(text{r}_text{j})$, $(1 leq text{j} leq text{k})$. . . .
The parentheses around $1 leq text{j} leq text{k}$ should be removed.
And no one can find a list of errata?
– Michael McGovern
Jan 7 '18 at 2:19
@MichaelMcGovern what do you mean? I don’t quite understand.
– Brahadeesh
Jan 7 '18 at 3:43
He means your answer is a list of errata ;)
– Math_QED
Jan 7 '18 at 18:47
1
I was surprised that, given how many errors people were able to find just by looking through the book, the publisher hadn't already provided a list of errata.
– Michael McGovern
Jan 7 '18 at 20:25
Chapter 8, page 282: The vector $alpha_3=(0,9,0)$, and it is suggested that $|alpha_3|^2$ is $9$. But $|alpha_3|^2$ should be $81$. There are also errors stemming from this one.
– Al Jebr
Mar 31 '18 at 19:09
|
show 8 more comments
This list does not repeat the typos mentioned in the other answers.
Chapter 1
- Page 6, last paragraph.
An elementary row operation is thus a special type of function (rule) $e$ which associated with each $m times n$ matrix . . .
It should be "associates".
- Page 10, proof of Theorem 4, second paragraph.
say it occurs in column $k_r neq k$.
It should be $k' neq k$.
- Page 18, last paragraph.
If $B$ is an $n times p$ matrix, the columns of $B$ are the $1 times n$ matrices . . .
It should be $n times 1$.
- Page 24, statement of second corollary.
Let $text{A} = text{A}_1 text{A}_2 cdots A_k$, where $text{A}_1 dots,A_k$ are . . .
The formatting of $A_k$ is incorrect in both instances. Also, there should be a comma after $text{A}_1$. So, it should be "Let $text{A} = text{A}_1 text{A}_2 cdots text{A}_text{k}$, where $text{A}_1, dots,text{A}_text{k}$ are . . .".
- Page 26–27, Exercise 4.
For which $X$ does there exist a scalar $c$ such that $AX=cX$?
It would make more sense if it asked: “For which $X neq 0$ does there exist . . .”.
Chapter 2
- Page 52, below equation (2–16).
Thus from (2–16) and Theorem 7 of Chapter 1 . . .
It should be Theorem 13.
- Page 57, second last displayed equation.
$$ beta = (0,dots,0, b_{k_s},dots,b_n), quad b_{k_s} neq 0$$
The formatting on the right-hand side is not correct. There is too much space before $b_{k_s}$. It should be $$beta = (0,dots,0,b_{k_s},dots,b_n), quad b_{k_s} neq 0$$
instead.
- Page 57, last displayed equation.
$$ beta = (0,dots,0, b_t,dots,b_n), quad b_t neq 0.$$
The formatting on the right-hand side is not correct. There is too much space before $b_t$. It should instead be $$beta = (0,dots,0,b_t,dots,b_n), quad b_t neq 0.$$
- Page 62, second last paragraph.
So $beta = (b_1,b_2,b_3,b_4)$ is in $W$ if and only if $b_3 - 2b_1$. . . .
It should be $b_3 = 2b_1$.
Chapter 3
- Page 76, first paragraph.
let $A_{ij},dots,A_{mj}$ be the coordinates of the vector . . .
It should be $A_{1j},dots,A_{mj}$.
- Page 80, Example 11.
For example, if $U$ is the operation 'remove the constant term and divide by $x$': $$ U(c_0 + c_1 x + dots + c_n x^n) = c_1 + c_2 x + dots + c_n x^{n-1}$$ then . . .
There is a subtlety in the phrase within apostrophes: what if $x = 0$? Rather than having to specify for this case separately, the sentence can be worded more simply as, "For example, if $U$ is the operator defined by $$U(c_0 + c_1 x + dots + c_n x^n) = c_1 + c_2 x + dots + c_n x^{n-1}$$ then . . .".
- Page 81, last line.
(iv) If ${ alpha_1,dots,alpha_{text{n}}}$ is basis for $text{V}$, then ${text{T}alpha_1,dots,text{T}alpha_{text{n}}}$ is a basis for $text{W}$.
It should read "(iv) If ${ alpha_1,dots,alpha_{text{n}}}$ is a basis for $text{V}$, then . . .".
- Page 90, second last paragraph.
We should also point out that we proved a special case of Theorem 13 in Example 12.
It should be "in Example 10."
- Page 91, first paragraph.
For, the identity operator $I$ is represented by the identity matrix in any order basis, and thus . . .
It should be "ordered".
- Page 92, statement of Theorem 14.
Let $text{V}$ be a finite-dimensional vector space over the field $text{F}$ and let
$$mathscr{B} = { alpha_1,dots,alpha text{i} } quad textit{and} quad mathscr{B}'={ alpha'_1,dots,alpha'_text{n}}$$ be ordered bases . . .
It should be $mathscr{B} = { alpha_1,dots,alpha_text{n}}$.
- Page 100, first paragraph.
If $f$ is in $V^*$, and we let $f(alpha_i) = alpha_i$, then when . . .
It should be $f(alpha_i) = a_i$.
- Page 101, paragraph following the definition.
If $S = V$, then $S^0$ is the zero subspace of $V^*$. (This is easy to see when $V$ is finite dimensional.)
It is equally easy to see this when $V$ is infinite-dimensional, so the statement in the brackets is redundant. Perhaps the authors meant to say that ${ v in V : f(v) = 0 forall f in V^* }$ is the zero subspace of $V$. This question asks for details on this point.
- Page 102, proof of the second corollary.
By the previous corollaries (or the proof of Theorem 16) there is a linear functional $f$ such that $f(beta) = 0$ for all $beta$ in $W$, but $f(alpha) neq 0$. . . .
It should be "corollary", since there is only one previous corollary. Also, $W$ should be replaced by $W_1$.
- Page 112, statement of Theorem 22.
(i) rank $(T^t) = $ rank $(T)$
There should be a semi-colon at the end of the line.
Chapter 4
- Page 118, last displayed equation, third line.
$$=sum_{i=0}^n sum_{j=0}^i f_i g_{i-j} h_{n-i} $$
It should be $f_j$. It is also not immediately clear how to go from this line to the next line.
- Page 126, proof of Theorem 3.
By definition, the mapping is onto, and if $f$, $g$ belong to $F[x]$ it is evident that $$(cf+dg)^sim = df^sim + dg^sim$$ for all scalars $c$ and $d$. . . .
It should be $(cf+dg)^sim = cf^sim + dg^sim$.
- Page 126, proof of Theorem 3.
Suppose then that $f$ is a polynomial of degree $n$ such that $f' = 0$. . . .
It should be $f^sim = 0$.
- Page 128, statement of Theorem 4.
(i) $f = dq + r$.
The full stop should be a semi-colon.
Page 129, paragraph before statement of Theorem 5. The notation $D^0$ needs to be introduced, so the sentence, "We also use the notation $D^0 f = f$" can be added at the end of the paragraph.
Page 131, first displayed equation, second line.
$$ = sum_{m = 0}^{n-r} frac{(D^m g)}{m!}(x-c)^{r+m} $$
There should be a full stop at the end of the line.
- Page 135, proof of Theorem 8.
Since $(f,p) = 1$, there are polynomials . . .
It should be $text{g.c.d.}{(f,p)} = 1$.
- Page 137, first paragraph.
This decomposition is also clearly unique, and is called the primary decomposition of $f$. . . .
For the sake of clarity, the following sentence can be added after the quoted line: "Henceforth, whenever we refer to the prime factorization of a non-scalar monic polynomial we mean the primary decomposition of the polynomial."
Page 137, proof of Theorem 11. The chain rule for the formal derivative of a product of polynomials is used, but this needs proof.
Page 139, Exercise 7.
Use Exercise 7 to prove the following. . . .
It should be "Use Exercise 6 to prove the following. . . ."
Chapter 5
- Page 142, second last displayed equation.
$$begin{align} D(calpha_i + alpha'_{iz}) &= [cA(i,k_i) + A'(i,k_i)]b \ &= cD(alpha_i) + D(alpha'_i) end{align}$$
The left-hand side should be $D(calpha_i + alpha'_i)$.
- Page 166, first displayed equation.
$$begin{align*}L(alpha_1,dots,c alpha_i + beta_i,dots,alpha_r) &= cL(alpha_1,dots,alpha_i,dots,alpha_r {}+{} \ &qquad qquad qquad qquad L(alpha_1,dots,beta_i,dots,alpha_r)end{align*}$$
The first term on the right has a missing closing bracket, so it should be $cL(alpha_1,dots,alpha_i,dots,alpha_r)$.
- Page 167, second displayed equation, third line.
$${}={} sum_{j=1}^n A_{1j} Lleft( epsilon_j, sum_{j=1}^n A_{2k} epsilon_k, dots, alpha_r right) $$
The second summation should run over the index $k$ instead of $j$.
Page 170, proof of the lemma. To show that $pi_r L in Lambda^r(V)$, the authors show that $(pi_r L)_tau = (operatorname{sgn}{tau})(pi_rL)$ for every permutation $tau$ of ${1,dots,r}$. This implies that $pi_r L$ is alternating only when $K$ is a ring such that $1 + 1 neq 0$. A proof over arbitrary commutative rings with identity is still needed.
Page 170, first paragraph after proof of the lemma.
In (5–33) we showed that the determinant . . .
It should be (5–34).
- Page 171, equation (5–39).
$$begin{align} D_J &= sum_sigma (operatorname{sgn} sigma) f_{j_{sigma 1}} otimes dots otimes f_{j_{sigma r}} tag{5–39}\ &= pi_r (f_{j_1} otimes dots otimes f_{j_r}) end{align}$$
The equation tag should be centered instead of being aligned at the first line.
- Page 174, below the second displayed equation.
The proof of the lemma following equation (5–36) shows that for any $r$-linear form $L$ and any permutation $sigma$ of ${1,dots,r}$
$$
pi_r(L_sigma) = operatorname{sgn} sigma pi_r(L)
$$
The proof of the lemma actually shows $(pi_r L)_sigma = operatorname{sgn} sigma pi_r(L)$. This fact still needs proof. Also, there should be a full stop at the end of the displayed equation.
- Page 174, below the third displayed equation.
Hence, $D_{ij} cdot f_k = 2pi_3(f_i otimes f_j otimes f_k)$.
This is not immediate from just the preceding equations. The authors implicitly assume the identity $(f_{j_1} otimes dots otimes f_{j_r})_sigma = f_{j_{sigma^{-1} 1}}! otimes dots otimes f_{j_{sigma^{-1} r}}$. This identity needs proof.
- Page 174, sixth displayed equation.
$$(D_{ij} cdot f_k) cdot f_l = 6 pi_4(f_i otimes f_j otimes f_k otimes f_l)$$
The factor $6$ should be replaced by $12$.
- Page 174, last displayed equation.
$$ (L otimes M)_{(sigma,tau)} = L_sigma otimes L_tau$$
The right-hand side should be $L_sigma otimes M_tau$.
- Page 177, below the third displayed equation.
Therefore, since $(Nsigma)tau = Ntau sigma$ for any $(r+s)$-linear form . . .
It should be $(N_sigma)_tau = N_{tau sigma}$.
- Page 179, last displayed equation.
$$ (L wedge M)(alpha_1,dots,alpha_n) = sum (operatorname{sgn} sigma) L(alpha sigma_1,dots,alpha_{sigma r}) M(alpha_{sigma(r+1)},dots,alpha_{sigma_n}) $$
The right-hand side should have $L(alpha_{sigma 1},dots,alpha_{sigma r})$ and $M(alpha_{sigma (r+1)},dots,alpha_{sigma n})$.
Chapter 6
- Page 183, first paragraph.
If the underlying space $V$ is finite-dimensional, $(T-cI)$ fails to be $1 : 1$ precisely when its determinant is different from $0$.
It should instead be "precisely when its determinant is $0$."
- Page 186, proof of second lemma.
one expects that $\dim W < \dim W_1 + \dots \dim W_k$ because of linear relations . . .
It should be $\dim W \leq \dim W_1 + \dots + \dim W_k$.
- Page 194, statement of Theorem 4 (Cayley-Hamilton).
Let $\text{T}$ be a linear operator on a finite dimensional vector space $\text{V}$. . . .
It should be "finite-dimensional".
- Page 195, first displayed equation.
$$T\alpha_i = \sum_{j=1}^n A_{ji} \alpha_j, \quad 1 \leq j \leq n.$$
It should be $1 \leq i \leq n$.
- Page 195, above the last paragraph.
since $f$ is the determinant of the matrix $xI - A$ whose entries are the polynomials $$(xI - A)_{ij} = \delta_{ij} x - A_{ji}.$$
Here $xI-A$ should be replaced by $(xI-A)^t$ in both places, and it could read "since $f$ is also the determinant of" for more clarity.
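The reason the transpose is the natural fix: the displayed entries are exactly those of $(xI-A)^t$, since
$$\big((xI - A)^t\big)_{ij} = (xI - A)_{ji} = \delta_{ji} x - A_{ji} = \delta_{ij} x - A_{ji},$$
and because $\det M = \det M^t$, the characteristic polynomial $f = \det(xI - A)$ is also $\det\big((xI - A)^t\big)$.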
- Page 203, proof of Theorem 5, last paragraph.
The diagonal entries $a_{11},\dots,a_{1n}$ are the characteristic values, . . .
It should be $a_{11},\dots,a_{nn}$.
- Page 207, proof of Theorem 7.
this theorem has the same proof as does Theorem 5, if one replaces $T$ by $\mathscr{F}$.
It would make more sense if it read "replaces $T$ by $T \in \mathscr{F}$."
- Page 207–208, proof of Theorem 8.
We could prove this theorem by adapting the lemma before Theorem 7 to the diagonalizable case, just as we adapted the lemma before Theorem 5 to the diagonalizable case in order to prove Theorem 6.
The adaptation of the lemma before Theorem 5 is not explicitly done. It is hidden in the proof of Theorem 6.
- Page 212, statement of Theorem 9.
and if we let $\text{W}_\text{i}$ be the range of $\text{E}_\text{i}$, then $\text{V} = \text{W}_\text{i} \oplus \dots \oplus \text{W}_\text{k}$.
It should be $\text{V} = \text{W}_1 \oplus \dots \oplus \text{W}_\text{k}$.
- Page 216, last paragraph.
One part of Theorem 9 says that for a diagonalizable operator . . .
It should be Theorem 11.
- Page 220, statement of Theorem 12.
Let $\text{p}$ be the minimal polynomial for $\text{T}$, $$\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_k^{r_k}$$ where the $\text{p}_\text{i}$ are distinct irreducible monic polynomials over $\text{F}$ and the $\text{r}_\text{i}$ are positive integers. Let $\text{W}_\text{i}$ be the null space of $\text{p}_\text{i}(\text{T})^{\text{r}_j}$, $\text{i} = 1,\dots,\text{k}$.
The displayed equation is improperly formatted. It should read $\text{p} = \text{p}_1^{\text{r}_1} \cdots \text{p}_\text{k}^{\text{r}_\text{k}}$. Also, in the second sentence it should be $\text{p}_\text{i}(\text{T})^{\text{r}_\text{i}}$.
- Page 221, below the last displayed equation.
because $p^{r_i} f_i g_i$ is divisible by the minimal polynomial $p$.
It should be $p_i^{r_i} f_i g_i$.
Chapter 7
- Page 233, proof of Theorem 3, last displayed equation in statement of Step 1. The formatting of "$\alpha$ in $V$" underneath the "$\max$" operator on the right-hand side is incorrect. It should be "$\alpha$ in $\text{V}$".
- Page 233, proof of Theorem 3, displayed equation in statement of Step 2. The formatting of "$1 \leq i < k$" underneath the $\sum$ operator on the right-hand side is incorrect. It should be "$1 \leq \text{i} < \text{k}$".
- Page 238, paragraph following corollary.
If we have the operator $T$ and the direct-sum decomposition of Theorem 3, let $\mathscr{B}_i$ be the ‘cyclic ordered basis’ . . .
It should be “of Theorem 3 with $W_0 = \{ 0 \}$, . . .”.
- Page 239, Example 2.
If $T = cI$, then for any two linear independent vectors $\alpha_1$ and $\alpha_2$ in $V$ we have . . .
It should be "linearly".
- Page 240, second last displayed equation.
$$f = (x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$$
It should just be $(x-c_1)^{d_1} \cdots (x - c_k)^{d_k}$ because later (on page 241) the letter $f$ is again used, this time to denote an arbitrary polynomial.
- Page 244, last paragraph.
where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$. Since $N\alpha = 0$, for each $i$ we have . . .
It should be “where $f_i$ is a polynomial, the degree of which we may assume is less than $k_i$ whenever $f_i \neq 0$. Since $N\alpha = 0$, for each $i$ such that $f_i \neq 0$ we have . . .”.
- Page 245, first paragraph.
Thus $xf_i$ is divisible by $x^{k_i}$, and since $\deg (f_i) > k_i$ this means that $$f_i = c_i x^{k_i - 1}$$ where $c_i$ is some scalar.
It should be $\deg (f_i) < k_i$. Also, the following sentence should be added at the end: "If $f_j = 0$, then we can take $c_j = 0$ so that $f_j = c_j x^{k_j - 1}$ in this case as well."
- Page 245, last paragraph.
Furthermore, the sizes of these matrices will decrease as one reads from left to right.
It should be “Furthermore, the sizes of these matrices will not increase as one reads from left to right.”
- Page 246, first paragraph.
Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ decrease as $j$ increases.
It should be “Also, within each $A_i$, the sizes of the matrices $J_j^{(i)}$ do not increase as $j$ increases.”
- Page 246, third paragraph.
The uniqueness we see as follows.
This part is not clearly written. What the authors want to show is the following. Suppose that the linear operator $T$ is represented in some other ordered basis by the matrix $B$ in Jordan form, where $B$ is the direct sum of the matrices $B_1,\dots,B_s$. Suppose each $B_i$ is an $e_i \times e_i$ matrix that is a direct sum of elementary Jordan matrices with characteristic value $\lambda_i$. Suppose the matrix $B$ induces the invariant direct-sum decomposition $V = U_1 \oplus \dots \oplus U_s$. Then $s = k$, and there is a permutation $\sigma$ of $\{1,\dots,k\}$ such that $\lambda_i = c_{\sigma i}$, $e_i = d_{\sigma i}$, $U_i = W_{\sigma i}$, and $B_i = A_{\sigma i}$ for each $1 \leq i \leq k$.
- Page 246, third paragraph.
The fact that $A$ is the direct sum of the matrices $\text{A}_i$ gives us a direct sum decomposition . . .
The formatting of $\text{A}_i$ is incorrect. It should be $A_i$.
- Page 246, third paragraph.
then the matrix $A_i$ is uniquely determined as the rational form for $(T_i - c_i I)$.
It should be "is uniquely determined by the rational form . . .".
- Page 248, Example 7.
Since $A$ is the direct sum of two $2 \times 2$ matrices, it is clear that the minimal polynomial for $A$ is $(x-2)^2$.
It should read "Since $A$ is the direct sum of two $2 \times 2$ matrices when $a \neq 0$, and of one $2 \times 2$ matrix and two $1 \times 1$ matrices when $a = 0$, it is clear that the minimal polynomial for $A$ is $(x-2)^2$ in either case."
- Page 249, first paragraph.
Then as we noted in Example 15, Chapter 6 the primary decomposition theorem tells us that . . .
It should be Example 14.
- Page 249, last displayed equation.
$$\begin{align} Ng &= (r-1)x^{r-2}h \\ \vdots &\qquad \vdots \\ N^{r-1}g &= (r-1)!\, h \end{align}$$
There should be a full stop at the end.
- Page 257, definition.
(b) on the main diagonal of $\text{N}$ there appear (in order) polynomials $\text{f}_1,\dots,\text{f}_l$ such that $\text{f}_\text{k}$ divides $\text{f}_{\text{k}+1}$, $1 \leq \text{k} \leq l - 1$.
The formatting of $l$ is incorrect in both instances. So, it should be $\text{f}_1,\dots,\text{f}_\text{l}$ and $1 \leq \text{k} \leq \text{l} - 1$.
- Page 259, paragraph following the proof of Theorem 9.
Two things we have seen provide clues as to how the polynomials $f_1,\dots,f_{\text{l}}$ in Theorem 9 are uniquely determined by $M$.
The formatting of $l$ is incorrect. It should be $f_1,\dots,f_l$.
- Page 260, third paragraph.
For the case of a type (c) operation, notice that . . .
It should be (b).
- Page 260, statement of Corollary.
The polynomials $\text{f}_1,\dots,\text{f}_l$ which occur on the main diagonal of $N$ are . . .
The formatting of $l$ is incorrect. It should be $\text{f}_1,\dots,\text{f}_\text{l}$.
- Page 265, first displayed equation, third line.
$$= (W \cap W_1) + \dots + (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$
It should be $$= (W \cap W_1) \oplus \dots \oplus (W \cap W_k) \oplus V_1 \oplus \dots \oplus V_k.$$
- Page 266, proof of second lemma. The product (Leibniz) rule for the formal derivative of a product of polynomials is again used without proof; a sketch is given under the Chapter 4 entry for page 137 above.
Chapter 8
- Page 274, last displayed equation, first line.
$$(\alpha | \beta) = \left( \sum_k x_n \alpha_k \,\bigg|\, \beta \right)$$
It should be $x_k$.
- Page 278, first line.
Now using (c) we find that . . .
It should be (iii).
- Page 282, second displayed equation, second last line.
$$ = (2,9,11) - 2(0,3,4) - -4,0,3) $$
The right-hand side should be $(2,9,11) - 2(0,3,4) - (-4,0,3)$.
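As a quick arithmetic check (taking the printed vectors at face value), the corrected right-hand side evaluates to
$$(2,9,11) - 2(0,3,4) - (-4,0,3) = (2+4,\ 9-6,\ 11-8-3) = (6,3,0).$$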
- Page 284, first displayed equation.
$$\alpha = \sum_k \frac{(\beta | \alpha_k)}{\| \alpha_k \|^2}\, \alpha_k$$
This equation should be labelled (8–11).
- Page 285, paragraph following the first definition.
For $S$ is non-empty, since it contains $0$; . . .
It should be $S^\perp$.
- Page 289, Exercise 7, displayed equation.
$$\| (x_1,x_2 \|^2 = (x_1 - x_2)^2 + 3x_2^2.$$
The left-hand side should be $\| (x_1,x_2) \|^2$.
- Page 316, first line.
matrix $\text{A}$ of $\text{T}$ in the basis $\mathscr{B}$ is upper triangular. . . .
It should be "upper-triangular".
- Page 316, statement of Theorem 21.
Then there is an orthonormal basis for $\text{V}$ in which the matrix of $\text{T}$ is upper triangular.
It should be "upper-triangular".
Chapter 9
- Page 344, statement of Corollary.
Under the assumptions of the theorem, let $\text{P}_\text{j}$ be the orthogonal projection of $\text{V}$ on $\text{V}(\text{r}_\text{j})$, $(1 \leq \text{j} \leq \text{k})$. . . .
The parentheses around $1 \leq \text{j} \leq \text{k}$ should be removed.
edited Nov 14 '18 at 8:51
community wiki
67 revs, 2 users 99%
Brahadeesh
And no one can find a list of errata?
– Michael McGovern
Jan 7 '18 at 2:19
@MichaelMcGovern what do you mean? I don’t quite understand.
– Brahadeesh
Jan 7 '18 at 3:43
He means your answer is a list of errata ;)
– Math_QED
Jan 7 '18 at 18:47
I was surprised that, given how many errors people were able to find just by looking through the book, the publisher hadn't already provided a list of errata.
– Michael McGovern
Jan 7 '18 at 20:25
Chapter 8, page 282: The vector $\alpha_3=(0,9,0)$, and it is suggested that $\|\alpha_3\|^2$ is $9$. But $\|\alpha_3\|^2$ should be $81$. There are also errors stemming from this one.
– Al Jebr
Mar 31 '18 at 19:09
I'm using the second edition. I think that the definition before Theorem $9$ (Chapter $1$) should be
Definition. An $m \times m$ matrix is said to be an elementary matrix if it can be obtained from the $m \times m$ identity matrix by means of a single elementary row operation.
instead of
Definition. An $\color{red}{m \times n}$ matrix is said to be an elementary matrix if it can be obtained from the $m \times m$ identity matrix by means of a single elementary row operation.
Check out this question for details.
edited Dec 31 '17 at 12:53
community wiki
3 revs, 2 users 90%
Aritra Das
It can be argued that any $m \times n$ matrix "obtained from the $m \times m$ identity matrix" by an elementary row operation will of necessity have $m=n$. So the "correction" you want to make here does not actually change the meaning of the definition.
– hardmath
Sep 28 '16 at 11:16
@hardmath Yes, of course, but I think it can help reduce confusion in the minds of people who are new to the topic.
– Aritra Das
Sep 28 '16 at 12:30
(Regarding a typo on page 23 of Linear Algebra by Hoffman and Kunze.)
It should be $$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$$
Let $A=\left[\begin{matrix}2&3\\4&5\end{matrix}\right]$.
Elementary row operations:
$R_2 \leftrightarrow R_2 - 2R_1$, $R_1 \leftrightarrow R_1 + 3R_2$, $R_1 \leftrightarrow R_1/2$, $R_2 \leftrightarrow R_2 \cdot (-1)$ transform $A$ into $I$.
These four row operations on $I$ give
$E_1=\left[\begin{matrix}1&0\\-2&1\end{matrix}\right]$,
$E_2=\left[\begin{matrix}1&3\\0&1\end{matrix}\right]$,
$E_3=\left[\begin{matrix}1/2&0\\0&1\end{matrix}\right]$,
$E_4=\left[\begin{matrix}1&0\\0&-1\end{matrix}\right]$
and
$E_1^{-1}=\left[\begin{matrix}1&0\\2&1\end{matrix}\right]$,
$E_2^{-1}=\left[\begin{matrix}1&-3\\0&1\end{matrix}\right]$,
$E_3^{-1}=\left[\begin{matrix}2&0\\0&1\end{matrix}\right]$,
$E_4^{-1}=\left[\begin{matrix}1&0\\0&-1\end{matrix}\right]$.
Now,
$E_1^{-1} E_2^{-1} E_3^{-1} E_4^{-1}=\left[\begin{matrix}2&3\\4&5\end{matrix}\right]$
but $E_4^{-1} E_3^{-1} E_2^{-1} E_1^{-1}=\left[\begin{matrix}-10&-6\\-2&-1\end{matrix}\right]$.
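A short justification of the corrected order (using the matrices above): the four row operations reduce $A$ to $I$, i.e. $E_4 E_3 E_2 E_1 A = I$, and hence
$$A = (E_4 E_3 E_2 E_1)^{-1} = E_1^{-1} E_2^{-1} E_3^{-1} E_4^{-1},$$
which matches the first product computed above and explains why the inverses must appear in the order $E_1^{-1} E_2^{-1} \cdots E_k^{-1}$.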
answered Jun 16 '17 at 6:36
community wiki
Vikram
On page 18, "If $B$ is an $n \times p$ matrix, the columns of $B$ are the $1 \times n$ matrices $B_1, \dots, B_p$ defined by ..." — in this line it should be "rows" instead of "columns".
– Vikram
Jun 16 '17 at 8:19
Rather, in this line it should read "...the columns of $B$ are the $n \times 1$ matrices...".
– Brahadeesh
Sep 16 '17 at 12:08
I wanted to add two more observations which I believe are typos.
- Chapter 2, Example 16, Pg. 43
Example 16. We shall now give an example of an infinite basis. Let $F$ be a subfield of the complex numbers and let $V$ be the space of polynomial functions over $F.$ ($\dots\dots$)
Let $\color{red}{f_k(x)=x_k}$, $k=0,1,2,\dots.$ The infinite set $\{f_0,f_1,f_2,\dots\}$ is a basis for $V.$
It should have been $\color{red}{f_k(x)=x^k}$.
- Chapter 1, Theorem 8
$$[A(BC)]_{ij}=\sum_r A_{ir}(BC)_{rj}=\color{red}{\sum_r A_{ir}\sum_s B_{rj}C_{rj}}$$
It should have been
$$[A(BC)]_{ij}=\sum_r A_{ir}(BC)_{rj}=\color{red}{\sum_r A_{ir}\sum_s B_{rs}C_{sj}}$$
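For context, with the corrected indices the computation finishes in the standard way (this continuation is paraphrased for clarity, not quoted from the book): interchanging the order of summation gives
$$\sum_r A_{ir}\sum_s B_{rs}C_{sj}=\sum_s\Big(\sum_r A_{ir}B_{rs}\Big)C_{sj}=\sum_s (AB)_{is}C_{sj}=[(AB)C]_{ij},$$
which is exactly the associativity $A(BC)=(AB)C$ being proved.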
edited Jan 21 '18 at 9:15
community wiki
2 revs, 2 users 80%
Bijesh K.S
The second point is in fact correctly stated in my copy of the book (I am using the second edition).
– Brahadeesh
Dec 31 '17 at 13:08
More than a typo, there's a stated Corollary on page 356 in Section 9.6 that appears to be false. The details are here:
https://mathoverflow.net/questions/306759/error-in-hoffman-kunze-normal-operators-on-finite-dimensional-inner-product-spa
answered Jul 24 '18 at 23:46
community wiki
Spiro Karigiannis
Chapter 1
Page 3, definition of characteristic.
. . . least $n$ . . .
It should be least positive $n$.
Chapter 2
Page 39, Exercise 3.
. . . $R^5$ . . .
It should be $R^4$.
Chapter 3
Page 96, Exercise 9.
. . . and show that $S = UTU^{-1}$.
It should be $S = U^{-1}TU$.
Chapter 4
Page 129, Theorem 5, second sentence.
If $f$ is a polynomial over $f$ . . .
It should be "If $f$ is a polynomial over $F$ . . .".
Page 137, Example 11.
. . . is the g.c.d. of the polynomials.
Delete the period after "polynomials". I include this seemingly trivial typo because it took me a non-trivial amount of time to realize that the sentence does not actually end until four lines further down: the spurious sentence-ending period makes "the polynomials" refer to the preceding polynomials $x - a$, $x - b$, $x - c$, which renders the sentence obviously false.
Chapter 6
Page 191, second full paragraph.
According to Theorem 5 of Chapter 4, . . .
It should be Theorem 7.
Page 198, Exercise 11.
. . . Section 6.1, . . .
It should be Section 6.2.
Page 219, Exercise 4(b).
. . . for $f$ is the product of the characteristic polynomials for $f_1, \dots, f_k$.
Replace the three occurrences of $f$ with $T$. Note also that the hint applies to all three parts of the exercise, not just part (c) as the formatting suggests.
Page 219, Exercise 6.
. . . Example 6 . . .
It should be Example 5.
Page 219, Exercise 7.
. . . is spanned . . .
It should be is not spanned.
edited 13 hours ago
community wiki
12 revs
Maurice P