Cheating one’s way through linear algebra

Here are a few arguments that may be useful for proving things when one hasn’t set up all the usual machinery of vector spaces, subspaces and dimensions, presented in decreasing order of how seriously they could be used in a first course on matrix algebra. Since it fits well with the syllabus for the first-year course I’m lecturing this term, I’m going to assume reduced row-echelon form is available. Throughout, all matrices have coefficients in a field F.

Invertible if and only if injective by row operations

Given a matrix B in reduced row-echelon form it is almost trivial to write down all solutions to B x = 0. There are non-zero solutions if and only if not every column of B contains a pivot. For example, the solutions to

\left( \begin{matrix} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \left( \begin{matrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{matrix} \right) = \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right)

are

\left\{ \left( \begin{matrix} -2x_2-3x_4 \\ x_2 \\ -4x_4 \\ x_4 \end{matrix} \right) : x_2, x_4 \in F \right\};

the pivot column variables are determined by the non-pivot variables, which may be chosen freely.
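For readers who like to check such things by machine, here is the same computation as a sketch in sympy; its nullspace method returns the basis obtained by setting each free variable to 1 in turn, which matches the general solution above at (x_2, x_4) = (1,0) and (0,1).

```python
# Sketch: the kernel of the example matrix B, computed with sympy.
from sympy import Matrix

B = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4],
            [0, 0, 0, 0]])

# nullspace() returns one basis vector per non-pivot column of B
basis = B.nullspace()

for v in basis:
    assert B * v == Matrix([0, 0, 0])   # each basis vector solves Bx = 0

print([list(v) for v in basis])
```

There are two basis vectors, one for each of the free variables x_2 and x_4, exactly as the pivot/non-pivot description predicts.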

Let f : F^n \rightarrow F^n be a linear map represented by the matrix A. Obviously if f is invertible then f is injective. Conversely, suppose that f is injective and that A has reduced row-echelon form B, so B = QA for some invertible matrix Q. Since row operations do not change the solution set of a linear system, \ker B = \ker A = 0. If B had a zero row then it would have at most n-1 pivots, so by the remark above B would have a non-zero kernel, a contradiction. Therefore B has n pivots and so is the identity matrix. Hence I = QA. Now this only shows that A has a left inverse, but since Q is a product of elementary matrices (corresponding to row operations), and each such elementary matrix is easily seen to be invertible (in the usual sense), Q is invertible and so A = Q^{-1} is invertible.

Very similarly one can show that f is invertible if and only if f is surjective.

Linearly dependent if too many vectors

If A is the (n+1) \times n matrix whose rows are the components of n+1 vectors in F^n (expressed in the standard basis) then the reduced row-echelon form of A has at most n pivot columns, and so the bottom row is zero. Keeping track of the row operations, this exhibits a non-trivial F-linear combination of the rows of A equal to zero.
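A quick illustration in sympy (the four vectors are my own choice): a dependency among the rows of A is a non-zero vector c with c^T A = 0, i.e. an element of the nullspace of A^T, which is guaranteed to be non-trivial since A^T has n+1 columns but rank at most n.

```python
# Four vectors in Q^3 must be linearly dependent.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9],
            [1, 0, 1]])   # rows: n + 1 = 4 vectors in Q^3

# rank(A^T) <= 3 < 4, so the nullspace is non-trivial
c = A.T.nullspace()[0]

# c gives a dependency among the rows of A
assert c.T * A == Matrix([[0, 0, 0]])
```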

Invertible if and only if injective by disguised minimal polynomial and Primary Decomposition Theorem

The vector space of all linear endomorphisms of F^n is n^2 dimensional. By the previous result there is a non-trivial linear dependency between the n^2 + 1 powers I, A, \ldots, A^{n^2}. Let c_0 I + c_1 A +\cdots + c_d A^d = 0 be such a relation with d minimal. If c_0\not=0 then formally multiplying through by c_0^{-1}A^{-1} gives

-c_0^{-1}c_1 I - c_0^{-1}c_2 A - \cdots - c_0^{-1}c_dA^{d-1}

as a plausible (and clearly correct) guess for the inverse of A. If c_0 = 0 then the relation reads A^r g(A) = 0 where r \in \mathbb{N} and g is a polynomial of degree d-r with g(0) \not= 0. By minimality of d, we have g(A) \not=0; choose v with w = g(A)v \not=0. Then A^r w = 0, so, choosing s \in \mathbb{N}_0 greatest such that A^s w\not =0, we have A(A^s w) = 0, and so \ker A \not= 0. So either f is invertible, or \ker f \not=0, and so f is not injective.

Row rank is column rank

William P. Wardlaw published a very short proof in Mathematics Magazine 78 (2005) 316–318. Let A be a non-zero n \times n matrix. Let r be minimal such that there exists an n \times r matrix P and an r \times n matrix Q such that PQ = A. By minimality the rows of Q are linearly independent (if one row of Q were a combination of the others, we could delete it, adjusting P, to get a smaller factorization), and clearly the row span of Q contains the row span of A. Conversely, taking the rows of Q to be a basis of the row space of A gives a factorization with r equal to the row rank. Hence r is the row rank of A. Dually, transposing the factorization, r is the column rank of A.
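One concrete such factorization comes straight out of the reduced row-echelon form: take P to be the pivot columns of A and Q the non-zero rows of the rref. A sympy sketch (the example matrix is mine):

```python
# Rank factorization A = PQ with r = rank(A), built from the rref.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

R, pivots = A.rref()
r = len(pivots)                               # = rank(A)
P = A.extract(list(range(A.rows)), list(pivots))   # pivot columns of A
Q = R[:r, :]                                  # non-zero rows of the rref

assert P * Q == A
assert r == A.rank()
```

This works because each column of A is the same linear combination of the pivot columns of A as the corresponding column of R is of the pivot columns of R, and the latter form an identity block.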

Linear dependencies by pigeonhole principle

Take n+1 vectors v_1, \ldots, v_{n+1} in \mathbb{F}_p^n. There are p^{n+1} linear combinations a_1v_1 + \cdots + a_{n+1}v_{n+1} of these vectors, but only p^n elements of \mathbb{F}_p^n, so two linear combinations with different coefficients are equal; subtracting the two coefficient tuples gives a non-trivial linear dependency.
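In \mathbb{F}_2^2 the collision can be found by brute force (the example vectors are mine):

```python
# Pigeonhole in F_p^n: 3 vectors in F_2^2, 2^3 combinations, 2^2 targets.
from itertools import product

p, n = 2, 2
vectors = [(1, 0), (0, 1), (1, 1)]   # n + 1 vectors in F_p^n

seen = {}
for coeffs in product(range(p), repeat=n + 1):
    combo = tuple(sum(a * v[j] for a, v in zip(coeffs, vectors)) % p
                  for j in range(n))
    if combo in seen:
        # two different coefficient tuples give the same vector;
        # their difference is a non-trivial dependency
        c = tuple((a - b) % p for a, b in zip(coeffs, seen[combo]))
        break
    seen[combo] = coeffs

assert any(c)   # the dependency is non-trivial
```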

As I learned from this MathOverflow answer, one can make this argument work in \mathbb{Z}^n, and so in \mathbb{Q}^n by scaling to remove denominators. Let n+1 non-zero elements of \mathbb{Z}^n be given, and let N be the largest component in absolute value. For M \in \mathbb{N} there are (2M+1)^{n+1} \mathbb{Z}-linear combinations with coefficients in \{-M,\ldots,M\}, each of which has components in \{-(n+1)MN, \ldots, (n+1)MN\}. The pigeonhole principle strikes as soon as (2(n+1)MN+1)^n < (2M+1)^{n+1}; clearly this is true for large enough M. For instance, since (2(n+1)MN+1)^n \le 3^n (n+1)^n N^n M^n and 2^n M^n M < (2M+1)^{n+1}, it is sufficient if (3(n+1)/2)^n N^n \le M.
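The same search can be run over \mathbb{Z}^2, growing M until the pigeonhole strikes (the vectors and the search strategy are mine; in practice a collision appears far below the worst-case bound):

```python
# Pigeonhole in Z^n: find an integer dependency among 3 vectors in Z^2.
from itertools import product

vectors = [(1, 3), (2, 1), (5, 4)]
n = 2

M = 1
while True:
    seen = {}
    found = None
    for coeffs in product(range(-M, M + 1), repeat=n + 1):
        s = tuple(sum(a * v[j] for a, v in zip(coeffs, vectors))
                  for j in range(n))
        if s in seen:
            # difference of two colliding coefficient tuples
            found = tuple(a - b for a, b in zip(coeffs, seen[s]))
            break
        seen[s] = coeffs
    if found:
        break
    M += 1

# found is a non-trivial integer dependency among the vectors
assert any(found)
```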

Question. Is there a trick to extend this argument to arbitrary fields?

Modules over general rings

Yiftach Barnea pointed out to me that if A is an n \times n matrix with entries in a principal ideal domain R, regarded as a linear map R^n \rightarrow R^n, then A surjective implies A injective. One proof (to my eyes, essentially the same as using the structure theorem for modules over a PID) is to use row and column operations to reduce to the case where A is a diagonal matrix: then all the diagonal entries of A must be invertible in R, so A is itself invertible, and so injective. Clearly the converse holds if and only if R is a field.
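Over R = \mathbb{Z} the diagonal-reduction argument boils down to a determinant criterion: A is injective if and only if \det A \not= 0, and surjective if and only if \det A is a unit of \mathbb{Z}, i.e. \pm 1. A small check in sympy (the example matrices are mine):

```python
# Injective vs surjective for integer matrices acting on Z^n.
from sympy import Matrix

doubling = Matrix([[2, 0], [0, 1]])    # injective, not surjective on Z^2
unimodular = Matrix([[2, 1], [1, 1]])  # det = 1: invertible over Z

assert doubling.det() != 0             # injective
assert doubling.det() not in (1, -1)   # not surjective: (1, 0) has no preimage
assert unimodular.det() in (1, -1)     # surjective, hence injective
assert unimodular.inv() == Matrix([[1, -1], [-1, 2]])   # inverse has entries in Z
```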

Suppose that R is a unital ring containing a non-zero element b with a left inverse. Then multiplication by b, i.e. r \mapsto br, is an injective linear map. This map is surjective if and only if b has a right inverse. So for general rings, neither implication holds.


Question. Is there a nice characterization of rings R such that if f : R^n \rightarrow R^n is a surjective R-linear map then f is injective?

Partial answer.

My Ph.D. student Bill O’Donovan told me about Hopfian objects. By definition, an R-module M is Hopfian if and only if it has the property in the question, i.e. any surjective R-module endomorphism of M is injective.

A related property is that M is strongly Hopfian if and only if whenever f : M \rightarrow M is an R-module homomorphism, the kernel chain

\mathrm{ker} f \subseteq \mathrm{ker} f^2 \subseteq \mathrm{ker} f^3 \subseteq \cdots

stabilises. The relevant implication follows from the standard proof of Fitting’s lemma.

Claim. Strongly Hopfian implies Hopfian.

Proof. Let f : M \rightarrow M be a surjective endomorphism, and suppose that \mathrm{ker} f^n = \mathrm{ker} f^{n+1}. An easy induction shows the chain is constant from \mathrm{ker} f^n onwards: if x \in \mathrm{ker} f^{m+2} then fx \in \mathrm{ker} f^{m+1} = \mathrm{ker} f^m, so x \in \mathrm{ker} f^{m+1}. In particular \mathrm{ker} f^{2n} = \mathrm{ker} f^n. Let u \in \mathrm{ker} f^n. Since f is surjective, so is f^n, and u = f^n v for some v \in M. Then f^{2n} v = f^n u = 0, so v \in \mathrm{ker} f^{2n} = \mathrm{ker} f^n, giving u = f^n v = 0. Therefore \ker f \subseteq \mathrm{ker} f^n = \{ 0 \} and f is injective.
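In the finite-dimensional case the kernel chain can be watched stabilising directly, as in Fitting’s lemma (the example matrix is mine): f below acts nilpotently on one 2-dimensional summand and invertibly on another.

```python
# Kernel chain of an endomorphism of Q^4: dim ker f^i stabilises.
from sympy import Matrix, diag, eye

# block diagonal: a nilpotent 2x2 block and an invertible 2x2 block
f = diag(Matrix([[0, 1], [0, 0]]), Matrix([[2, 0], [0, 3]]))

dims = []
g = eye(4)
for i in range(1, 6):
    g = g * f                       # g = f^i
    dims.append(len(g.nullspace())) # dim ker f^i

assert dims == [1, 2, 2, 2, 2]      # chain stabilises at i = 2
```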

In particular, any Noetherian ring is Hopfian as a module for itself. There are Hopfian modules that are not strongly Hopfian; for example, Proposition 2.7(4) in this paper by A. Hmaimou, A. Kaidi and E. Sánchez Campos gives the example of the \mathbb{Z}-module \bigoplus_{n \ge 1} \mathbb{Z}/p_n^n\mathbb{Z}, where p_n is the nth prime; the kernel chain for an endomorphism acting nilpotently with degree n on the nth summand does not stabilise, but since any endomorphism preserves the summands (they are the p_n-primary components), the module is Hopfian.

I asked on MathOverflow whether there was a ring R and a Hopfian R-module M such that M \oplus M was not Hopfian, and my worst suspicions were confirmed.

