## Burnside’s method

Burnside proved in 1901 that if $p$ is an odd prime then a permutation group containing a regular subgroup isomorphic to $C_{p^2}$ is either imprimitive or 2-transitive. His proof was an early application of character theory to permutation groups. Groups with this property are now called B-groups.

Burnside attempted to generalize his 1901 result in two later papers: in 1911, he claimed a proof that $C_{p^n}$ is a B-group for any prime $p$ and any $n \ge 2$, and in 1921, he claimed a proof that all abelian groups, except for elementary abelian groups, are B-groups. The first claim is correct, but his proof has a serious gap. This error appears to have been unobserved (or, just possibly, observed but ignored, since the result was soon proved in another way using Schur’s theory of $S$-rings) until 1994 when it was noted by Peter Neumann, whose explication may be found in his introduction to Burnside’s collected works. In 1995, Knapp extended Burnside’s argument to give a correct proof. Burnside’s second claim is simply false: for example, $S_4 \wr S_2$ acts primitively on $\{1,2,3,4\}^2$, and has a regular subgroup isomorphic to $C_4 \times C_4$. In one of my current projects, I’ve simplified Knapp’s proof and adapted Burnside’s character-theoretic methods to show, more generally, that any cyclic group of composite order is a B-group.

The purpose of this post is to record some proofs omitted for reasons of space from the draft paper. A companion post has some notes on B-groups that may be of more general interest.

### Sums over roots of unity

Let $\zeta$ be a primitive $p^n$th root of unity. Define

$R(r) = \{ r, r + p^{n-1}, \ldots, r+ (p-1)p^{n-1} \}$

for $0 < r < p^{n-1}$. Define a subset $Z$ of $\{1,\ldots,p^n-1\}$ to be null if there exists $s \in \mathbb{N}_0$ and distinct $r_{ij} \in \{1,\ldots, p^{n-1}-1\}$ for $0 \le i \le p-1$ and $1 \le j \le s$ such that $r_{ij} \equiv i$ mod $p$ for each $i$ and $j$ and

$Z = \bigcup_{i=0}^{p-1} \bigcup_{j=1}^s R(r_{ij}).$

Proposition 6.2. Let $\omega = \zeta^{p^{n-1} c}$ where $c$ is not divisible by $p$. Let $\mathcal{O} \subseteq \{1,\ldots,p^n-1\}$. Then

$\sum_{i \in \mathcal{O}} \zeta^i = \sum_{i \in \mathcal{O}} \omega^i$

if and only if either

1. $\mathcal{O}$ is null; or
2. $\mathcal{O} = \{p^{n-1}, \ldots, (p-1)p^{n-1} \} \; \cup \; \bigcup_{i=1}^{p-1} R(r_i) \; \cup \; Z$ where $Z$ is a null set, the $r_i$ are distinct elements of $\{1,\ldots,p^{n-1}-1\}\backslash Z$ and $r_i \equiv i$ mod $p$ for each $i$.

Proof. Since the minimum polynomial of $\zeta$ is

$1+X^{p^{n-1}} + \cdots + X^{(p-1)p^{n-1}}$

we have $\sum_{i \in R(r)} \zeta^i = 0$. Since $\omega^{p^{n-1}} = 1$, we have $\sum_{i \in R(r)} \omega^i = p\omega^r$. It follows that $\sum_{i \in Z} \zeta^i = \sum_{i \in Z} \omega^i = 0$ if $Z$ is a null set. (For the second equality, note that the contributions from the $R(r_{ij})$ for fixed $j$ combine to give $p + p\omega + \cdots + p\omega^{p-1} = 0$.) For (2) we have $\sum_{i=1}^{p-1} \zeta^{i p^{n-1}} = \omega + \cdots + \omega^{p-1} = -1$, $\sum_{i=1}^{p-1} \omega^{i p^{n-1}} = p-1$ and $\sum_{i=1}^{p-1} p \omega^{r_i} = -p$. This proves the ‘if’ direction.

Conversely, by Lemma 2.1 in the paper, $\mathcal{O} \backslash \{p^{n-1},\ldots,(p-1)p^{n-1} \}$ is a union of some of the sets $R(r)$. Hence there exist a unique subset $A$ of $\{1,\ldots,p-1\}$, unique $j_i \in \mathbb{N}_0$ for $0 \le i \le p-1$, and unique distinct $r_{ij} \in \{1,\ldots, p^{n-1}-1\}$ with $r_{ij} \equiv i$ mod $p$, for $0 \le i \le p-1$ and $1 \le j \le j_i$, such that

$\mathcal{O} = \{ p^{n-1} i : i \in A \} \cup \bigcup_{i=0}^{p-1}\bigcup_{j=1}^{j_i} R(r_{ij}).$

We have $\sum_{i \in \mathcal{O}} \zeta^i = \sum_{i \in A} \omega^i$ and $\sum_{i \in \mathcal{O}} \omega^i = |A| + \sum_{i=0}^{p-1} pj_i \omega^i$. Therefore

$|A| + \sum_{i=0}^{p-1} (pj_i - [i \in A]) X^i$

has $\omega$ as a root. Since this polynomial has degree at most $p-1$ and the minimal polynomial of $\omega$ is $1+X +\cdots + X^{p-1}$, it follows that the coefficients are constant. Hence

$|A| + pj_0 = pj_i - [i \in A]$

for $1\le i \le p-1$. If $A = \varnothing$ then $j_0 = j_1 = \ldots = j_{p-1}$ and $\mathcal{O}$ is null. Otherwise, taking the previous displayed equation mod $p$ we see that $A = \{1,\ldots,p-1\}$, and, moreover, the $j_i$ are constant for $1 \le i \le p-1$. (This holds even if $p=2$.) Set

$s = j_1 = \ldots = j_{p-1}.$

We have $j_0 = s-1$. Hence $s \in \mathbb{N}$ and, choosing in any way $r_i \in \{1,\ldots,p^{n-1}-1\}$ with $r_i \equiv i$ mod $p$ and $r_i \in \mathcal{O}$ for $1 \le i \le p-1$, we see that $\mathcal{O}$ is the union of $\{p^{n-1}, \ldots, (p-1)p^{n-1}\}$, the sets $R(r_i)$ for $1 \le i \le p-1$ and a null set. $\Box$
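As a sanity check, the ‘if’ direction of Proposition 6.2 can be verified numerically in small cases. The sketch below is my own illustration, not from the paper: it takes $p = 2$, $n = 3$, builds the sums in floating point, and compares them with an ad hoc $10^{-9}$ tolerance.

```python
import cmath

# Sanity check of the 'if' direction of Proposition 6.2 for p = 2, n = 3.
p, n = 2, 3
zeta = cmath.exp(2j * cmath.pi / p**n)    # primitive p^n-th root of unity
c = 1                                     # any c not divisible by p
omega = zeta ** (p**(n - 1) * c)          # a primitive p-th root of unity

def R(r):
    """The set R(r) = {r, r + p^(n-1), ..., r + (p-1)p^(n-1)}."""
    return {r + j * p**(n - 1) for j in range(p)}

def sums_agree(O):
    """Does the sum of zeta^i over O equal the sum of omega^i over O?"""
    s1 = sum(zeta**i for i in O)
    s2 = sum(omega**i for i in O)
    return abs(s1 - s2) < 1e-9

# Case (1): Z = R(2) ∪ R(1) is null, since 2 ≡ 0 and 1 ≡ 1 (mod 2).
assert sums_agree(R(2) | R(1))

# Case (2): O = {p^(n-1)} ∪ R(3), with r_1 = 3 ≡ 1 (mod 2) and Z empty.
assert sums_agree({p**(n - 1)} | R(3))

# A set of neither form fails the test.
assert not sums_agree({1})
```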

### Ramanujan matrices.

For $p$ prime and $n \in \mathbb{N}$ we define $R(p^n)$ to be the matrix

$\left( \begin{matrix} \scriptstyle 1 & \scriptstyle1 & \scriptstyle 1 & \scriptstyle\ldots & \scriptstyle 1 & \scriptstyle1 & \scriptstyle1 \\ \scriptstyle-1 & \scriptstyle p-1 & \scriptstyle p-1 & \scriptstyle \ldots & \scriptstyle p-1 & \scriptstyle p-1 & \scriptstyle p-1 \\ \scriptstyle0 & \scriptstyle -p & \scriptstyle p(p-1) & \scriptstyle \ldots & \scriptstyle p(p-1) & \scriptstyle p(p-1) & \scriptstyle p(p-1) \\ \scriptstyle 0 & \scriptstyle 0 & \scriptstyle -p^2 & \scriptstyle \ldots & \scriptstyle p^2(p-1) & \scriptstyle p^2(p-1) & \scriptstyle p^2(p-1) \\ \scriptstyle\vdots & \scriptstyle \vdots & \scriptstyle \vdots & \scriptstyle \ddots & \scriptstyle \vdots & \scriptstyle \vdots & \scriptstyle \vdots \\ \scriptstyle 0 & \scriptstyle 0 & \scriptstyle 0 & \scriptstyle \ldots & \scriptstyle -p^{n-2} & \scriptstyle p^{n-2}(p-1) & \scriptstyle p^{n-2}(p-1) \\ \scriptstyle 0 & \scriptstyle 0 & \scriptstyle 0 & \scriptstyle \ldots & \scriptstyle 0 & \scriptstyle -p^{n-1} & \scriptstyle p^{n-1}(p-1) \end{matrix} \right).$

More generally, if $d$ has prime factorization $p_1^{a_1} \ldots p_s^{a_s}$, we define $R(d) = R(p_1^{a_1}) \otimes \cdots \otimes R(p_s^{a_s})$. The rows and columns of $R(d)$ are labelled by the set $D$ of divisors of $d$, as indicated below for the case $d = 2^3 p$, with $p$ an odd prime.

$R(2^3 p) = \begin{matrix} 1 \\ 2 \\ 2^2 \\ 2^3 \\ p \\ 2p \\ 2^2p \\ 2^3p \end{matrix} \left( \begin{array}{cccc|cccc} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -1 & 1 & 1 & 1 & -1 & 1 & 1 & 1 \\ 0 & -2 & 2 & 2 & 0 & -2 & 2 & 2 \\ 0 & 0 & -2^2 & 2^2 & 0 & 0 & -2^2 & 2^2 \\ \hline -1 & -1 & -1 & -1 & p-1 & p-1 & p-1 & p-1 \\ 1 & -1 & -1 & -1 & -(p-1) & p-1 & p-1 & p-1 \\ 0 & 2 & -2 & -2 & 0 & -2(p-1) & 2(p-1) & 2(p-1) \\ 0 & 0 & 2^2 & -2^2 & 0 & 0 & -2^2(p-1) & 2^2(p-1) \end{array} \right).$
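These matrices are easy to generate directly from the definition. Here is a minimal sketch (the helper names `ramanujan_pp` and `kron` are my own) building $R(p^n)$ and the Kronecker product:

```python
def ramanujan_pp(p, n):
    """The (n+1) x (n+1) matrix R(p^n), rows and columns labelled 1, p, ..., p^n."""
    M = [[1] * (n + 1)]                                   # row labelled 1
    for s in range(1, n + 1):
        M.append([0] * (s - 1)                            # columns p^t with t <= s-2
                 + [-p**(s - 1)]                          # column p^(s-1)
                 + [p**(s - 1) * (p - 1)] * (n + 1 - s))  # columns p^t with t >= s
    return M

def kron(A, B):
    """Kronecker product of two integer matrices, as nested lists."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

# R(2^3 p) for p = 3: rows and columns labelled 1, 2, 2^2, 2^3, p, 2p, 2^2 p, 2^3 p.
R24 = kron(ramanujan_pp(3, 1), ramanujan_pp(2, 3))
```

Putting the odd-prime factor in the outer position of `kron` matches the row ordering $1, 2, 2^2, 2^3, p, 2p, \ldots$ used in the display above.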

Say that a partition of $D \backslash \{ d\}$ is coprime if the highest common factor of the numbers in each part is $1$. The aim of the game is to find a non-empty set $X$ of rows of $R(d)$ such that the subsets of columns (excluding column $d$) on which the sums of the rows in $X$ agree form a coprime partition of $D \backslash \{d\}$.

There is an application to B-groups only when $d$ is even, in which case we may assume that $2 \in X$. Proposition 6.7 in the paper states that, in this case, the only way to win the game when $d = 2^n$, $d = 2^n p$ or $d = 2 p^n$, where $p$ is an odd prime, is to put every row in $X$. This implies that $C_{2^m p^n}$ is a B-group when $n \le 1$ or $m \le 1$. However, the result on the game may well hold more generally.

#### Constant sums for $p^n$

Every part of a coprime partition of $D \backslash \{p^n\}$ must contain $1$, since the highest common factor of any set of proper divisors of $p^n$ not containing $1$ is divisible by $p$. Hence the only coprime partition of $D \backslash \{p^n\}$ has a single part, and so the sums over the rows in $X$ are equal on every column except column $p^n$. By adding $p^{e-1}$ to each entry in row $p^e$ for $e \in \{1,\ldots, n\}$, we obtain the matrix below.

$\left( \begin{matrix} \scriptstyle 1 & \scriptstyle1 & \scriptstyle 1 & \scriptstyle\ldots & \scriptstyle 1 & \scriptstyle1 & \scriptstyle1 \\ \scriptstyle 0 & \scriptstyle p & \scriptstyle p & \scriptstyle \ldots & \scriptstyle p & \scriptstyle p & \scriptstyle p \\ \scriptstyle p & \scriptstyle 0 & \scriptstyle p^2 & \scriptstyle \ldots & \scriptstyle p^2 & \scriptstyle p^2 & \scriptstyle p^2 \\ \scriptstyle p^2 & \scriptstyle p^2 & \scriptstyle 0 & \scriptstyle \ldots & \scriptstyle p^3 & \scriptstyle p^3 & \scriptstyle p^3 \\ \scriptstyle\vdots & \scriptstyle \vdots & \scriptstyle \vdots & \scriptstyle \ddots & \scriptstyle \vdots & \scriptstyle \vdots & \scriptstyle \vdots \\ \scriptstyle p^{n-2} & \scriptstyle p^{n-2} & \scriptstyle p^{n-2} & \scriptstyle \ldots & \scriptstyle 0 & \scriptstyle p^{n-1} & \scriptstyle p^{n-1} \\ \scriptstyle p^{n-1} & \scriptstyle p^{n-1} & \scriptstyle p^{n-1} & \scriptstyle \ldots & \scriptstyle p^{n-1} & \scriptstyle 0 & \scriptstyle p^n \end{matrix} \right)$

which still has constant sums over the rows in $X$. Let $x_i = 1$ if $p^i \in X$ and let $x_i = 0$ otherwise. Writing numbers in base $p$, the row sums over $X$ are, for each column,

\begin{aligned} 1 \quad &\quad x_nx_{n-1} \ldots x_3x_2x_0 \\ p \quad &\quad x_n x_{n-1} \ldots x_3x_1x_0 \\ p^2\quad &\quad x_n x_{n-1} \ldots x_4x_2x_1x_0 \\ \vdots \; \quad &\quad \qquad \vdots \\ p^{n-2} \quad &\quad x_nx_{n-2} \ldots x_2x_1x_0 \\ p^{n-1} \quad &\quad x_{n-1}x_{n-2} \ldots x_2x_1x_0. \end{aligned}

(Note in each case there are $n$ digits, the most significant corresponding to $p^{n-1}$.) The numbers in the right column are equal, hence all the $x_i$ are equal and $X = D$, as required. Taking $p=2$ this gives an alternative proof of Proposition 6.7(i).
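The digit patterns can be confirmed by brute force in a small case. The following sketch (my own, for $p = 3$, $n = 4$) checks, for every choice of indicator vector $(x_0, \ldots, x_n)$ and every column $p^t$ with $t < n$, that the column sum of the modified matrix is the base-$p$ number whose digits are the $x_e$ with $x_{t+1}$ omitted, as displayed above.

```python
from itertools import product

p, n = 3, 4

def modified(p, n):
    """R(p^n) after adding p^(e-1) to every entry of row p^e, for e = 1, ..., n."""
    M = [[1] * (n + 1)]                               # row labelled 1
    for e in range(1, n + 1):
        M.append([p**(e - 1) if t <= e - 2 else 0 if t == e - 1 else p**e
                  for t in range(n + 1)])
    return M

M = modified(p, n)

# For each subset X of rows, given by indicators x_0, ..., x_n, the sum over
# column p^t (t < n) has base-p digits x_n, ..., x_{t+2}, x_t, ..., x_0:
# the digit x_{t+1} is omitted.
for x in product((0, 1), repeat=n + 1):
    for t in range(n):
        col_sum = sum(x[e] * M[e][t] for e in range(n + 1))
        digits = [x[e] for e in range(n + 1) if e != t + 1]
        assert col_sum == sum(d * p**k for k, d in enumerate(digits))
```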

#### Proofs of further claims on the Ramanujan matrices

Let $R(d)^{\circ}$ denote $R(d)$ rotated by a half turn. The following result was stated without proof in the paper; it implies that each $R(d)$ is invertible, with inverse $\frac{1}{d}R(d)^\circ$. (Update: I later found a much better proof, now outlined in the paper, using a lower-upper decomposition of $R(d)$.)

Proposition. We have $R(p^n)^{-1} = \frac{1}{p^n} R(p^n)^\circ$ and $\det R(p^n) = p^{n(n+1)/2}$.

Proof. Since $R(p^n)^\circ_{p^sp^t} = R(p^n)_{p^{n-s}p^{n-t}}$, we have $R(p^n)^\circ_{p^np^t} = 1$ for all $t \in \{0,\ldots, n\}$ and

\begin{aligned} R(p^n)^\circ_{p^sp^t} &= \begin{cases} (p-1)p^{n-s-1} & n-t \ge n-s \\ -p^{n-s-1} & n-t=n-s-1 \\ 0 & n-t \le n-s-2 \end{cases} \\ &= \begin{cases} (p-1)p^{n-s-1} & s \ge t \\ -p^{n-s-1} & s = t-1 \\ 0 & s \le t-2 \end{cases} \end{aligned}

for $s \in \{0,\ldots, n-1\}$ and $t \in \{0,\ldots,n\}$. We use this to show that

$\sum_{c=0}^n R(p^n)_{p^rp^c} R(p^n)^{\circ}_{p^c p^{r'}} = p^n [r=r']$

for $r,r' \in \{0,\ldots, n\}$. When $r=0$ the first factor in each product is $1$, so the left-hand side is the sum of column $p^{n-r'}$ of $R(p^n)$; this is $p^n$ if $r'=0$ and $0$ otherwise, as required. Now suppose that $r \ge 1$. Since $R(p^n)_{p^rp^c}$ vanishes when $c \le r-2$, the left-hand side is

$-p^{r-1}R(p^n)^\circ_{p^{r-1}p^{r'}} + \sum_{c=r}^n p^{r-1}(p-1) R(p^n)^\circ_{p^cp^{r'}}.$

Take out a factor of $p^{r-1}$ to define $L$. Substituting for $R(p^n)^\circ_{p^cp^{r'}}$ using the formula above, and splitting off the $c = n$ summand, in which $R(p^n)^\circ_{p^np^{r'}} = 1$, we get

\begin{aligned} L &= \begin{cases} -(p-1)p^{n-r} & r' \le r-1 \\ p^{n-r} & r' = r \\ 0 & r' \ge r+1 \end{cases} \\ & \qquad + (p-1) \sum_{c=r}^{n-1} \begin{cases} (p-1)p^{n-c-1} & c \ge r' \\ -p^{n-c-1} & c = r'-1 \\ 0 & c \le r'-2 \end{cases} + (p-1).\end{aligned}

We must now consider three cases. When $r=r'$ we get

\begin{aligned} L &= p^{n-r} + (p-1)\sum_{c=r}^{n-1} (p-1)p^{n-c-1} + p-1 \\ &= p^{n-r} + (p-1)(p^{n-r}-1) + p-1 \\ &= p^{n-r+1} \end{aligned}

as required. When $r \ge r'+1$ we have $c > r'$ in all summands so

\begin{aligned} L &= -(p-1)p^{n-r} + (p-1)\sum_{c=r}^{n-1} (p-1)p^{n-c-1} + (p-1) \\ &= (p-1) \bigl( -p^{n-r} + (p^{n-r}-1) + 1 \bigr) \\ &= 0. \end{aligned}

When $r \le r'-1$ the first non-zero summand occurs for $c=r'-1$ so we have

\begin{aligned} L &= - (p-1)p^{n-r'} + (p-1) \sum_{c=r'}^{n-1} (p-1)p^{n-c-1} + (p-1) \\ &= (p-1) \bigl( -p^{n-r'} + (p^{n-r'}-1) + 1 \bigr) \\ &=0. \end{aligned}

Now taking determinants, using that $\det R(p^n)^\circ = \det R(p^n)$ (the matrices are conjugate by the matrix with $1$s on its antidiagonal and $0$s elsewhere), we get $1 / \det R(p^n) = \det R(p^n) / (p^n)^{n+1}$. Hence $\det R(p^n) = p^{n(n+1)/2}$, as required.
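Both parts of the proposition are easy to machine-check for small $p$ and $n$. Below is a quick sketch (my own, not from the paper), computing the determinant by exact rational Gaussian elimination:

```python
from fractions import Fraction

def ramanujan(p, n):
    """The (n+1) x (n+1) matrix R(p^n)."""
    M = [[1] * (n + 1)]
    for s in range(1, n + 1):
        M.append([0] * (s - 1) + [-p**(s - 1)]
                 + [p**(s - 1) * (p - 1)] * (n + 1 - s))
    return M

def rotate(M):
    """M rotated by a half turn."""
    return [row[::-1] for row in reversed(M)]

def det(M):
    """Determinant, by Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for i in range(len(A)):
        j = next(r for r in range(i, len(A)) if A[r][i] != 0)
        if j != i:
            A[i], A[j] = A[j], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, len(A)):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return d

for p in (2, 3, 5):
    for n in (1, 2, 3):
        R, Rc = ramanujan(p, n), rotate(ramanujan(p, n))
        scaled_identity = [[p**n if i == j else 0 for j in range(n + 1)]
                           for i in range(n + 1)]
        product = [[sum(R[i][k] * Rc[k][j] for k in range(n + 1))
                    for j in range(n + 1)] for i in range(n + 1)]
        assert product == scaled_identity            # R(p^n) R(p^n)^° = p^n I
        assert det(R) == p**(n * (n + 1) // 2)       # det R(p^n) = p^(n(n+1)/2)
```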

### Jordan block matrices.

Let $J$ be the $m \times m$ unipotent upper-triangular Jordan block matrix over $\mathbb{F}_p$. We have $(J-I)^m = 0$, hence $J^{p^r} - I = (J-I)^{p^r} = 0$ whenever $p^r \ge m$. On the other hand, if $k \le m-1$ then $(J-I)^k$ has a $1$ in position $(1,k+1)$. Therefore $J^{p^r} = I$ if and only if $p^r \ge m$. We also need a result on the relative trace matrix $I + J + \cdots + J^{p^{r}-1}$. Note that $\binom{p}{k}$ is divisible by $p$ for $k \in \{1,\ldots, p-1\}$. (For instance, a $p$-cycle acts freely on the set of $k$-subsets of $\{1,\ldots, p\}$.) An easy inductive argument using $\binom{p-1}{k-1} + \binom{p-1}{k} = \binom{p}{k}$ shows that $\binom{p-1}{k} \equiv (-1)^k$ mod $p$ for all $k$. Now Lucas’ Theorem implies that $\binom{p^r-1}{k} \equiv (-1)^k$ mod $p$. Hence

$(J-I)^{p^r-1} = I + J + \cdots + J^{p^r-1}.$

It follows that the right-hand side is $0$ if and only if $p^r > m$.
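These facts can be checked directly for small parameters. A sketch of my own with $p = 3$ and $m = 5$, representing matrices over $\mathbb{F}_p$ as nested lists with entries reduced mod $p$:

```python
p, m = 3, 5

def mat_mul(A, B):
    """Product of square matrices, with entries reduced mod p."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    """A^k mod p (A^0 is the identity)."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = mat_mul(R, A)
    return R

I = [[int(i == j) for j in range(m)] for i in range(m)]
J = [[int(j in (i, i + 1)) for j in range(m)] for i in range(m)]  # unipotent Jordan block
JmI = [[(a - b) % p for a, b in zip(rj, ri)] for rj, ri in zip(J, I)]
zero = [[0] * m for _ in range(m)]

# J^(p^r) = I if and only if p^r >= m: here p = 3 < m = 5 but p^2 = 9 >= 5.
assert mat_pow(J, p) != I and mat_pow(J, p**2) == I

# (J - I)^(p^r - 1) equals the relative trace I + J + ... + J^(p^r - 1).
for r in (1, 2):
    trace = zero
    for k in range(p**r):
        trace = [[(a + b) % p for a, b in zip(ra, rb)]
                 for ra, rb in zip(trace, mat_pow(J, k))]
    assert mat_pow(JmI, p**r - 1) == trace

# The relative trace vanishes if and only if p^r > m.
assert mat_pow(JmI, p - 1) != zero and mat_pow(JmI, p**2 - 1) == zero
```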