## Signed binomial matrices of small order

December 28, 2020

Let $B(n)$ denote the $n \times n$ Pascal’s Triangle matrix defined by $B(n)_{xy} = \binom{x}{y}$, where, as in the motivating paper, we number rows and columns of matrices from 0. More generally, let $B_a(n)$ be the shifted Pascal’s Triangle matrix defined by $B_a(n)_{xy} = \binom{x+a}{y+a}$. Let $J(n)_{xy} = [x+y=n-1]$ be the involution reversing the order of rows (or columns) of a matrix, and let

$D^\pm(n) = \mathrm{Diag}(1,-1,\ldots, (-1)^{n-1}).$

Since $n$ is fixed throughout, we shall write $B$, $B_a$, $J$ and $D^\pm$ for readability in proofs.

Claim 1. For any $a \in \mathbb{N}$, and any $n \ge 2$, the matrix $D^\pm(n)B_a(n)$ is an involution.

Proof. Since $n\ge 2$, the matrix is not the identity. Since $\bigl( D^\pm B_a \bigr)_{xy} = (-1)^x \binom{x+a}{y+a}$ we have

\begin{aligned} \bigl( D^\pm B_a D^\pm B_a \bigr)_{xz} &= (-1)^x \sum_y (-1)^y \binom{x+a}{y+a} \binom{y+a}{z+a} \\ &= (-1)^x \sum_y \binom{x+a}{z+a}\binom{x-z}{y-z}(-1)^y \\ &= (-1)^{x+z} \binom{x+a}{z+a} \sum_r \binom{x-z}{r}(-1)^r \\ &= (-1)^{x+z} \binom{x+a}{z+a} [x=z] \\ &= [x=z] \end{aligned}

as required. $\quad\Box$
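Claim 1 is easy to sanity-check numerically. Here is a short Python sketch (mine, not from the original post) that builds $D^\pm(n)B_a(n)$ entrywise and verifies that it squares to the identity for small $n$ and $a$:

```python
from math import comb

def signed_pascal(n, a=0):
    # (D^pm B_a)_{xy} = (-1)^x * C(x+a, y+a), rows and columns numbered from 0
    return [[(-1) ** x * comb(x + a, y + a) for y in range(n)] for x in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[x][y] * B[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

def identity(n):
    return [[int(x == y) for y in range(n)] for x in range(n)]

# Claim 1: D^pm B_a squares to the identity for every shift a
for n in range(2, 8):
    for a in range(5):
        M = signed_pascal(n, a)
        assert matmul(M, M) == identity(n)
```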

Claim 2. We have $B(n)^{-1} J(n) B(n) = D^\pm(n) J(n) B(n) J(n)$ and $B(n) J(n) B(n)^{-1} = J(n) B(n) J(n) D^\pm(n)$.

Proof. The alternating sum used to prove Claim 1 can be used to show that $B(n)^{-1}_{xy} = (-1)^{x+y} \binom{x}{y}$. Hence

\begin{aligned} \bigl(B(n)^{-1}&J(n)B(n) \bigr)_{xz} \\ &= \sum_{y} (-1)^{x+y} \binom{x}{y} \binom{n-1-y}{z} \\ &= (-1)^x\sum_y (-1)^y \binom{x}{y} (-1)^{n-1-y-z} \binom{-z-1}{n-1-y-z} \\ &= (-1)^{x+n-1-z} \binom{x-z-1}{n-1-z} \\ &= (-1)^{x+n-1-z} \binom{n-1-x}{n-1-z} (-1)^{n-1-z} \\ &= (-1)^x \binom{n-1-x}{n-1-z} \\ &= \bigl( D^\pm(n)J(n)B(n)J(n) \bigr)_{xz}\end{aligned}

as required. The second identity is proved very similarly. $\quad\Box$

Claim 3. For any $n \ge 2$, the matrix $D^\pm(n)J(n)B(n)$ has order $3$ if $n$ is odd and order $6$ if $n$ is even.

Proof. Since $\bigl( D^\pm J B \bigr)_{xy} = (-1)^x \binom{n-1-x}{y}$ we have

\begin{aligned} \bigl( D^\pm & J B D^\pm J B D^\pm J B \bigr)_{xw} \\ &= (-1)^x \sum_y \sum_z (-1)^{y+z} \binom{n-1-x}{y}\binom{n-1-y}{z} \binom{n-1-z}{w} \\ &= (-1)^{x+n-1} \sum_z \Bigl( \sum_y \binom{n-1-x}{y} \binom{-z-1}{n-1-y-z} \Bigr) \binom{n-1-z}{w} \\ &= (-1)^{x+n-1} \sum_z \binom{n-2-x-z}{n-1-z}\binom{n-1-z}{w} \\ &= (-1)^{x} \sum_z (-1)^{z}\binom{x}{n-1-z}\binom{n-1-z}{w} \\ &= (-1)^{x+n-1} \sum_r (-1)^r \binom{x}{r} \binom{r}{w} \\ &= (-1)^{x+n-1} (-1)^x [x=w] = (-1)^{n-1}[x=w]. \end{aligned}

It is easily seen that the matrix is not a scalar multiple of the identity; since its cube is $(-1)^{n-1}I$, it therefore has the claimed order. $\quad\Box$
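The orders in Claim 3 can also be confirmed computationally. The following Python sketch (an illustration, not part of the original post) computes the multiplicative order of $D^\pm(n)J(n)B(n)$ by repeated multiplication:

```python
from math import comb

def DJB(n):
    # (D^pm J B)_{xy} = (-1)^x * C(n-1-x, y)
    return [[(-1) ** x * comb(n - 1 - x, y) for y in range(n)] for x in range(n)]

def order(M):
    # multiplicative order of an integer matrix (assumed finite)
    n = len(M)
    I = [[int(x == y) for y in range(n)] for x in range(n)]
    P, k = M, 1
    while P != I:
        P = [[sum(P[x][y] * M[y][z] for y in range(n)) for z in range(n)]
             for x in range(n)]
        k += 1
        assert k <= 100   # guard against an unexpectedly infinite order
    return k

# order 3 for odd n, order 6 for even n
for n in range(2, 9):
    assert order(DJB(n)) == (3 if n % 2 == 1 else 6)
```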

Alternative proof. Another proof uses Claims 1 and 2, as follows:

\begin{aligned} (JBD^\pm)^3 &= JBD^\pm JB (D^\pm J) B D^\pm \\ &= (-1)^{n-1} JB\bigl( D^\pm JBJ\bigr) D^\pm BD^\pm \\ &= (-1)^{n-1} JB(B^{-1}JB) D^\pm BD^\pm \\ &= (-1)^{n-1}BD^\pm BD^\pm \\ &= (-1)^{n-1}I\end{aligned}

and the proof concludes as before. $\quad\Box$

Let $X^\circ$ denote the half-turn rotation of an $n \times n$ matrix $X$, as defined by $X^\circ = J(n)XJ(n)$. By Claim 3,

$B(n)D^\pm(n)B(n)^\circ D^\pm(n) = BD^\pm J B J D^\pm$

is conjugate to $JD^\pm BD^\pm JB$ and so to $(-1)^{n-1} (D^\pm JB)(D^\pm JB)$. Hence this matrix has order $3$ when $n$ is odd and order $6$ when $n$ is even. We state without proof some further identities along these lines.

Claim 4. The matrix $B(n)D^\pm(n)B_n(n)^\circ D^\pm(n)$ has order $3$. If $n$ is even then $B(n)D^\pm(n)B_1(n)^\circ D^\pm(n)$ has order $12$. If $n$ is odd then $B_1(n)D^\pm(n)B_1(n)^\circ D^\pm(n)$ has order $3$.

The second case seems particularly remarkable. There are some obstructions (related to root systems) to integer matrices having finite order. These identities were discovered as a spin-off of a somewhat involved construction with linear algebra; I have no idea how to motivate them in any other way. For instance, how, looking at

$B(6)D^\pm(6)B_1(6)^\circ D^\pm(6) = \left( \begin{matrix} 1 & -6 & 15 & -20 & 15 & -6 \\ 1 & -5 & 10 & -10 & 5 & -1 \\ 1 & -4 & 6 & -4 & 1 & 0 \\ 1 & -3 & 3 & -1 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & 0 & 0 \end{matrix} \right)$

would one guess that it has order $12$? A computer search found many more examples involving more lengthy products of the signed binomial matrices and their rotations, for instance

$B(5)D^\pm B_4(5)^\circ D^\pm B(5)D^\pm B_8(5)^\circ D^\pm$

has order $12$.
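The order-12 claim can be checked directly. The sketch below (mine, not from the post) builds the product $B(6)D^\pm(6)B_1(6)^\circ D^\pm(6)$ from Claim 4, checks its first and last rows against the matrix displayed above, and computes its multiplicative order:

```python
from math import comb
from functools import reduce

n = 6

def mul(*Ms):
    # left-to-right product of n x n integer matrices
    return reduce(lambda A, C: [[sum(A[x][y] * C[y][z] for y in range(n))
                                 for z in range(n)] for x in range(n)], Ms)

B  = [[comb(x, y) for y in range(n)] for x in range(n)]
B1 = [[comb(x + 1, y + 1) for y in range(n)] for x in range(n)]
D  = [[(-1) ** x * int(x == y) for y in range(n)] for x in range(n)]
J  = [[int(x + y == n - 1) for y in range(n)] for x in range(n)]
I  = [[int(x == y) for y in range(n)] for x in range(n)]

M = mul(B, D, mul(J, B1, J), D)            # B(6) D^pm(6) B_1(6)^circ D^pm(6)
assert M[0] == [1, -6, 15, -20, 15, -6]    # first row of the display above
assert M[5] == [1, -1, 0, 0, 0, 0]         # last row of the display above

P, k = M, 1
while P != I:                              # multiplicative order by iteration
    P, k = mul(P, M), k + 1
    assert k <= 100                        # guard: the order should be finite
print(k)
```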

## Completely monotone sequences

December 20, 2020

A sequence $\lambda_0, \lambda_1, \lambda_2, \ldots$ is monotone decreasing if $\lambda_0 \ge \lambda_1 \ge \lambda_2 \ge \ldots$, or equivalently, if $\lambda_j - \lambda_{j+1} \ge 0$ for all $j \in \mathbb{N}_0$. We will decide once and for all to deal with decreasing sequences only, and say that such a sequence is completely monotone if all its iterated differences, including the zeroth differences, are non-negative. That is, $\lambda_j \ge 0$, $\lambda_j - \lambda_{j+1} \ge 0$, $\lambda_j - 2\lambda_{j+1} + \lambda_{j+2} \ge 0$, $\lambda_j - 3\lambda_{j+1} + 3\lambda_{j+2} - \lambda_{j+3} \ge 0$ for all $j \in \mathbb{N}_0$, and so on. Equivalently,

$\displaystyle \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_{j+i} \ge 0$

for all $j, k \in \mathbb{N}_0$. One family of examples I stumbled across, in what seemed like a completely unrelated context of random walks defined on posets with involution, is the two-parameter family

$\displaystyle \lambda^{(a,b)}_j = \frac{\binom{a+b}{a}(a+b+1)}{\binom{a+b+j}{b}(a+b+j+1)}.$

For instance, $\lambda^{(0,0)}_j = \frac{1}{j+1}$ for each $j \in \mathbb{N}_0$. For a direct proof that this sequence is completely monotone, we use the $\beta$-integral $\int_0^1 x^a(1-x)^b \mathrm{d}x = \frac{1}{\binom{a+b}{a}(a+b+1)}$ to write

\displaystyle\begin{aligned}\sum_{i=0}^k& (-1)^i \binom{k}{i} \frac{\lambda^{(a,b)}_{j+i}}{\binom{a+b}{a}(a+b+1)} \\ &= \sum_{i=0}^k (-1)^i \binom{k}{i} \int_0^1 x^{a+j+i}(1-x)^b \mathrm{d}x \\ &= \int_0^1 x^{a+j}(1-x)^{b+k} \mathrm{d}x \\ &= \frac{1}{\binom{a+b+j+k}{b+k}(a+b+j+k+1)} \\ &= \frac{\lambda^{(a,b+k)}_j}{\binom{a+b+k}{a}(a+b+k+1)}.\end{aligned}
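This identity, and hence the complete monotonicity of $\lambda^{(a,b)}$, can be verified in exact arithmetic. A small Python check (an illustration, assuming the definitions above) using `fractions.Fraction`:

```python
from fractions import Fraction
from math import comb

def lam(a, b, j):
    # lambda^{(a,b)}_j = C(a+b,a)(a+b+1) / ( C(a+b+j,b)(a+b+j+1) )
    return Fraction(comb(a + b, a) * (a + b + 1),
                    comb(a + b + j, b) * (a + b + j + 1))

def diff(a, b, j, k):
    # k-th iterated difference: sum_i (-1)^i C(k,i) lambda_{j+i}
    return sum((-1) ** i * comb(k, i) * lam(a, b, j + i) for i in range(k + 1))

for a in range(3):
    for b in range(3):
        for j in range(4):
            for k in range(4):
                d = diff(a, b, j, k)
                assert d >= 0   # complete monotonicity
                # the beta-integral computation identifies the difference exactly
                assert d * comb(a + b + k, a) * (a + b + k + 1) == \
                       lam(a, b + k, j) * comb(a + b, a) * (a + b + 1)
```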

In the same context, I came across some multivariable polynomials that I conjecture are always positive when evaluated at a completely monotone sequence. These polynomials include

\displaystyle \begin{aligned}f_1(x_0,x_1) &= x_0-x_1 \\ f_2(x_0,x_1,x_2) &= x_0^2 - 2x_0x_1 + 2x_0x_2 - x_1x_2 \\ f_3(x_0,x_1,x_2,x_3) &= x_0^3 - 3 x_0^2 x_1 + 5 x_0^2 x_2 - 3 x_0 x_1 x_2\\ & \quad - 3 x_0^2 x_3 + 5 x_0 x_1 x_3 - 3 x_0 x_2 x_3 + x_1 x_2 x_3. \end{aligned}

Their positivity is obvious for $f_1$, because $f_1$ is equal to a difference. For $f_2$ positivity follows from

$\displaystyle f_2(x_0,x_1,x_2) = x_0(x_0-2x_1+x_2) + (x_0-x_1)x_2.$

Despite much struggling, I was unable to find a similar expression for $f_3$ as a sum of products of positive differences. The main purpose of this post is to show that such expressions exist for linear functions, but not in general. I therefore may well have been on a wild-goose chase, hardly for the first time.

#### Linear polynomials

Fix $n \in \mathbb{N}$. Let $\mathcal{C} \subseteq \mathbb{R}^n$ be the cone of all completely monotone sequences, as defined by the inequalities at the top of this post. Let $\mathcal{D} \subseteq \mathbb{R}^n$ be the cone spanned by the coefficients in these inequalities.

To make these cones more explicit, the following notation is helpful. Let $\Delta^k \lambda_j = \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_{j+i}$. Then $\Delta^{k-1} \lambda_j - \Delta^{k-1} \lambda_{j+1} = \Delta^{k} \lambda_j$, and so $\Delta^k \lambda_{n-1-k} + \Delta^{k-1} \lambda_{n-k} = \Delta^{k-1}\lambda_{n-1-k}$. It follows inductively that $\mathcal{D}$ is the set of non-negative linear combinations of the vectors $v^{(k)}$ defined by

$v^{(k)}_{n-1-i} = (-1)^i\binom{k}{i}.$

For instance, if $n=5$ then $v^{(0)}, v^{(1)}, v^{(2)}, v^{(3)}, v^{(4)}$ are the rows of the $5 \times 5$ matrix below

$\left( \begin{matrix} \cdot & \cdot & \cdot & \cdot & 1 \\ \cdot & \cdot & \cdot & 1 & -1 \\ \cdot & \cdot & 1 & -2 & 1 \\ \cdot & 1 & -3 & 3 & -1 \\ 1 & -4 & 6 & -4 & 1 \end{matrix} \right)$

Suppose that $\sum_{i=0}^{n-1} a_i \lambda_i \ge 0$ for all completely monotone sequences. That is, $a \cdot \lambda \ge 0$ for all $\lambda \in \mathcal{C}$; we write this as $a \cdot \mathcal{C} \ge 0$. By Farkas’ Lemma applied to the cone $\mathcal{D}$, either

• $a \in \mathcal{D}$, or
• there exists $\lambda \in \mathbb{R}^n$ such that $\lambda \cdot \mathcal{D} \ge 0$ and $a \cdot \lambda < 0$.

In the second case, since $\lambda \cdot \mathcal{D} \ge 0$, the sequence $\lambda$ is completely monotone. But then $a \cdot \lambda < 0$ contradicts the hypothesis on $a$. Therefore $a \in \mathcal{D}$. So we have a simple necessary and sufficient condition for a linear polynomial to be non-negative on all completely monotone sequences: this is the case if and only if its coefficient vector is a non-negative linear combination of the vectors $v^{(0)}, v^{(1)}, \ldots, v^{(n-1)}$.

I had hoped that something similar might work for quadratic and higher degree polynomials, based on analogously defined cones coming from the coefficients of products of the differences. Here is a counterexample to this hope.

Take $n=3$ and order monomials lexicographically $x_0^2, x_0x_1, x_0x_2, x_1^2, x_1x_2, x_2^2$. The homogeneous quadratics in the difference polynomials $x_0, x_1,x_2, x_0-x_1,x_1-x_2,x_0-2x_1+x_2$ span the same subspace as the rows of the matrix below.

$\left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 1 & 0 & -2 & 1 \\ 0 & 1 & -1 & -2 & 3 & -1 \\ 1 & -4 & 2 & 4 & -4 & 1 \end{matrix}\right)$

For instance, the penultimate row has the coefficients for $(\lambda_0-2\lambda_1+\lambda_2)(\lambda_1-\lambda_2)$. Define
$g(x_0,x_1,x_2) = (x_0-x_1)(x_0-x_1-x_2)+x_1^2$.

Claim. $g(\lambda_0, \lambda_1, \lambda_2) \ge 0$ for all completely monotone sequences $(\lambda_0,\lambda_1,\lambda_2)$.

Proof. Since the polynomial is homogeneous, we may assume that $\lambda_0 = 1$. If $\lambda_1 = 1$ then $g(1,1,\lambda_2) = 1 > 0$, so we may also assume $\lambda_1 < 1$. Since $g$ is linear in $x_2$, when evaluated at $1,\lambda_1,\lambda_2$, it is negative if and only if

$\displaystyle \lambda_2 > \frac{(1-\lambda_1)^2+\lambda_1^2}{1-\lambda_1} = \frac{1-2\lambda_1 + 2\lambda_1^2}{1-\lambda_1}.$

This inequality implies that

$\displaystyle \lambda_1 - \lambda_2 \le \frac{-1 + 3\lambda_1 - 3\lambda_1^2}{1-\lambda_1}$

but the quadratic $1 - 3y + 3y^2 = 3(y-\tfrac{1}{2})^2 + \frac{1}{4}$ is always at least $\frac{1}{4}$, and $0 < 1-\lambda_1 \le 1$. Hence $\lambda_1 -\lambda_2 \le -\frac{1}{4}$, contradicting complete monotonicity. $\qquad\Box$
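As a numerical sanity check of the claim (mine, not from the post), one can sample random completely monotone triples and evaluate $g$:

```python
import random

def g(x0, x1, x2):
    # g(x0, x1, x2) = (x0 - x1)(x0 - x1 - x2) + x1^2
    return (x0 - x1) * (x0 - x1 - x2) + x1 * x1

random.seed(0)
checked = 0
while checked < 1000:
    # sorted descending, so l0 >= l1 >= l2 >= 0 automatically
    l0, l1, l2 = sorted((random.random() for _ in range(3)), reverse=True)
    if l0 - 2 * l1 + l2 >= 0:     # the remaining (second-difference) constraint
        assert g(l0, l1, l2) >= 0
        checked += 1
```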

Claim. $g$ is not a positive linear combination of homogeneous quadratics in the differences $x_0,x_1,x_2$, $x_0-x_1,x_1-x_2$, $x_0-2x_1+x_2$.

Proof. It is equivalent to show that the coefficients of $g$, namely $(1, -2, -1, 2, 1, 0)$, are not in the cone of positive linear combinations of the rows of the $6 \times 6$ matrix above. Using the computer algebra system Magma, one can compute its dual cone, which turns out to be the set of positive linear combinations of the rows of the matrix below.

$\left(\begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 4 & 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 1 & 0 & 0 & 0 \\ 4 & 2 & 0 & 1 & 0 & 0 \\ 4& 3 & 2 & 2 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{matrix}\right)$

In particular, the dual cone contains $(2,1,1,0,0,0)$, and since $(1, -2, -1, 2, 1, 0) \cdot (2,1,1,0,0,0) = -1$, $(1,-2,-1,2,1,0)$ is not in the dual of the dual cone, as required.$\qquad\Box$
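The two pairing facts used in this proof can be checked directly. The snippet below (matrices transcribed from the displays above) verifies that every row of the Magma dual-cone matrix pairs non-negatively with every quadratic product, and that the pairing with the coefficient vector of $g$ is $-1$:

```python
# rows of the quadratic-products matrix (generators of the primal cone)
prods = [
    (0, 0, 0, 0, 0, 1),
    (0, 0, 0, 0, 1, -1),
    (0, 0, 0, 1, -2, 1),
    (0, 0, 1, 0, -2, 1),
    (0, 1, -1, -2, 3, -1),
    (1, -4, 2, 4, -4, 1),
]
# rows of the dual-cone matrix reported by Magma
dual = [
    (1, 0, 0, 0, 0, 0),
    (4, 1, 0, 0, 0, 0),
    (2, 1, 1, 0, 0, 0),
    (4, 2, 0, 1, 0, 0),
    (4, 3, 2, 2, 1, 0),
    (1, 1, 1, 1, 1, 1),
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# every dual row pairs non-negatively with every generator
assert all(dot(d, p) >= 0 for d in dual for p in prods)

g_coeffs = (1, -2, -1, 2, 1, 0)        # coefficients of g in the lex order
assert dot(g_coeffs, dual[2]) == -1    # so g is not in the primal cone
```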

I suspect that there is no easy way (or maybe even no algorithmic way?) to decide if a homogeneous polynomial in a completely monotone sequence is non-negative, but would be very happy to be proved wrong on this.

## Immanants and representations of the infinite symmetric group

October 12, 2020

This is a reminder to the author of a wild question. (I won’t even call it an idea.) Given an $n \times n$ matrix $X$ and a symmetric group character $\chi$, the immanant $d_\chi(X)$ is defined by

$\displaystyle d_\chi(X) = \sum_{\sigma \in \mathrm{Sym}_n} \chi(\sigma) \prod_{i=1}^n X_{i,\sigma(i)}.$

Thus when $\chi$ is the sign character the immanant is the familiar determinant, and when $\chi$ is the trivial character, it is the permanent, of interest partly because of the Permanent dominance conjecture and its starring role in Valiant’s programme to prove the algebraic analogue of the P $\not=$ NP conjecture. Very roughly stated, a more general conjecture is that all immanants, except for the determinant (or multiples of it), are hard to compute.
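For concreteness, here is a naive Python implementation of the immanant (an illustration; permutations are taken in one-line notation). With the sign character it reduces to the determinant, and with the trivial character to the permanent:

```python
from itertools import permutations
from math import prod

def sign(perm):
    # sign of a permutation in one-line notation, via its cycle type
    s, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            j, cyc = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                cyc += 1
            s *= (-1) ** (cyc - 1)
    return s

def immanant(X, chi):
    # d_chi(X) = sum over sigma of chi(sigma) * prod_i X[i][sigma(i)]
    n = len(X)
    return sum(chi(p) * prod(X[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

X = [[1, 2], [3, 4]]
assert immanant(X, sign) == -2             # the determinant
assert immanant(X, lambda p: 1) == 10      # the permanent
```

Of course this brute-force sum is exponential in $n$; the point of the conjecture quoted above is that for most characters no essentially better algorithm should exist.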

My wild question is: can one generalize immanants to representations of infinite symmetric groups, and does this give any extra freedom to, in some sense, ‘interpolate’ from the trivial to the sign character?

## A game on tournaments with multiple edges

August 21, 2020

Here is a quick post to record a combinatorial game that I think might be of interest, but I don’t have time to think about seriously now.

When Person A plays Person B on the online chess server on which I spend far too much time, the server looks to see who has the longest run of consecutive games as black, and chooses that person to be white. Ties are broken randomly. This is a simple but effective rule: for instance it guarantees that in a series of games, the players alternate colours.

Now suppose a team of people want to manipulate the server into giving one of their members a long run of consecutive games as white. (This would of course be strictly against the server policies.) How big must the team be to guarantee that some member gets $n$ consecutive games as white?

A quick look at the small cases reveals the answer. With three people, having arbitrary past histories (this matters below), A plays B and the black player, B say, then plays C. If B is white, then when A plays B again, either A or B establishes a run of two games as white. If B is black (for instance this always happens if C has had a run of two black games), then when A plays C, either A or C gets the same run. Note that in the first case, A never plays C, but instead plays B twice.

With four people, we use the strategy above to give A a run of two white games, and then using B, C, D, give B say, another run of two white games. Now when A and B play, a run of three white games is established. Continuing inductively we see that $n$ people can force $n-1$ consecutive white games.

One might also ask for the analogous result where multiple games between the same people are not allowed. If the aim is instead to maximize the number of games as white, then we obtain the combinatorial game where the ‘Picker’ picks an edge of the complete graph $K_n$ and the ‘Chooser’ chooses its orientation. Since in the end all edges are picked exactly once, the role of the ‘Picker’ is defunct: the Chooser can choose an $n$ vertex tournament before play begins, and follow its recommendations. By Theorem 1.2 in this article of Alspach and Gavlas, when $n$ is odd, $K_n$ can be decomposed into $n(n-1)/2m$ cycles of length $m$ if and only if $3 \le m \le n$ and $m$ divides $n(n-1)/2$. Directing each cycle in an arbitrary way then gives a tournament where every vertex has the same in- and out-degree. A simple solution when $n$ is prime is to take $m=n$ and use the cycles $(0,d,2d,\ldots,(n-1)d)$ for $1 \le d \le (n-1)/2$ on the vertex set $\{0,1,\ldots, n-1\}$, working modulo $n$. When $n$ is even, a discrepancy of $1$ between the in- and out-degrees of every vertex is inevitable, and it’s not hard to use the construction for $n$ odd to see that this can be achieved.
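The prime-case construction above is easy to check in code. This sketch (mine, for illustration) builds the directed cycles modulo $n = 7$ and confirms that every pair plays exactly once and every player is white exactly $(n-1)/2$ times:

```python
n = 7   # any odd prime works for this construction
edges = set()
for d in range(1, (n - 1) // 2 + 1):
    # direct the Hamiltonian cycle (0, d, 2d, ..., (n-1)d) modulo n
    for i in range(n):
        edges.add((i * d % n, (i + 1) * d % n))

# every unordered pair appears exactly once, in exactly one direction
assert len(edges) == n * (n - 1) // 2
assert all((a, b) in edges or (b, a) in edges
           for a in range(n) for b in range(a + 1, n))

# every vertex has out-degree (n-1)/2, i.e. each player is white (n-1)/2 times
outdeg = [sum(1 for (a, b) in edges if a == v) for v in range(n)]
assert outdeg == [(n - 1) // 2] * n
```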

Returning to the original setting where we care about runs of white games, this shows that the Chooser can guarantee the longest run is at most the maximum out-degree, namely $\lfloor n/2 \rfloor$.

I’ll end with two natural questions suggested by this quick analysis.

#### Question 1

Let $G$ be a finite graph. In each step the Picker chooses an edge of $G$, and the Chooser directs it; multiple picks are allowed, and the Chooser may choose a different orientation on different picks. Say that vertex $v$ has a run of length $m$ if in the most recent $m$ picks involving this vertex, it was always the source. What is the maximum run length that the Picker can force somewhere in this graph?

#### Question 2

As Question 1, but now each edge may only be picked once.

For Question 1 in the case of the complete graph, the analysis above shows that the Picker can force a run of length $n-1$, and since the Chooser can make the next directed edge involving this vertex an in-edge, this is best possible. For Question 2 in the case of the complete graph, the Picker cannot do better than $\lfloor n/2 \rfloor$, but I do not know if this is always possible.

## Flipped classroom and online learning: Personal FAQ

June 7, 2020

I spent most of June 2 and 3 attending two meetings about online learning: Flexible education (organized by Royal Holloway), at which a College-wide teaching model was unveiled, and Teaching and Learning Mathematics Online (organized by Michael Grove, Rachel Hilliam and Kevin Houston). A recurring theme was ‘Active Blended Learning’ or the ‘Flipped Classroom’. The purpose of this post is to record some observations made by speakers at these talks and my reflections on the research literature on these pedagogic models. Please may I emphasise that this post has absolutely no official status.

#### Does the College model require the flipped classroom?

This isn’t said explicitly, but I think it is implicit throughout. For example in the seven step programme that we are supposed to follow for each ‘teaching week’, we read

• 3. Engage with learning material. Provides time for students to independently or in groups to engage with learning materials.
• 4. Learning activity. Enables the practical and critical application of learning through individual or group activities.
• 5. Learning Check. Allows the student and lecturer(s) to check that the student learned the content and achieved the learning outcome(s) for the week through an activity or formative/summative assessment.

All this seems to sit fairly happily in a framework where students are expected to work off-line and then come together for synchronous problem solving sessions, maybe checking their understanding before/after by an online quiz. It does not, as I see it, fit with the traditional model of three live lectures per week.

#### What is the research evidence for the flipped classroom and active blended learning?

Freeman et al, Active learning increases student performance in science, engineering, and mathematics, PNAS 111 (2014) 8410–8415 is a meta-analysis of 225 studies that compared student performance in courses lectured in the traditional style with student performance in courses with ‘at least some active learning’. The conclusion is clear: active learning improved student performance by about 1/2 of a standard deviation.

More strikingly, the average failure rate was 21.8% under active learning compared to 33.8% under traditional lecturing. Measures of student engagement and satisfaction were not considered in this meta-analysis; my own guess is that the reduction in failure rate is correlated with greater engagement by the weaker students.

A survey talk by Robert Talbert, updating the conclusions of his 2017 book Flipped Learning: A Guide for Higher Education, also has clear conclusions. From the linked slides (emphasis preserved):

• Students in FL courses typically show either greater gains in measures of learning than students in traditional courses or else the differences are not statistically significant (slide 16).
• Students show higher satisfaction with FL and the active learning techniques once FL is in place (slide 18).
• Students are often highly negative about FL when it is first introduced, ‘even while acknowledging benefits of increased group work, more instructor attention, and better grades’ (slide 18).

In Jacqueline O’Flaherty and Craig Phillips, The use of flipped classrooms in higher education, Internet and Higher Education 25 (2015) 85–95, the authors conclude in Section 4.4:

Our review indicates a number of positive student learning outcomes from this pedagogical approach,

This review found very few studies that actually demonstrated robust evidence to support that the flipped learning approach is more effective than conventional teaching methods. Only one study used empirical validation to show that a structured flipped classroom in comparison to the traditional one could effectively engage students in deep learning [Hung, Flipping the classroom for English language learners to foster active learning]. Whilst some studies referred to a modest improvement of academic performance, through outcomes of increased examination scores or improved student satisfaction, further research is required in this area …

The cited study was a randomised controlled trial comparing two flipped models with the traditional lecture model on the same content on 75 students. Taking this as the minimum standard for a reliable study is clearly a high (but not unreasonable) barrier when collecting evidence.

Finally I’ll mention a study by James McEvoy, Interactive problem-solving sessions in an introductory bioscience course engaged students and gave them feedback, but did not increase their exam scores. James ‘partially-flipped’ a Royal Holloway biology course by replacing one of two weekly lectures with an interactive problem solving class (see page 5 for details of how this was run). After the change there was a significant improvement in student responses to the two questions ‘The teaching style was engaging‘ and ‘I received feedback on my progress during the course‘. However other measures of student engagement, and (as the title makes clear) exam performance were not significantly affected; there was however a reduction in failure rate.

McEvoy’s findings are consistent with the Freeman et al study and my belief that the flipped classroom may benefit the weakest students the most, by helping them to get somewhere, rather than nowhere (as alas, is too often the case in mathematics courses).

#### What is ‘flipped learning’ anyway?

Talbert’s suggested definition (see about half way down) is

Flipped Learning is a pedagogical approach in which first contact with new concepts moves from the group learning space to the individual learning space in the form of structured activity, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter.

He is clear that flipped learning does not necessarily require videos, and that a methodological flaw in some metastudies is that they exclude teaching models where videos were not used. Neither does flipped learning necessarily require lecturing. Instead a key feature is that the first contact with new concepts comes from a structured activity, done on one’s own. On my reading, the instructor’s duty is to plan this activity, and then enable students to apply the ideas creatively in individual and group work. Reassuringly (see slide 20), he reports

Implementation matters but not as much as simply offloading direct instruction into structured student activity and using the class time for active learning.

#### How does one make the flipped classroom work?

Here are some points that I jotted down from the various talks. Sue Pawley’s talk at TALMO Is there anyone out there? A guide to interactive activities in the online environment was especially useful.

• Make all live sessions worth attending. Don’t deliver the lecture (again), but instead get students to work in groups, learning from each other and you. Make live sessions motivating. Plan your teaching. Think context not content.
• Getting feedback from students is difficult: do not expect many students to speak in front of their peers. They will never speak unless recording is switched off. Quizzes are good and technologically easy. I want a system where students can collaborate in small groups on a ‘digital whiteboard’; at the moment this seems to require high-end tablets.
• Give students a ‘scaffold’. This was stressed by speakers at both events. Do not just say ‘read this’. Do not bombard students with content. Instead say ‘read this, watch that, do this quiz to check your (basic) understanding’.
• Do not use peer review since this just adds to the fear of being wrong. But critiquing anonymous wrong answers (we could even fabricate them) can be useful.
• As an example of what can be done, when everyone involved has a good internet connection and a latest-model iPad Pro, this video has three of the Royal Holloway mathematics staff (two of them pretending to be students) collaborating on a simple geometry problem.
• One thought from me: in all the online talks, the convener collated questions in the chat and then selected the most useful (or most upvoted) to ask. This seems a clear improvement on the traditional model where the most assertive person gets to set the initial subject and tone of the questions.

#### What ideas are there for quiz questions?

George Kinnear’s talk at TALMO Using quizzes to deliver a course online had many useful ideas, going beyond the usual multiple choice model.

• Faded worked examples: leave gaps in a model proof or solution for students to fill in.
• Invite students to create examples (I think they will need a lot of hand-holding).
• The ‘test effect’: recalling information from memory is beneficial (in the context of the always-connected internet generation, I think this deserves saying). For instance, before a section on integration, he got students to make a table of important functions (left column) and their derivatives (right column).

I very much hope we will be able to use the STACK computer algebra system (which integrates with Moodle) to set questions. It can validate quite complicated algebraic answers and give immediate feedback. For instance, it was used to mark the integration quiz above, and even suggest important functions the students had not used.

My own view is that quiz questions used to review material should be very basic, while questions in interactive workshops should be much tougher and focus on potential misconceptions and common errors.

All the talks I’ve been to were on fairly basic courses: this includes Kevin Houston’s talk at an LMS Education Day in 2016. There is little research on flipping advanced courses. I believe it can work, e.g. Kinnear’s idea of the faded worked example can be adapted to the faded proof. And advanced courses offer many opportunities for non-trivial (for students) questions about relaxing hypotheses, or strengthening conclusions, as well as reversing implications, chaining implications, spotting contrapositive restatements that are helpful, and so on.

#### Is this consistent with academic freedom?

I hope so, but at a recent EPMS meeting, the Head of School admitted some freedom would be lost. Personally I’m keen on moving to a flipped model, but I will defend to the (academic) death the freedom of colleagues to teach as they see fit, even when I firmly believe what they are proposing will be ineffective. As the slide below from James McEvoy’s talk shows, the lecturer’s preference for the flipped/traditional model is correlated with student performance.

I’m also concerned about the lack of consultation over the College’s model: I dislike having it patiently explained to me that what they’ve decided is for the best. (Of course consultation is not the same as listening, as we all know well.) By contrast, the Maths model has been extensively consulted on and we are still considering how some parts, e.g. group work, will work in the light of concerns of colleagues.

## Athena SWAN: Personal FAQ

May 5, 2020

Yesterday I finished writing the draft Athena SWAN submission for the Royal Holloway Mathematics and Information Security Group and sent it to members of the two departments for comments.

In this post I’ll collect the gist of those replies I’ve made to comments that might be of general interest (and can be made public), and a few extra ‘non-FAQS’, that I hope will still be of interest. The post ends with the references from the bid with clickable hyperlinks.

#### Why is the bid so focused on women and gender? What about disability or LGBT+ people?

The primary purpose of Athena SWAN is to address gender inequalities. In Mathematics and Information Security this means tackling the under-representation of women in almost everything we do.

Royal Holloway was one of the first HEIs to get a Race Equality Charter Mark and some of the proposed actions are aimed at BAME (Black, Asian and Minority Ethnic) students. Another action will promote the College workshop ‘How to be an LGBT ally’ and the (excellent) external SafeZone training. Yes, we should do more, but this is a Bronze bid so only the start of a long process to address inequalities.

#### Are women really under-represented?

I think yes. Mathematics is ahead of many sector norms: for example 40.6% of our undergraduate intake is female, compared to the sector mean of 35.7%; 38.8% of A-level Mathematics students are women. Of our staff 25.0% are women, compared to a sector mean of 20.4%; of professors, 23.1% are women, compared to an appalling sector mean of 12.6%. But all that said, women are half the population, and about 40% of new mathematics graduates are women, so we have very far to go.

#### Isn’t it discrimination to focus so many actions on women?

I argue very firmly no. The question is a ‘non-FAQ’ that I’ve deliberately worded in a pejorative way. (It is impossible to use the word ‘discrimination’ in this context in the positive sense ‘X has a fine discriminating palate for wine’.) By improving our policies and procedures and thinking particularly about women, we very often make life better for everyone. It is not a zero-sum game. It is legitimate to target actions at women to address under-representation. This does not imply that critical decisions, such as recruitment and promotion, will then be biased in favour of women.

#### Why the focus on unconscious bias?

Unconscious bias training was the most frequently requested form of training when we surveyed all staff and Ph.D. students. There is strong evidence that unconscious bias exists and prevents women from achieving their potential. An important early study is Science faculty’s subtle gender biases favor male students by Moss-Racusin et al, which asked scientists to evaluate two CVs for a job as a lab manager. The CVs were identical except in one the candidate’s first name was ‘John’, and in the other ‘Jennifer’. Both men and women rated Jennifer as less competent than John, and recommended a lower starting salary.

There is evidence that unconscious bias training can be effective for reducing unconscious bias (see pages 6 and 16: the overall picture is mixed, but the conclusion is clear). My own experience suggests that high-quality training and reading around the issue has made me more aware of the issues, and at least slightly less likely to rush to (probably poor) conclusions.

I highly recommend the third part of Cordelia Fine’s book Delusions of gender. The first two parts make a very convincing case that many stereotypical gender traits are not hard-wired, but instead products of culture and upbringing, or even (on closer inspection) non-existent. The final part examines how our remorselessly gendered society creates these biases and misconceptions.

#### What is unconscious bias?

First of all, I prefer the term ‘implicit bias’, since one can wrongly interpret ‘unconscious bias’ as referring to something that is independent of our thought processes and beyond our control.

Let me introduce my personal answer with an object that should be emotionally neutral and familiar to readers, namely ‘a vector space’. What comes into your head? Is it a finite dimensional vector space over $\mathbb{R}$, such as the familiar Euclidean space $\mathbb{R}^3$, or (my answer) an indeterminately large space over a finite field: $\mathbb{F}_q^n$? Or perhaps the most important vector spaces you meet are function spaces, in which case you might be imagining Hilbert space, with or without a preferred orthonormal basis. Yet other answers are possible: someone working in cryptography might think of $\mathbb{F}_2^{256}$. Quite possibly, I’ve missed your preferred example completely. Or maybe your brain just doesn’t work this way, and all you think about is the abstract definition of vector spaces and your immediate associations are to the main theorems you use when working with them. Anyway, my point is that we don’t think about vector spaces ‘in isolation’: instead they come with a bundle of implicit associations that are deeply shaped by our education and day-to-day experience.

Now instead think about ‘Mathematics professor’. Without claiming the thought processes are completely analogous, I hope you will agree that something similar goes on, with a bunch of implicit associations coming into our heads. For instance, I immediately start thinking about some of my professorial colleagues in Mathematics and ISG. In this respect I’m lucky: because I have personal examples to draw on, my immediate mental image is not the stereotypical old white man.

Taking this as a roughly accurate portrait of human cognition, we now see a mechanism in which bias can enter our decisions. For instance, in the Stanford lab manager study, the implicit associations around the word ‘manager’ bring to mind men, and so the male candidate is favoured. I suspect either you will readily accept this point, or feel it is completely unwarranted, so I won’t argue it any further, but instead refer you to the literature.

#### What about the Implicit Association Test?

My reading suggests that the Implicit Association Test is valuable as a way of raising awareness of implicit bias. But it has been much criticised, and it is not clear that the biases it identifies translate into unfair discrimination.

#### Isn’t this a huge piece of bureaucracy?

A ‘non-FAQ’, although the question has occurred to others. The answer is ‘Yes’. A recent report makes many criticisms of the Athena SWAN process. For instance, from the summary on page 3:

The application process must be streamlined and the administrative burden on staff, particularly female staff, reduced.

For what it’s worth, I think I could have written a major grant application or completed a substantial research project in the time it took just to draft the submission. Even this rough measure takes no account of the hours of time (not just mine) spent consulting over draft actions and the many weeks of work that the College’s Equality and Diversity Coordinator put into the bid.

#### What’s the point of doing all this when it clearly wouldn’t address Y (where Y is the manifest injustice of your choice)?

Just because it (probably) wouldn’t have prevented Y, doesn’t mean it isn’t worth doing for other reasons.

#### No seriously, what are the consequences if we don’t have an Athena SWAN award?

RCUK (the main funder for mathematics research) recommends ‘participation in schemes such as Athena SWAN’. There is no requirement to have an award, and my impression (contrary to what I half-expected) is that it is not likely to become a requirement in the near future.

Royal Holloway expects its departments to apply for awards. So if we don’t get it, we will either have to change this policy or work towards a reapplication in a few years. In short, we will be back to where we were two years ago. We could implement the Action Plan anyway, but without the motivation of holding an award, progress might slip.

The Action Plan was formulated after long discussions within the E&D Committee and consulted on widely with department members. All actions are owned by the member of staff most closely involved: this is typically not me (the E&D Champion). I believe it will drive substantial culture improvements in Mathematics and ISG.

#### Do all Athena SWAN applications have references to the research literature on gender equality and feminism?

(A blatant ‘non-FAQ’.) No. In fact ours is the first I’ve seen. Probably it’s also the first Athena SWAN bid in which the Action Plan is generated by a customised database written from scratch in a functional programming language and outputting to LaTeX.

### References

1. Pragya Agarwal, SWAY: Unravelling unconscious bias, Bloomsbury, 2020.
2. Robert W. Aldritch et al, Black, Asian and Minority Ethnic groups in England are at increased risk of death from COVID-19: indirect standardisation of NHS mortality data [version 1; peer review: awaiting peer review], Wellcome Open Research, Coronavirus (COVID-19) collection, 6 May 2020.
3. Doyin Atewologun, Tinu Cornish and Fatima Tresh, Unconscious bias training: An assessment of the evidence for effectiveness, Equality and Human Rights Commission research report 113, March 2018.
4. Athena SWAN Charter Review Independent Steering Group for Advance HE, The Future of Athena SWAN, March 2020.
5. Anne Boring, Kellie Ottoboni and Philip B. Stark, Student evaluations of teaching (mostly) do not measure teaching effectiveness, ScienceOpen Research (2016).
6. Caroline Criado-Perez, Invisible Women: Exposing Data Bias in a World Designed for Men, Chatto & Windus, 2019.
7. Cordelia Fine, Delusions of Gender, Icon Books Ltd, 2010.
8. Cordelia Fine, Testosterone Rex, Icon Books Ltd, 2017.
9. Uta Frith, Understanding unconscious bias. Royal Society, 2015.
10. Cassandra M. Guarino and Victor M. H. Borden, Faculty service loads and gender: are women taking care of the academic family?, Research in Higher Education (2017) 58 672–694.
11. Nancy Hopkins, Diversification of a university faculty: Observations on hiring women faculty in the Schools of Science and Engineering at MIT, MIT Faculty Newsletter (2006) XVIII.
12. Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsman, Science faculty’s subtle gender biases favor male students, PNAS (2012) 109 16474–16479.
13. Ruth Pearce, Certifying equality? Critical reflections on Athena SWAN and equality accreditation, report for Centre for Women and Gender, University of Warwick, July 2017.
14. Research Excellence Framework, Guidance on submissions 2021, January 2019.
15. Safezone training.
16. UK Trendence Research, How are Gen Z responding to the coronavirus pandemic?, March 2020.
17. Trades Union Congress, Women and casualization: Women’s experience of job insecurity, January 2015.
18. Sandra Tzvetkova and Esteban Ortiz-Ospina, Working women: What determines female labor force participation?, Our World in Data (2017).
19. Liz Whitelegg, Jennifer Dyer and Eugenie Hunsicker, Work allocation models, Athena Forum, January 2018.

## The counter-intuitive behaviour of high-dimensional spaces

April 18, 2020

This post is an extended version of a talk I last gave a few years ago on an extended Summer visit to Bristol; four weeks into lockdown seems like a good time to write it up for a more varied audience. The overall thesis is that it’s hard to have a good intuition for really high-dimensional spaces, and that the reason is that, asked to picture such a space, most of us come far closer to something like $\mathbb{R}^4$ than the more accurate $\mathbb{F}_2^{1000}$. This is reflected in the rule of thumb that the size of a tower of exponents is determined by the number at the top: $1000^2$ is tiny compared to $2^{1000}$ and $2^{2^{2^{100}}}$ should seem only a tiny bit smaller than

$\displaystyle 4^{2^{2^{100}}} = 2^{2^{2^{100}+1}}$

when both are compared with $2^{2^{2^{101}}}$.

Some details in the proofs below are left to the highly optional exercise sheet that accompanies this post. Sources are acknowledged at the end.

As a warm up for my talk, I invited the audience to order the following cardinals:

$2, 2^{56}, 120^{10}, 10^{120}, \mathbb{N}, 2^\mathbb{N}, 2^{2^\mathbb{N}}.$

Of course they are already correctly (and strictly) ordered by size. From the perspective of ‘effective computation’, I claim that $2$ and $2^{56}$ are ‘tiny’, $10^{120}, \mathbb{N}$ and $2^\mathbb{N}$ are ‘huge’ and $120^{10}$ and $2^{2^\mathbb{N}}$ sit somewhere in the middle. To give some hint of the most surprising part of this, interpret $2^\mathbb{N}$ as the Cantor set $C$, so $2^{2^\mathbb{N}}$ is the set of subsets of $C$. Computationally definable subsets of $C$ are then computable predicates on $C$ (i.e. functions $C \rightarrow \{T, F\}$), and since $C$ is compact, it is computable whether two such predicates are equal. In contrast, there is no algorithm that, given two predicates on $\mathbb{N}$, will run for a finite length of time and then return the correct answer.

### Euclidean space and unit balls

Let $B^n = \{x \in \mathbb{R}^n : ||x|| \le 1\}$ be the solid $n$-dimensional unit ball in Euclidean space and let $S^n = \{x \in \mathbb{R}^{n+1} : ||x|| = 1\}$ be the $n$-dimensional surface of $B^{n+1}$.

#### House prices in Sphereland

Imagine a hypothetical ‘Sphereland’ whose inhabitants live uniformly distributed on the surface $S^n$. For instance, $S^2$ is the one-point compactification of Flatland. With a god’s eye view you are at the origin, and survey the world. At what values of the final coordinate are most of the inhabitants to be found?

For example, the diagram below shows the case $n=2$ with two regions of equal height shaded.

It is a remarkable fact, going all the way back to Archimedes, that the surface areas of the red and blue regions are equal: the reduction in cross-section as we go upwards is exactly compensated by the shallower slope. For a calculus proof, observe that in the usual Cartesian coordinate system, we can parametrize the part of the surface with $x=0$ and $z > 0$ by $(0,y,\sqrt{1-y^2})$. Choose $k > 0$. Then since $(0,-z,y)$ is orthogonal to the gradient of the sphere at $(0,y,z)$, to first order, if we increase the height $z$ from $z$ to $z+k$ then we must decrease the second coordinate $y$ from $y$ to $y - \frac{z}{y}k$. This is shown in the diagram below.

Hence the norm squared of the marked line segment tangent to the surface is

$\displaystyle \bigl|\bigl| \bigl(0, -\frac{z}{y}k, k\bigr) \bigr|\bigr|^2 = \frac{z^2k^2}{y^2} + k^2 = k^2 \frac{z^2+y^2}{y^2} = k^2 \frac{1}{1-z^2}.$

As in Archimedes’ argument, the area of the region $R$ between the lines of latitude of height between $z$ and $z+k$ is (to first order in $k$) the product of $k/\sqrt{1-z^2}$ and the circumference of the line of latitude at height $z$. It is therefore

$\displaystyle \frac{k}{\sqrt{1-z^2}} \times 2 \pi \sqrt{1-z^2} = 2\pi k$

which is independent of $z$. As a small check, integrating over $z$ from $-1$ to $1$ we get $4\pi$ for the surface area of the sphere; as expected this is also the surface area of the enclosing cylinder of radius $1$ and height $2$.

This cancellation in the displayed formula above is special to the case $n=2$. For instance, when $n=1$, by the argument just given, the arclength at height $z$ is proportional to $1/\sqrt{1-z^2}$. Therefore in the one-point compactification of Lineland (a place one might feel is already insular enough), from the god’s perspective (considering slices with varying $z$), most of the inhabitants are near the north and south poles ($z = \pm 1$), and almost no-one lives at the equator ($z=0$).

More generally, the surface area of $S^n$ is given by integrating the product of the arclength $1/\sqrt{1-z^2}$ and the surface area of the ‘latitude cross-section’ at height $z$. The latter is the $(n-1)$-dimensional sphere of radius $\sqrt{1-z^2}$. By dimensional analysis, its surface area is $C\bigl( \sqrt{1-z^2}\bigr)^{n-1}$ for some constant $C$. Hence the density of Sphereland inhabitants at height $z$ is proportional to $\bigl( \sqrt{1-z^2} \bigr)^{n-2}$. (A complete proof of this using only elementary calculus is given in Exercise 1 of the problem sheet.) In particular, we see that when $n$ is large, almost all the density is at the equator $z=0$. From the god’s eye perspective, in high-dimensional Sphereland, everyone lives on the equator, or a tiny distance from it. To emphasise this point, here are the probability density functions for $n \in \{1,2,3,5,10,25\}$ using colours red then blue then black.
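
To see this concentration numerically, here is a short Haskell check (my own sketch, not from the original talk: the function names are mine). It integrates the unnormalised density $(1-z^2)^{(n-2)/2}$ with the trapezium rule and estimates the proportion of Sphereland’s population within height $0.1$ of the equator:

```haskell
-- Unnormalised density of inhabitants at height z on the sphere S^n.
density :: Int -> Double -> Double
density n z = (1 - z * z) ** (fromIntegral (n - 2) / 2)

-- Trapezium rule with 10000 steps: good enough for this rough check.
integrate :: (Double -> Double) -> Double -> Double -> Double
integrate f a b = h * (sum [f (a + h * fromIntegral i) | i <- [1 .. steps - 1]]
                         + (f a + f b) / 2)
  where steps = 10000 :: Int
        h     = (b - a) / fromIntegral steps

-- Proportion of the population with |z| < 0.1.
nearEquator :: Int -> Double
nearEquator n = integrate (density n) (-0.1) 0.1 / integrate (density n) (-1) 1
```

For $n = 2$ the density is constant, so nearEquator 2 is exactly $0.1$; for $n = 100$ more than half the population lives in this thin band, and for $n = 1000$ almost everyone does.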

As a further example, the scatter points below show $1000$ random samples from $S^{20}$, taking (left) $(x_1,x_2)$ coordinates and (right) $(x_{20}, x_{21})$ coordinates. Note that almost no points have a coordinate more than $1/2$ (in absolute value). Moreover, since the expected value of each $x_i^2$ is $1/21$, most of the mass is found where $|x_i| \approx 1/\sqrt{21} \approx 0.22$.

In particular, if a random inhabitant of Sphereland for large $n$ is at $(x_1,\ldots, x_{n+1})$ then it is almost certain that most of the $x_i$ are very small.

One feature of this seems deeply unintuitive to me. There is, after all, nothing intrinsic about the $z$-coordinate. Indeed, the god can pick any hyperplane in $\mathbb{R}^{n+1}$ through the origin, and get a similar conclusion.

#### Concentration of measure

One could make the claim about the expected sizes of the coordinates more precise by continuing with differential geometry, but Henry Cohn’s answer to this Mathoverflow question on concentration of measure gives an elegant alternative approach. Let $X_1, \ldots, X_n$ be independent identically distributed normal random variables each with mean $0$ and variance $1/n$. Then $(X_1,\ldots,X_n)$ normalized to have length $1$ is uniformly distributed on $S^{n-1}$. (A nice way to see this is to use that a linear combination of normal random variables is normally distributed to show invariance under the special orthogonal group $\mathrm{SO}_n(\mathbb{R})$.) Moreover, $\mathbb{E}[X_1^2 + \cdots + X_n^2] = 1$, and, using that $\mathbb{E}[X_i^4] = 3/n^2$, we get

$\displaystyle \mathrm{Var}[X_1^2 + \cdots + X_n^2] = n \frac{3}{n^2} + n(n-1) \frac{1}{n^2} - 1 = \frac{2}{n}.$

Hence, by Chebychev’s Inequality, which states that if $Z$ is a random variable then

$\displaystyle \mathbb{P}\bigl[|Z-\mathbb{E}Z| \ge c\bigr] \le \frac{\mathrm{Var} Z}{c^2},$

the probability that $X_1^2 + \cdots + X_n^2 \not\in (1-\epsilon,1+\epsilon)$ is at most $2/n\epsilon^2$, which tends to $0$ as $n \rightarrow \infty$. Therefore we can, with small error, neglect the normalization and regard $(X_1,\ldots, X_n)$ as a uniformly chosen point on $S^{n-1}$. By Markov’s inequality, which states that if $Z$ is a non-negative random variable then $\mathbb{P}[Z > a\mathbb{E}Z] < 1/a$, the probability that $|X_1| \ge a/\sqrt{n}$ (or equivalently, $X_1^2 \ge a^2/n$) is at most $1/a^2$; taking $a = n^{1/6}$, the probability that $|X_1| \ge 1/n^{1/3}$ (or equivalently, $X_1^2 \ge 1/n^{2/3}$) is at most $1/n^{1/3}$. On its own this is too weak to control all $n$ coordinates at once: the union bound gives $n \times 1/n^{1/3} = n^{2/3}$, which is useless. But $\sqrt{n}X_1$ is a standard normal random variable, so the Gaussian tail bound $\mathbb{P}\bigl[|N(0,1)| \ge t\bigr] \le 2e^{-t^2/2}$ gives

$\displaystyle \mathbb{P}\bigl[ |X_1| \ge 1/n^{1/3} \bigr] = \mathbb{P}\bigl[ |N(0,1)| \ge n^{1/6} \bigr] \le 2e^{-n^{1/3}/2}.$

Since $2ne^{-n^{1/3}/2} \rightarrow 0$ as $n \rightarrow \infty$, a union bound over all $n$ coordinates shows that with high probability, a random Sphereland inhabitant has all its coordinates in $(-1/n^{1/3}, 1/n^{1/3})$. I think this makes the counter-intuitive conclusion of the previous subsection even starker.

#### Volume of the unit ball

The ‘length of tangent line times area of cross-section’ argument says that if $A(S^n)$ is the surface area of $S^n$ then

\begin{aligned} \displaystyle A(S^n) &= \int_{-1}^1 \frac{A(S^{n-1}) \sqrt{1-z^2}^{n-1}}{\sqrt{1-z^2}} \mathrm{d} z \\ &= A(S^{n-1}) \int_{-1}^1 \sqrt{1-z^2}^{n-2} \mathrm{d} z. \end{aligned}

A quick trip to Mathematica to evaluate the integral shows that

$\displaystyle A(S^n) = A(S^{n-1}) \frac{\sqrt{\pi}\Gamma(\frac{n}{2})}{\Gamma(\frac{n+1}{2})}.$

It follows easily by induction that $A(S^n) = 2\sqrt{\pi}^{n+1} / \Gamma(\frac{n+1}{2})$. Since $B^n = \bigcup_{0 \le r \le 1} rS^{n-1}$ and $A(rS^{n-1}) = r^{n-1} A(S^{n-1})$, the volume $V_n$ of $B^n$ is

$V_n = \displaystyle A(S^{n-1}) \int_0^1 r^{n-1} \mathrm{d}r = \frac{2\sqrt{\pi}^n}{n\Gamma(\frac{n}{2} )}.$

In particular, from $\Gamma(t) = (t-1)!$ for $t \in \mathbb{N}$ we get $V_{2m} = \pi^m / m!$. Hence $V_{2(m+1)} = \frac{\pi}{(m+1)} V_{2m}$ (is there a quick way to see this?), and the proportion of the cube $[-1,1]^{2m}$ occupied by the ball $B^{2m}$ is

$\displaystyle \frac{(\pi/4)^m}{m!}.$

Thus the proportion tends to $0$, very quickly. I find this somewhat surprising, since my mental picture of a sphere is as a bulging convex object that should somehow ‘fill out’ an enclosing cube. Again my intuition is hopelessly wrong.
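
The formulas above are easy to check by machine; this Haskell fragment (a sketch of mine, with names invented for this post) computes $V_{2m} = \pi^m/m!$ and the proportion $(\pi/4)^m/m!$ of the cube occupied by the ball:

```haskell
factorial :: Int -> Integer
factorial m = product [1 .. fromIntegral m]

-- Volume of the unit ball B^{2m}: V_{2m} = pi^m / m!.
ballVolume :: Int -> Double
ballVolume m = pi ^ m / fromIntegral (factorial m)

-- Proportion of the cube [-1,1]^{2m} occupied by B^{2m}: (pi/4)^m / m!.
cubeProportion :: Int -> Double
cubeProportion m = (pi / 4) ^ m / fromIntegral (factorial m)
```

For $m = 1$ this gives the familiar $\pi/4 \approx 0.785$ for the disc in the square; by $m = 5$ (dimension $10$) the ball occupies only about a quarter of one percent of the cube.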

### Coding Theory

We now turn to discrete spaces. Our friends Alice and Bob must communicate using a noisy channel that can send the bits $0$ and $1$. They agree (in advance) a binary code $C \subseteq \mathbb{F}_2^n$ of length $n$, and a bijection between codewords and messages. When Bob receives a binary word in $\mathbb{F}_2^n$ he decodes it as the message in bijection with the nearest codeword in $C$; if there are several such codewords, then he chooses one at random, and fears the worst. Here ‘nearest’ of course means with respect to Hamming distance.

#### Shannon’s probabilistic model

In this model, each bit flips independently with a fixed crossover probability $p$. If $p = 1/2$ then reliable communication is impossible and if $p > 1/2$ then we can immediately reduce to the case $p < 1/2$ by flipping all bits in a received word. We therefore assume that $p < 1/2$. In this case, Shannon's Noisy Coding Theorem states that the capacity of the binary symmetric channel is $1 - h(p)$, where $h(p) = -p\log_2 p - (1-p) \log_2 (1-p)$ is the binary entropy function. That is, given any $\epsilon > 0$, provided $n$ is sufficiently large, there is a binary code $C \subseteq \mathbb{F}_2^n$ of size $|C| \ge 2^{(1-h(p)-\epsilon)n}$ such that when $C$ is used with nearest neighbour decoding to communicate on the binary symmetric channel, the probability of a decoding error is $\le \epsilon$ (uniformly for every codeword). We outline a proof later.

For example, the theoretical maximum 4G data rate is $10^8$ bits per second. Since $h(1/4) \approx 0.8113$, provided $n$ is sufficiently large, even if one in four bits gets randomly flipped by the network, Shannon’s Noisy Coding Theorem promises that Alice and Bob can communicate reliably using a code of size $2^{0.188n}$. In other words, Alice and Bob can communicate reliably at a rate of up to $18.8$ million bits per second.
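
The rate arithmetic in this example is easy to reproduce. Here is a minimal Haskell sketch (the function names are mine) of the binary entropy function and the corresponding reliable rate:

```haskell
-- Binary entropy h(p) = -p log2 p - (1-p) log2 (1-p), for 0 < p < 1.
entropy :: Double -> Double
entropy p = -p * logBase 2 p - (1 - p) * logBase 2 (1 - p)

-- Shannon capacity of the binary symmetric channel with crossover probability p.
capacity :: Double -> Double
capacity p = 1 - entropy p

-- Reliable rate on a channel carrying rawRate bits per second.
reliableRate :: Double -> Double -> Double
reliableRate rawRate p = rawRate * capacity p
```

Evaluating reliableRate 1e8 0.25 gives approximately $1.887 \times 10^7$ bits per second, as quoted above.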

#### Hamming’s adversarial model

In Hamming’s model the received word differs from the sent word in at most $e$ positions, where these positions are chosen by an adversary to be as inconvenient as possible. Nearest neighbour decoding always succeeds if and only if the minimum distance of the code is at least $2e+1$.

In the binary symmetric channel with crossover probability $p$, when $n$ is large, the chance that the number of flips is more than $(p+\epsilon)n$ is negligible. Therefore a binary code with minimum distance $2e+1$ can be used to communicate reliably on an adversarial binary symmetric channel with crossover probability $p < e/n$, in which the number of flipped bits is always $pn$, and these bits are chosen adversarially.

So how big can a code with minimum distance $d$ be? Let $C$ be such a code and for each $w \in \mathbb{F}_2^{n-2d}$, let

$C_w = \bigl\{(u_1,\ldots, u_{2d}) : u \in C, (u_{2d+1}, \ldots, u_n) = w\bigr\}.$

Observe that each $C_w$ is a binary code of length $2d$ and minimum distance at least $d$. By the Plotkin bound, $|C_w| \le 4d$ for all $w$. (Equality is attained by the Hadamard codes $(2d,4d,d)$.) Since there are $2^{n-2d}$ choices for $w$, we find that

$|C| \le 4d\times 2^{n-2d}.$

The relative rate $\rho(C)$ of a binary code $C$ of length $n$ is defined to be $(\log_2 |C|)/n$. By the bound above, if $C$ is $e$-error correcting (and so its minimum distance satisfies $d \ge 2e+1$) we have

$\displaystyle \rho(C) \le \frac{\log_2 \bigl( (8e+4) 2^{n-4e-2} \bigr)}{n} \le 1 - \frac{4e}{n} + \frac{\log_2 (8e+4)}{n}.$

In particular, if $e/n \ge \frac{1}{4}$ then $\rho(C) \rightarrow 0$ as $n \rightarrow \infty$: the code consists of a vanishingly small proportion of $\mathbb{F}_2^n$. We conclude that if the crossover probability in the channel is $\frac{1}{4}$ or more, fast communication in the Hamming model is impossible.
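
The rate bound is simple to tabulate. This Haskell sketch (names mine) evaluates $\log_2\bigl((8e+4)2^{n-4e-2}\bigr)/n$ for an $e$-error correcting binary code of length $n$:

```haskell
-- Plotkin-based upper bound on the relative rate of an e-error correcting
-- binary code of length n: log2((8e+4) * 2^(n-4e-2)) / n.
rateBound :: Int -> Int -> Double
rateBound n e =
  (logBase 2 (fromIntegral (8 * e + 4)) + fromIntegral (n - 4 * e - 2))
    / fromIntegral n
```

For example, rateBound 1000 100 is about $0.61$, but rateBound 1000 250 is below $0.01$: correcting $n/4$ adversarial errors forces the rate to vanish.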

We have seen that Shannon’s Noisy Coding Theorem promises reliable communication at a rate beyond the Plotkin bound. I hope it is clear that there is no paradox: instead we have demonstrated that communication with random errors is far easier than communication with adversarial errors.

In my view, most textbooks on coding theory do not give enough emphasis to this distinction. They share this feature with the students in my old coding theory course: for many years I set an optional question asking them to resolve the apparent clash between Shannon’s Noisy Coding Theorem and the Plotkin Bound; despite several strong students attempting it, only one ever got close to the explanation above. In fact in most years, the modal answer was ‘mathematics is inconsistent’. Of course as a fully paid-up member of the mathematical establishment, I marked this as incorrect.

#### The structure of large-dimensional vector spaces

I now want to argue that the sharp difference between the Shannon and Hamming models illuminates the structure of $\mathbb{F}_2^n$ when $n$ is large.

When lecturing in the Hamming model, one often draws pictures such as the one below, in which, like a homing missile, a sent codeword inexorably heads towards another codeword (two errors, red arrows), rather than heading in a random direction (two errors, blue arrows).

While accurate for adversarial errors, Shannon’s Noisy Coding Theorem tells us that for random errors, this picture is completely inaccurate. Instead, if $C$ is a large code with minimum distance $pn$ and $u \in C$ is sent over the binary symmetric channel with crossover probability $p$, and $v \in \mathbb{F}_2^n$ is received, while $v$ is at distance about $pn$ from $u$, it is not appreciably closer to any other codeword $u' \in C$. I claim that this situation is possible because $\mathbb{F}_2^n$ is ‘large’, in a sense not captured by the two-dimensional diagram above.

#### Shannon’s Noisy Coding Theorem for the binary symmetric channel

To make the previous paragraph more precise we outline a proof of Shannon’s Noisy Coding Theorem for the binary symmetric channel. The proof, which goes back to Shannon, is a beautiful application of the probabilistic method, made long before this was the standard term for such proofs.

We shall simplify things by replacing the binary symmetric channel with its ‘toy’ version, in which whenever a word $u \in \mathbb{F}_2^n$ is sent, exactly $pn$ bits are chosen uniformly at random to flip. (So we are assuming $pn \in \mathbb{N}$.) By the Law of Large Numbers, this is a good approximation to binomially distributed errors, and it is routine using standard estimates (Chebychev’s Inequality is enough) to modify the proof below so it works for the original channel.

Proof. Fix $\rho < 1 - h(p)$ and let $M = 2^{n \rho}$, where $n$ will be chosen later. Choose $2M$ codewords $U(1), \ldots, U(2M)$ uniformly at random from $\mathbb{F}_2^n$. Let $P_i$ be the probability (in the probability space of the toy binary symmetric channel) that when $U(i)$ is sent, the received word is decoded incorrectly by nearest neighbour decoding. In the toy binary symmetric channel, the received word $v$ is at distance $pn$ from $U(i)$, so an upper bound for $P_i$ is the probability $Q_i$ that when $U(i)$ is sent, there is a codeword $U(j)$ with $j\not=i$ within distance $pn$ of the received word. Note that $U(i)$, $U(j)$, $P_i$ and $Q_i$ are all themselves random variables, defined in the probability space of the random choice of code.

Now $Q_i$ is in turn bounded above by the expected number (over the random choice of code) of codewords $U(j)$ with $j\not=i$ within distance $pn$ of the received word. Since these codewords were chosen independently of $U(i)$ and uniformly from $\mathbb{F}_2^n$, it doesn't matter what the received word is: the expected number of such codewords is simply

$\displaystyle \frac{(2M-1) V_n(pn)}{2^n}$

where $V_n(pn)$ is the volume (i.e. number of words) in the Hamming ball of radius $pn$ about $\mathbf{0} \in \mathbb{F}_2^n$. We can model words in this ball as the output of a random source that emits the bits $0$ and $1$ with probabilities $1-p$ and $p$, respectively. The entropy of this source is $h(p)$, so we expect to be able to compress its $n$-bit outputs to words of length $h(p)n$. Correspondingly,

$V_n(pn) \le 2^{h(p)n}.$

(I find the argument by source coding is good motivation for the inequality, but there is of course a simpler proof using basic probability theory: see the problem sheet.) Hence

$\displaystyle P_i \le \frac{(2M-1)2^{h(p)n}}{2^n} \le 2 \times 2^{(\rho + h(p) - 1)n}.$

Since $\rho < 1-h(p)$, the probability $P_i$ of decoding error when $U(i)$ is sent becomes exponentially small as $n$ becomes large. In particular, the mean probability $P = \frac{1}{2M} \sum_{i=1}^{2M} P_i$ is smaller than $\epsilon / 2$, provided $n$ is sufficiently large. A Markov’s Inequality argument now shows that by throwing away at most half the codewords, we can assume that the probability of decoding error is less than $\epsilon$ for all $M$ remaining codewords. $\Box$
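
The volume bound $V_n(pn) \le 2^{h(p)n}$ used in the proof can be checked directly for small $n$. A Haskell sketch (names mine):

```haskell
-- Binomial coefficient, computed exactly in Integer arithmetic.
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

-- Number of words in the Hamming ball of radius r about 0 in F_2^n.
hammingBallVolume :: Integer -> Integer -> Integer
hammingBallVolume n r = sum [choose n k | k <- [0 .. r]]

-- The entropy bound 2^(h(p) n) with p = r/n, for 0 < r < n.
entropyBound :: Integer -> Integer -> Double
entropyBound n r = 2 ** (h * fromIntegral n)
  where p = fromIntegral r / fromIntegral n
        h = -p * logBase 2 p - (1 - p) * logBase 2 (1 - p)
```

For instance hammingBallVolume 100 25 is comfortably below entropyBound 100 25.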

#### Varying the alphabet

Increasing the size of the alphabet does not change the situation in an important way. In fact, if $\mathbb{F}_2$ is replaced with $\mathbb{F}_p$ for $p$ large then the Singleton Bound, that $|C| \le p^{n-d+1}$, becomes effective, giving another constraint that apparently contradicts Shannon’s Noisy Coding Theorem. There is however one interesting difference: in $\mathbb{F}_2^n$, every binary word has a unique antipodal word, obtained by flipping all its bits, whereas in $\mathbb{F}_p^n$ there are $(p-1)^n$ words at distance $n$ from any given word. This is the best qualitative sense I know in which $\mathbb{F}_2^n$ is smaller than $\mathbb{F}_p^n$.

### Cryptography and computation

Readers interested in cryptography probably recognised $56$ above as the key length of the block cipher DES. This cipher is no longer in common use because a determined adversary knowing (as is usually assumed) some plaintext/ciphertext pairs can easily try all $2^{56}$ possible keys and so discover the key. Even back in 2008, an FPGA-based special purpose device costing £10000 could test $65.2 \times 10^9 \approx 2^{35.9}$ DES keys every second, giving $12.79$ days for an exhaustive search.

Modern block ciphers such as AES typically support keys of length $128$ and $256$. In Applied Cryptography, Schneier estimates that a Dyson Sphere capturing all the Sun’s energy for 32 years would provide enough power to perform $2^{192}$ basic operations, strongly suggesting that $256$ bits should be enough for anyone. The truly paranoid (or readers of Douglas Adams) should note that in Computational capacity of the universe, Seth Lloyd, Phys. Rev. Lett., (2002) 88 237901-3, the author estimates that even if the universe is one vast computer, then it can have performed at most $10^{120} \approx 2^{398.6}$ calculations. Thus in a computational sense, $2^{56}$ is tiny, and $2^{256}$ and $10^{120}$ are effectively infinite.

The final number above was $120^{10} \approx 2^{69.1}$. The Chinese supercomputer Sunway TaihuLight runs at 93 Petaflops, that is $93 \times 10^{15} \approx 2^{56.4}$ operations per second. A modern Intel chip has AES encryption as a primitive instruction, and can encrypt at 1.76 cycles per byte for a 256-bit key, encrypting 1KB at a time. If, being very conservative, we assume the supercomputer can test a key by encrypting 16 bytes, then it can test $2^{56.4}/2^4/1.76 = 2^{51.6}$ keys every second, requiring $2.12$ days to exhaust all $2^{69.1}$ keys. Therefore $120^{10}$ is in the tricky middle ground, between the easily computable and the almost certainly impossible.
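
The arithmetic in the last two paragraphs can be redone in a few lines of Haskell (a sketch of mine; the constants 16 bytes and 1.76 cycles per byte are the assumptions stated above):

```haskell
-- Base-2 logarithm, for expressing big numbers as powers of 2.
bits :: Double -> Double
bits = logBase 2

-- Keys tested per second: raw operations per second divided by the
-- (assumed) cost of one trial encryption, 16 bytes at 1.76 cycles per byte.
keysPerSecond :: Double -> Double
keysPerSecond opsPerSec = opsPerSec / (16 * 1.76)

-- Days needed to try every key in a keyspace of the given size.
daysToExhaust :: Double -> Double -> Double
daysToExhaust keyspace opsPerSec = keyspace / keysPerSecond opsPerSec / 86400
```

Then bits (120 ** 10) is about $69.1$, and daysToExhaust (120 ** 10) 93e15 is a little over two days, matching the estimates above.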

My surprising claim that $2^{2^\mathbb{N}}$ also sits somewhere in the middle comes from my reading of a wonderful blog post by Martín Escardó (on Andrej Bauer’s blog). To conclude, I will give an introduction to his seemingly-impossible Haskell programs.

As a warm-up, consider this Haskell function which computes the Fibonacci numbers $F_n$, extending the definition in the natural way to all integers $n$.

fib n | n == 0  = 0
      | n == 1  = 1
      | n >= 2  = fib (n-1) + fib (n-2)
      | n <= -1 = fib (n+2) - fib (n+1)


Apart from the minor notational change that the conditions appear before rather than after the equals sign, this Haskell code is a verbatim mathematical definition. The code below defines two predicates $p$ and $q$ on the integers such that $p(n)$ is true if and only if $F_n^2 - F_{n-1}F_{n+1} = (-1)^{n-1}$ and $q(n)$ is true if and only if $F_n \equiv 0$ mod $3$.

p n = fib n * fib n - fib (n+1) * fib (n-1) == (-1)^(n+1)
q n = fib n `mod` 3 == 0


The first equation is Cassini’s Identity, so $p(n)$ is true for all $n \in \mathbb{Z}$; $q(n)$ is true if and only if $n$ is a multiple of $4$. We check this in Haskell using its very helpful ‘list-comprehension’ syntax; again this is very close to the analogous mathematical notation for sets.

*Cantor> [p n | n <- [-5..5]]
[True,True,True,True,True,True,True,True,True,True,True]

*Cantor> [q n | n <- [-5..5]]
[False,True,False,False,False,True,False,False,False,True,False]


Therefore if we define $p'(n)$ to be true (for all $n \in \mathbb{Z}$) and $q'(n)$ to be true if and only if $n \ \mathrm{mod} \ 4 = 0$, we have $p = p'$ and $q = q'$.

An important feature of Haskell is that it is strongly typed. We haven’t seen this yet because Haskell is also type-inferring in a very powerful way that makes explicit type signatures usually unnecessary. Simplifying slightly, the type of the predicates above is Integer -> Bool, and the type of fib is Integer -> Integer. (Integer is a built-in Haskell type that supports arbitrary-sized integers.) The family of predicates $r_m$ defined by

$r_m(n) \iff n \ \mathrm{mod}\ m = 0$

is defined by

r m n = n `mod` m == 0


Here r has type Integer -> Integer -> Bool. It is helpful to think of this in terms of the currying isomorphism $C^{A \times B} \cong (C^A)^B$, ubiquitous in Haskell code. We now ask: is there a Haskell function

equal :: (Integer -> Bool) -> (Integer -> Bool) -> Bool


taking as its input two predicates on the integers and returning True if and only if they are equal? The examples of $p, p', q, q'$ above show that such a function would, at a stroke, make most mathematicians unemployed. Fortunately for us, Turing’s solution to the Entscheidungsproblem tells us that no such function can exist.

Escardó’s post concerns predicates defined not on the integers $\mathbb{Z}$, but instead on the $2$-adic integers $\mathbb{Z}_2$. We think of the $2$-adics as infinite bitstreams. For example, the Haskell definitions of the bitstream $1000 \ldots$ representing $1 \in \mathbb{Z}_2$ and the bitstream $101010 \ldots$ representing

$2^0 + 2^2 + 2^4 + 2^6 + \cdots = -\frac{1}{3} \in \mathbb{Z}_2$

are:

data Bit = Zero | One deriving (Eq, Show)
type Cantor = [Bit]
zero = Zero : zero
one  = One : zero
bs   = One : Zero : bs :: Cantor


(I’m deliberately simplifying by using the Haskell list type []: this is not quite right, since lists can be, and usually are, finite.) Because of its lazy evaluation — nothing is evaluated in Haskell unless it is provably necessary for the computation to proceed — Haskell is ideal for manipulating such bitstreams. For instance, while evaluating bs at the Haskell prompt will print an infinite stream on the console, Haskell has no problem performing any computation that depends on only finitely many values of a bitstream. As proof, here is $-\frac{1}{3} - 2 \times \frac{1}{3} = -1$:

*Cantor> take 10 (bs + tail bs)
[One,One,One,One,One,One,One,One,One,One]
*Cantor> take 10 (bs + tail bs + one)
[Zero,Zero,Zero,Zero,Zero,Zero,Zero,Zero,Zero,Zero]


(Of course one has to tell Haskell how to define addition on the Cantor type: see Cantor.hs for this and everything else in this section.) As an example of a family of predicates on $\mathbb{Z}_2$, consider

twoPowerDivisibleC :: Int -> Cantor -> Bool
twoPowerDivisibleC p bs = take p zero == take p bs


Thus twoPowerDivisibleC p bs holds if and only if bs represents an element of $2^p \mathbb{Z}_2$. For example, the odd $2$-adic integers are precisely those for which twoPowerDivisibleC 1 bs is false:

*Cantor> [twoPowerDivisibleC 1 (fromInteger n) | n <- [0..10]]
[True,False,True,False,True,False,True,False,True,False,True]


In the rooted binary tree representation of $\mathbb{Z}_2$ shown below, the truth-set of this predicate is exactly the left-hand subtree. The infinite path to $-1/3$ is shown by thick lines.
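
As a sanity check (mine, not Escardó’s) that $101010\ldots$ really represents $-\frac{1}{3}$: truncating after $2m$ bits gives the integer $1 + 4 + \cdots + 4^{m-1} = (4^m-1)/3$, and these partial sums converge $2$-adically to the root of $3x + 1 = 0$:

```haskell
-- Value of the first 2m bits of the stream 101010... read as an integer.
partialValue :: Int -> Integer
partialValue m = sum [4 ^ k | k <- [0 .. m - 1]]

-- 3 * partialValue m + 1 = 4^m, which tends to 0 in the 2-adic metric,
-- so the stream satisfies 3x + 1 = 0, that is, x = -1/3.
checksOut :: Int -> Bool
checksOut m = 3 * partialValue m + 1 == 4 ^ m
```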

Here is a somewhat similar looking definition

nonZeroC :: Cantor -> Bool
nonZeroC (One : _)   = True
nonZeroC (Zero : bs) = nonZeroC bs


While a correct (and correctly typed) Haskell definition, this does not define a predicate on $\mathbb{Z}_2$ because the evaluation of nonZeroC zero never terminates. In fact, it is surprisingly difficult to define a predicate (i.e. a total function with boolean values) on the Cantor type. The beautiful reason behind this is that the truth-set of any such predicate is open in the $2$-adic topology. Since this topology has as a basis of open sets the cosets of the subgroups $2^p \mathbb{Z}_2$, all predicates look something like one of the twoPowerDivisibleC predicates above.

This remark maybe makes the main result in Escardó’s blog post somewhat less amazing, but it is still very striking: it is possible to define a Haskell function
 equalC :: (Cantor -> Bool) -> (Cantor -> Bool) -> Bool 
which given two predicates on $\mathbb{Z}_2$ (as represented in Haskell using the Cantor type) returns True if they are equal (as mathematical functions on $\mathbb{Z}_2$) and False if they are unequal. Escardó’s ingenious definition of equalC needs only a few lines of Haskell code: it may look obvious when read on the screen, but I found it a real challenge to duplicate unseen, even after having read his post. I encourage you to read it: it is fascinating to see how the compactness of $\mathbb{Z}_2$ as a topological space corresponds to a ‘uniformity’ property of Haskell predicates.
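Escardó's Haskell definition is not reproduced here, but the underlying idea (a search operator in the style of Berger and Escardó) can be sketched in Python, modelling a bitstream as a memoized function from bit positions to bits. This is my own reconstruction, valid under the assumption that the predicates supplied are total; it is not Escardó's code:

```python
def cons(b, rest):
    """Stream with first bit b and tail supplied lazily by the thunk rest."""
    cache = {}
    def stream(i):
        if i == 0:
            return b
        if i not in cache:
            cache[i] = rest()(i - 1)
        return cache[i]
    return stream

def find(p):
    """Return a stream satisfying p whenever one exists (p must be total)."""
    left = lambda: find(lambda s: p(cons(0, lambda: s)))
    candidate = cons(0, left)
    if p(candidate):
        return candidate
    return cons(1, lambda: find(lambda s: p(cons(1, lambda: s))))

def forsome(p):
    return p(find(p))

def equal(f, g):
    """Extensional equality of total predicates on bitstreams."""
    return not forsome(lambda s: f(s) != g(s))

f = lambda s: s(0) == 0 and s(1) == 0   # lies in 4 Z_2
g = lambda s: s(1) == 0 and s(0) == 0   # the same predicate, written differently
h = lambda s: s(0) == 0                 # lies in 2 Z_2
print(equal(f, g), equal(f, h))  # True False
```

Because a total predicate can inspect only finitely many bits, the mutual recursion between `find` and the lazily-built candidate streams terminates.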

### Sources

The results on volumes of spheres and balls are cobbled together from several MathOverflow questions and answers, in particular Joseph O’Rourke’s question asking for an intuitive explanation of the concentration of measure phenomenon, and S. Carnahan’s answer to a question on the volume of the $n$-dimensional unit ball. The coding theory results go back to Shannon, and can be found in many textbooks, for example Van Lint’s Introduction to coding theory. The use of the ‘toy’ binary symmetric channel is my innovation, used when I lectured our Channels course in 2019–20 to reduce the technicalities in the proof of Shannon’s Noisy Coding Theorem. The diagrams were drawn using TikZ; for the spheres I used these macros due to Tomasz M. Trzeciak. The material about computability is either very basic or comes from a blog post by Martín Escardó (on Andrej Bauer’s blog) giving an introduction to his paper Infinite sets that admit fast exhaustive search, published in the proceedings of Logic in Computer Science 2007.

## A dismal outlook: why does Microsoft Teams make it so hard to join meetings?

April 8, 2020

The purpose of this post is to document two of the more blatant bugs I’ve encountered in my forced exposure to Microsoft Teams during the Coronavirus crisis. To add insult to injury, the only way to work around them is to use Microsoft Outlook, another piece of software riddled with deficiencies. And don’t even think of using Microsoft Outlook (the application): instead one must use the still clunkier web interface.

Let’s get started. Here is a screenshot of me scheduling a meeting at 15:22 for 16:00 today.

Notice anything odd? Probably not: what sane person checks the timezone? But, look closely: Microsoft Teams firmly believes that London is on UTC+00:00 (the same as GMT), not BST. This belief is not shared by my system, or any other software on it that I can find.

Now let’s try to join the meeting. Okay it’s early, but we are keen. Here is a screenshot of me hovering with my mouse over the meeting.

There is no way to join. Double clicking on the meeting just gives a chance to reschedule it (maybe to a later date, when Microsoft has fixed this glaring deficiency). The ‘Meet now’ button starts an unrelated meeting.

Okay, maybe our mistake was to join an MS Teams meeting using MS Teams. Let’s try using the Outlook web calendar. Here is a screenshot.

Here is a close-up of the right-hand side.

On the one hand, the times say that the meeting started 46 minutes ago; on the other, it is ‘in 14 min’. Perhaps because of this temporal confusion, there is no way to join the meeting.

Finally, here is MS Teams at 16:00.

Nothing has changed: there is still no way to join the meeting.

Update. Apparently one of my errors was to schedule a meeting with no invitees. Under Microsoft’s interpretation, such meetings may be scheduled, but never attended (even by gate-crashing). On the time-zone front, both the Outlook web calendar and MS Teams continue to insist that London is on UTC+00:00, but, bizarrely, choosing London as my location (it already was) fixed the scheduling bug. In the Outlook example I invited myself, but still there was no link.

Many thanks to Remi from the Royal Holloway IT support team for steering me on an expert course around the shark-infested waters of Microsoft software.

Further update. The other day I accidentally scheduled a meeting using Microsoft Outlook (the application), taking care to include myself on the guestlist, and hit essentially the same bug. Here is me scheduling a test meeting.

and here are three screenshots as I frantically view the meeting in all the available applications, trying to find a way to join it. None is available.

The prosecution rests its case. How can Microsoft justify releasing such inept pieces of software? Whatever top-secret protocol they are using, they can’t even speak it fluently between their own applications!

## Stanley’s theory of P-partitions and the Hook Formula

March 22, 2020

The aim of this post is to introduce Stanley’s theory of labelled $\mathcal{P}_\preceq$-partitions and, as an application, give a short motivated proof of the Hook Formula for the number of standard Young tableaux of a given shape. In fact we prove the stronger $q$-analogue, in which hook lengths are replaced with quantum integers. All the ideas may be found in Chapter 7 of Stanley’s book Enumerative Combinatorics II so, in a particularly strong sense, no originality is claimed. I thank my Ph.D. student Eoghan McDowell for several helpful corrections and comments; of course I have full responsibility for any remaining errors.

The division into four parts below and the recaps at the start of each part have the aim of reducing notational overload (the main difficulty in the early parts), while also giving convenient places to take a break.

### Part 1: Background on arithmetic partitions

Recall that an arithmetic partition of size $n$ is a weakly decreasing sequence of natural numbers whose sum is $n$. For example, there are $8$ partitions of $7$ with at most three parts, namely

$(7), (6,1), (5,2), (5,1,1), (4,3), (4,2,1), (3,3,1), (3,2,2),$

as represented by the Young diagrams shown below.

Since the Young diagram of a partition into at most $3$ parts is uniquely determined by its number of columns of lengths $1$, $2$ and $3$, such partitions are enumerated by the generating function

$\displaystyle \frac{1}{(1-q)(1-q^2)(1-q^3)} = 1 \!+\! q \!+ \!2q^2 \!+\! 3q^3 \!+\! 4q^4 \!+\! 5q^5 \!+\! 7q^6 \!+\! \cdots$

For example $(4,2,1)$ has two columns of length 1, and one each of lengths 2 and 3. It is counted in the coefficient of $q^7$, obtained when we expand the geometric series by choosing $q^{1 \times 2}$ (two columns of length 1) from

$\displaystyle \frac{1}{1-q} = 1 + q + q^2 + \cdots,$

then $q^{2 \times 1}$ (one column of length 2) from

$\displaystyle \frac{1}{1-q^2} = 1+q^2 + q^4 + \cdots$

and finally $q^{3 \times 1}$ (one column of length 3) from

$\displaystyle \frac{1}{1-q^3} = 1+q^3+q^6 + \cdots .$

Now suppose that we are only interested in partitions where the first part is strictly bigger than the second. Then the Young diagram must have a column of size $1$, and so we replace $1/(1-q)$ in the generating function with $q/(1-q)$. Since the coefficient of $q^6$ above is $7$, it follows (without direct enumeration) that there are $7$ such partitions. What if the second part must also be strictly bigger than the third? Then the Young diagram must have a column of size $2$, and so we also replace $1/(1-q^2)$ with $q^2/(1-q^2)$. (I like to see this by mentally removing the first row: the remaining diagram then has a column of size $1$, by the case just seen.) By a routine generalization we get the following result: partitions with at most $k$ parts such that the $j$th largest part is strictly more than the $(j+1)$th largest part for all $j \in J \subseteq \{1,\ldots, k-1\}$ are enumerated by

$\displaystyle \frac{q^{\sum_{j \in J} j}}{(1-q)(1-q^2) \ldots (1-q^k)}.$
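These counts can be confirmed by brute force. The following Python sketch (mine, purely for illustration) enumerates partitions directly and matches the series coefficients above:

```python
def partitions_at_most(n, k, max_part=None):
    """Weakly decreasing tuples of positive integers, at most k parts, summing to n."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    if k == 0:
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions_at_most(n - first, k - 1, first):
            yield (first,) + rest

# coefficients of 1/((1-q)(1-q^2)(1-q^3)) up to q^7
counts = [sum(1 for _ in partitions_at_most(n, 3)) for n in range(8)]
print(counts)  # [1, 1, 2, 3, 4, 5, 7, 8]

# partitions of 7 with at most 3 parts whose first part strictly exceeds the second
strict = [p for p in partitions_at_most(7, 3) if len(p) < 2 or p[0] > p[1]]
print(len(strict))  # 7, the coefficient of q^6 above
```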

### Part 2: $\mathcal{P}_\preceq$-partitions

Let $\mathcal{P}$ be a poset with partial order $\preceq$. A $\mathcal{P}_\preceq$-partition is an order-preserving function $p : \mathcal{P} \rightarrow \mathbb{N}_0$.

For example, if $\mathcal{P} = \{1,\ldots, k\}$ with the usual order then $\bigl( p(1), \ldots, p(k)\bigr)$ is the sequence of values of a $\mathcal{P}$-partition if and only if $p(1) \le \ldots \le p(k)$. Thus, by removing any initial zeros and reversing the sequence, $\mathcal{P}$-partitions are in bijection with arithmetic partitions having at most $k$ parts.

I should mention that it is more usual to write $\mathcal{P}$ rather than $\mathcal{P}_\preceq$. More importantly, Stanley’s original definition has ‘order-reversing’ rather than ‘order-preserving’. This fits better with arithmetic partitions, and plane-partitions, but since our intended application is to reverse plane-partitions and semistandard Young tableaux, the definition as given (and used for instance in Stembridge’s generalization) is most convenient.

#### Reversed plane partitions

Formally the Young diagram of the partition $(4,2,1)$ is the set

$[(4,2,1)] = \{(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(3,1)\}$

of boxes. We partially order the boxes by (the transitive closure of) $(a,b) \preceq (a+1,b)$ and $(a,b) \preceq (a,b+1)$. This is shown diagrammatically below.

For this partial order, $\mathcal{P}_\preceq$-partitions correspond to assignments of non-negative integers to $[(4,2,1)]$ such that the rows and columns are weakly increasing when read left to right and top to bottom. For example, three of the $72$ $\mathcal{P}_\preceq$-partitions of size $6$ are shown below.

Such assignments are known as reverse plane partitions. The proof of the Hook Formula given below depends on finding the generating function for reverse plane partitions in two different ways: first using the general theory of $\mathcal{P}$-partitions, and then in a more direct way, for instance using the Hillman–Grassl bijection.

#### Enumerating $\mathrm{RPP}(2,1)$

As a warm-up, we replace $(4,2,1)$ with the smaller partition $(2,1)$, so now $\mathcal{P}_\preceq = \{(1,1),(1,2),(2,1)\}$, ordered by $(1,1) \preceq (1,2)$, $(1,1) \preceq (2,1)$. Comparing $p(1,2)$ and $p(2,1)$, we divide the $\mathcal{P}_\preceq$-partitions into two disjoint classes: those with $p(1,2) \le p(2,1)$, and those with $p(1,2) > p(2,1)$. The first class satisfy

$p(1,1) \le p(1,2) \le p(2,1)$

and so are in bijection with arithmetic partitions with at most $3$ parts. The second class satisfy

$p(1,1) \le p(2,1) < p(1,2)$

so are in bijection with arithmetic partitions with at most $3$ parts, whose largest part strictly exceeds the second largest. By the first section we deduce that

\displaystyle \begin{aligned}\sum_{n=0}^\infty |\mathrm{RPP}_{(2,1)}(n)|q^n &= \frac{1+q}{(1-q)(1-q^2)(1-q^3)} \\&= \frac{1}{(1-q)^2(1-q^3)}.\end{aligned}

The cancellation to leave a unit numerator is a feature of the beautiful generating function for reverse plane partitions, revealed in the final part below.
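A brute-force check of this generating function, in illustrative Python (not from the post):

```python
def rpp_count(shape, n):
    """Count reverse plane partitions of the given shape with entries summing to n."""
    boxes = [(a, b) for a, row in enumerate(shape) for b in range(row)]
    def fill(values, remaining, budget):
        if not remaining:
            yield values
            return
        (a, b) = remaining[0]
        # each entry must weakly exceed its neighbours above and to the left
        lo = max(values.get((a - 1, b), 0), values.get((a, b - 1), 0))
        # entries are non-negative, so the running total may not exceed n
        for v in range(lo, budget + 1):
            yield from fill({**values, (a, b): v}, remaining[1:], budget - v)
    return sum(1 for values in fill({}, boxes, n)
               if sum(values.values()) == n)

# coefficients of 1/((1-q)^2(1-q^3)): 1, 2, 3, 5, 7, 9, 12, ...
print([rpp_count((2, 1), n) for n in range(7)])
```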

#### Labelled $\mathcal{P}$-partitions

A surprisingly helpful way to keep track of the weak/strong inequalities seen above is to label the poset elements by natural numbers. We define a labelling of a poset $\mathcal{P}_\preceq$ of size $k$ to be a bijective function $L : \mathcal{P} \rightarrow \{1,\ldots, k\}$. Suppose that $y$ covers $x$ in the order on $\mathcal{P}_\preceq$. Either $L(x) < L(y)$, in which case we say the labelling is natural for $(x,y)$, or $L(x) > L(y)$, in which case we say the labelling is strict for $(x,y)$. A $(\mathcal{P}_\preceq,L)$-partition is then an order preserving function $p : \mathcal{P} \rightarrow \mathbb{N}_0$ such that if $x \prec y$ is a covering relation and

• $L(x) < L(y)$ (natural) then $p(x) \le p(y)$;
• $L(x) > L(y)$ (strict) then $p(x) < p(y)$.

Note that the role of the labelling is only to distinguish weak/strong inequalities: the poset itself determines, for each comparable pair $u$, $v \in \mathcal{P}$, which of $p(u) \le p(v)$ or $p(v) \le p(u)$ is required. If we drop the restriction that $x \prec y$ is a covering relation, and just require $x \prec y$, then we clearly define a subset of the labelled $(\mathcal{P}_\preceq, L)$-partitions, and it is not hard to see that in fact the definitions are equivalent. It feels most intuitive to me to state the definition as above.

Let $\mathrm{Par}(\mathcal{P}_\preceq, L)$ denote the set of $(\mathcal{P}_\preceq,L)$-partitions. For example, if $\mathcal{P} = \{(1,1),(1,2),(2,1)\}$ as in the example above, then the $\mathcal{P}_\preceq$-partitions are precisely the $(\mathcal{P}_\preceq, L)$-partitions for any all-natural labelling; the two choices are

$L(1,1) = 1, L(1,2) = 2, L(2,1) = 3,$

and

$L'(1,1) = 1, L'(1,2) = 3, L'(2,1) = 2.$

Working with $L$, the partitions with $p(1,1) \le p(1,2) \le p(2,1)$ form the set $\mathrm{Par}(\mathcal{P}_\unlhd, L)$ where $\unlhd$ is the total order refining $\preceq$ such that

$(1,1) \unlhd (1,2) \unlhd (2,1)$

and the partitions with $p(1,1) \le p(2,1) < p(1,2)$ form the set $\mathrm{Par}(\mathcal{P}_{\unlhd'}, L)$ where

$(1,1) \unlhd' (2,1) \unlhd' (1,2).$

The division of $\mathcal{P}$-partitions above is an instance of the following result.

Fundamental Lemma. Let $\mathcal{P}$ be a poset with partial order $\preceq$ and let $L : \mathcal{P} \rightarrow \{1,\ldots, k\}$ be a labelling. Then

$\mathrm{Par}(\mathcal{P}_\preceq, L) = \bigcup \mathrm{Par}(\mathcal{P}_\unlhd, L)$

where the union over all total orders $\unlhd$ refining $\preceq$ is disjoint.

Proof. Every $(\mathcal{P}_\preceq, L)$ partition appears in the right-hand side for some $\unlhd$: just choose $\unlhd$ so that if $p(x) < p(y)$ then $x \lhd y$ and if $p(x)=p(y)$ and $x \prec y$ then $x \lhd y$. (In the second case, if $x$ and $y$ are not comparable under $\preceq$ then it is arbitrary whether we set $x \lhd y$ or $y \lhd x$.)

On the other hand, suppose that $p \in \mathrm{Par}(\mathcal{P}_\preceq,L)$ is in both $\mathrm{Par}(\mathcal{P}_{\unlhd},L)$ and $\mathrm{Par}(\mathcal{P}_{\unlhd'},L)$. Choose $x$ and $y \in \mathcal{P}$ incomparable under $\preceq$ and such that $x \lhd y$ and $y \lhd' x$. From $\mathcal{P}_\lhd$ we get $p(x) \le p(y)$ and from $\mathcal{P}_{\lhd'}$ we get $p(y) \le p(x)$. Therefore $p(x) = p(y)$. Now using the labelling for the first time, we may suppose without loss of generality that $L(x) < L(y)$; since $y \lhd' x$ the labelling is strict for $x$ and $y$ and so we have $p(y) < p(x)$, a contradiction. $\Box$.

Suggested exercise. Show that the generating functions enumerating $\mathrm{RPP}(2,2)$ and $\mathrm{RPP}(3,1)$ are $1/(1-q)(1-q^2)^2(1-q^3)$ and

$\displaystyle\frac{1}{(1-q)^2(1-q^2)(1-q^4)}$

respectively; in the two cases there are $2$ and $3$ linear extensions that must be considered.
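The exercise can be verified mechanically. The Python sketch below (mine) enumerates the linear extensions of the box poset under the row-by-row labelling, attaches to each the numerator $q^{\sum_{j \in J} j}$ from Part 1 (where $J$ records the strict inequalities), and compares the resulting sum with the claimed products at a sample value of $q$:

```python
from itertools import permutations

def strict_exponent(pi):
    # a descent of the label word in position i forces a strict inequality,
    # contributing j = k - i to the exponent from Part 1
    k = len(pi)
    return sum(k - i for i in range(1, k) if pi[i - 1] > pi[i])

def rpp_gf(shape, q):
    """Sum of q^exponent over linear extensions, divided by (1-q)...(1-q^k)."""
    boxes = [(a, b) for a, row in enumerate(shape) for b in range(row)]
    label = {box: i + 1 for i, box in enumerate(boxes)}   # row-by-row labelling
    k = len(boxes)
    def refines(order):
        pos = {box: i for i, box in enumerate(order)}
        return all(pos.get((a - 1, b), -1) < pos[(a, b)] and
                   pos.get((a, b - 1), -1) < pos[(a, b)]
                   for (a, b) in order)
    numerator = sum(q ** strict_exponent(tuple(label[box] for box in order))
                    for order in permutations(boxes) if refines(order))
    denominator = 1.0
    for i in range(1, k + 1):
        denominator *= 1 - q ** i
    return numerator / denominator

q = 0.5
print(abs(rpp_gf((2, 2), q) - 1 / ((1 - q) * (1 - q**2)**2 * (1 - q**3))) < 1e-12)
print(abs(rpp_gf((3, 1), q) - 1 / ((1 - q)**2 * (1 - q**2) * (1 - q**4))) < 1e-12)
```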

### Part 3: Permutations

Recall that $\mathcal{P}_{\preceq}$ is a poset, $L : \mathcal{P} \rightarrow \{1,\ldots, k\}$ is a bijective labelling and that a $(\mathcal{P}_{\preceq},L)$-partition is a function $p : \mathcal{P} \rightarrow \mathbb{N}_0$ such that if $x \preceq y$ then $p(x) \le p(y)$, with strict inequality whenever $L(x) > L(y)$.

#### Connection with permutations

We write permutations of $\{1,\ldots, k\}$ in one-line form as $\pi_1\ldots \pi_k$. Recall that $\pi$ has a descent in position $i$ if $\pi_i > \pi_{i+1}$.

Example. Let $(1,1) \preceq (1,2)$, $(1,1) \preceq (2,1)$ be the partial order used above to enumerate $\mathrm{RPP}(2,1)$, as labelled by $L(1,1) = 1$, $L(1,2) = 2$, $L(2,1) = 3$. The total order $(1,1) \unlhd (1,2) \unlhd (2,1)$ corresponds under $L$ to the identity permutation $123$ of $\{1,2,3\}$, with no descents. The total order $(1,1) \unlhd' (2,1) \unlhd' (1,2)$ corresponds under $L$ to the permutation $132$ swapping $2$ and $3$, with descent set $\{2\}$.

In general, let $x_1 \unlhd \ldots \unlhd x_k$ be a total order refining $\preceq$. Let $i \le k-1$ and consider the elements $x_i$ and $x_{i+1}$ of $\mathcal{P}$ labelled $L(x_i)$ and $L(x_{i+1})$. In any $\mathcal{P}_\unlhd$-partition $p$ we have $p(x_i) \le p(x_{i+1})$, with strict inequality required if and only if $L(x_i) > L(x_{i+1})$. Therefore, using the total order $\unlhd$ to identify $p$ with a function $\{1,\ldots, k\} \rightarrow \mathbb{N}_0$, i.e. the function $i \mapsto p(x_i)$, we require $p(i) \le p(i+1)$ for all $i$, with strict inequality if and only if $L(x_i) > L(x_{i+1})$. Equivalently,

$p(1) \le \ldots \le p(k)$

with strict inequality in position $i$ whenever there is a descent $L(x_i) > L(x_{i+1})$ in the permutation $L(x_1) \ldots L(x_k)$ corresponding under $L$ to $\unlhd$. Conversely, a permutation $\pi_1\ldots \pi_k$ of $\{1,\ldots, k\}$ corresponds to a total order refining $\preceq$ if and only if $L(x)$ appears to the left of $L(y)$ whenever $x \preceq y$. Therefore Stanley’s Fundamental Lemma may be restated as follows.

Fundamental Lemma restated. Let $\mathcal{P}$ be a poset with partial order $\preceq$ and let $L : \mathcal{P} \rightarrow \{1,\ldots, k\}$ be a labelling. Then, using the labelling $L$ to identify elements of $\mathrm{Par}(\mathcal{P}_\preceq, L)$ with functions on $\{1,\ldots, k\}$,

$\mathrm{Par}(\mathcal{P}_\preceq, L) = \bigcup P_\pi$

where $P_\pi$ is the set of all functions $p : \{1,\ldots, k\} \rightarrow \mathbb{N}_0$ such that $p(1) \le \ldots \le p(k)$ with strict inequality whenever $\pi_i > \pi_{i+1}$, and the union is over all permutations $\pi$ such that $L(x)$ appears to the left of $L(y)$ in the one-line form of $\pi$ whenever $x \preceq y$. Moreover the union is disjoint. $\Box$

The sequences $\bigl( p(1), \ldots, p(k) \bigr)$ above correspond to partitions whose $\bigl((k+1)-(i+1)\bigr)$th largest part is strictly more than their $((k+1)-i)$th largest part (which might be zero), and so are enumerated by

$\displaystyle \frac{q^{\sum_{i : \pi_i > \pi_{i+1}} (k-i)}}{(1-q)\ldots (1-q^k)}.$

The power of $q$ in the numerator is, by the standard definition, the comajor index of the permutation $\pi$. We conclude that

$\displaystyle \sum_{p \in \mathrm{Par}(\mathcal{P}_\preceq,L)} q^{|p|} = \frac{\sum_\pi q^{\mathrm{comaj}(\pi)}}{(1-q)\ldots (1-q^k)}$

where the sum in the numerator is over all permutations $\pi$ as in the restated Fundamental Lemma.

#### Example: $\mathrm{RPP}(3,2)$

We enumerate $\mathrm{RPP}(3,2)$. There are $5$ extensions of the partial order on $[(3,2)]$,

• $(1,1) \unlhd (1,2) \unlhd (1,3) \unlhd (2,1) \unlhd (2,2)$
• $(1,1) \unlhd (1,2) \unlhd (2,1) \unlhd (1,3) \unlhd (2,2)$
• $(1,1) \unlhd (1,2) \unlhd (2,1) \unlhd (2,2) \unlhd (1,3)$
• $(1,1) \unlhd (2,1) \unlhd (1,2) \unlhd (1,3) \unlhd (2,2)$
• $(1,1) \unlhd (2,1) \unlhd (1,2) \unlhd (2,2) \unlhd (1,3)$

corresponding under the labelling in which the boxes in the first row are labelled $1$, $2$, $3$ and the boxes in the second row are labelled $4$, $5$, to the permutations $12345, 12435, 12453, 14235, 14253$ with descent sets $\varnothing$, $\{3\}, \{4\}, \{2\}, \{2,4\}$ and comajor indices $0$, $2$, $1$, $3$, $4$, respectively. By the restatement of the fundamental lemma and the following remark,

\begin{aligned}\sum_{n=0}^\infty |\mathrm{RPP}_{(3,2)}(n)|q^n &= \frac{1 + q^2 + q + q^3 + q^4}{(1-q)(1-q^2)(1-q^3)(1-q^4)(1-q^5)} \\ &= \frac{1}{(1-q)^2(1-q^2)(1-q^3)(1-q^4)}. \end{aligned}
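The descent sets and comajor indices quoted above are easily checked (illustrative Python):

```python
def descent_set(pi):
    """Positions i (1-based) with pi_i > pi_{i+1}."""
    return {i for i in range(1, len(pi)) if pi[i - 1] > pi[i]}

def comaj(pi):
    # comajor index: sum of k - i over descent positions i
    k = len(pi)
    return sum(k - i for i in descent_set(pi))

perms = [(1,2,3,4,5), (1,2,4,3,5), (1,2,4,5,3), (1,4,2,3,5), (1,4,2,5,3)]
print([sorted(descent_set(p)) for p in perms])  # [[], [3], [4], [2], [2, 4]]
print([comaj(p) for p in perms])                # [0, 2, 1, 3, 4]
```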

We end this part by digressing to outline a remarkably short proof of an identity due to MacMahon enumerating permutations by major index and descent count.

Exercise. If $\mathcal{P} = \{1,\ldots, k\}$ and all elements are incomparable under $\preceq$, then a $\mathcal{P}_\preceq$-partition is simply a function $p : \{1,\ldots, k\} \rightarrow \mathbb{N}_0$.

• How are such partitions enumerated by the restated Fundamental Lemma?
• Deduce that

$\displaystyle \frac{1}{(1-q)^k} = \frac{\sum_{\pi} q^{\mathrm{comaj}(\pi)}}{(1-q)\ldots (1-q^k)}$

where the sum is over all permutations of $\{1,\ldots, k\}$.

• Give an involution on permutations that preserves the number of descents and swaps the comajor and major indices. Deduce that $\mathrm{comaj}(\pi)$ can be replaced with $\mathrm{maj}(\pi)$ above.

Exercise. The argument at the start of this post shows that $1/(1-qt) \ldots (1-q^k t)$ enumerates partitions with at most $k$ parts by their size (power of $q$) and largest part (power of $t$).

• Show that partitions whose $j$th largest part is strictly larger than their $(j+1)$th largest part for all $j \in J \subseteq \{1,\ldots,k-1\}$ are enumerated in this sense by

$\displaystyle \frac{q^{\sum_{j \in J}j}t^{|J|}}{(1-qt)\ldots (1-q^kt)}.$

• Let $c_k(m)$ be the polynomial in $q$ enumerating by their sum the sequences of $k$ non-negative integers having $m$ as their largest entry. Show that

$\displaystyle \sum_{m=0}^\infty c_k(m) t^m = \frac{\sum_{\pi} q^{\mathrm{comaj}(\pi)} t^{\mathrm{des}(\pi)}}{(1-qt)\ldots (1-q^k t)}$

where $\mathrm{des}(\pi)$ is the number of descents of $\pi$.

• Deduce that

$\displaystyle \sum_{m=0}^\infty \Bigl(\frac{q^{m+1}-1}{q-1}\Bigr)^k t^m = \frac{\sum_{\pi} q^{\mathrm{comaj}(\pi)} t^{\mathrm{des}(\pi)}}{(1-t)(1-qt)\ldots (1-q^k t)}.$

• Hence prove MacMahon’s identity
$\displaystyle \sum_{r=0}^\infty [r]_q^k t^r = \frac{\sum_\pi q^{\mathrm{maj}(\pi)} t^{\mathrm{des}(\pi)+1}}{(1-t)(1-qt)\ldots (1-q^k t)}$

where $[r]_q = (q^r-1)/(q-1) = 1 + q + \cdots + q^{r-1}$.
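MacMahon's identity can be spot-checked numerically by truncating the series on the left-hand side; a Python sketch (mine):

```python
from itertools import permutations
from math import prod

def maj(pi):
    # major index: sum of the descent positions i (1-based)
    return sum(i for i in range(1, len(pi)) if pi[i - 1] > pi[i])

def des(pi):
    # number of descents
    return sum(1 for i in range(1, len(pi)) if pi[i - 1] > pi[i])

def qint(r, q):
    # the quantum integer [r]_q = 1 + q + ... + q^(r-1), at a numerical q
    return sum(q ** i for i in range(r))

q, t, k = 0.3, 0.2, 3
lhs = sum(qint(r, q) ** k * t ** r for r in range(200))   # truncated series
num = sum(q ** maj(pi) * t ** (des(pi) + 1)
          for pi in permutations(range(1, k + 1)))
den = prod(1 - q ** i * t for i in range(k + 1))
print(abs(lhs - num / den) < 1e-12)  # True
```

Here the tail of the series beyond $r = 200$ is negligible since $[r]_q \le 1/(1-q)$ and $t < 1$.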

### Part 4: Proof of the Hook Formula

#### The restated Fundamental Lemma for reverse plane partitions

Fix a partition $\lambda$ of size $k$ and let $\mathcal{P}_{\preceq}$ be the poset whose elements are the boxes of the Young diagram $[\lambda]$ ordered by (the transitive closure of) $(a,b) \preceq (a+1,b)$ and $(a,b) \preceq (a,b+1)$. Let $\gamma_a$ be the sum of the $a$ largest parts of $\lambda$, and define a labelling $L : [\lambda] \rightarrow \{1,\ldots, k\}$ so that the boxes in row $a$ of the Young diagram $[\lambda]$ are labelled $\gamma_{a-1}+1,\ldots, \gamma_a$. (This labelling was seen for $\lambda=(3,2)$ in the example at the end of the previous part.) Since this labelling is all-natural, the $(\mathcal{P}_\preceq, L)$-partitions are precisely the reverse plane partitions of shape $\lambda$. By the restated Fundamental Lemma and the following remark we get

$\displaystyle \sum_{n=0}^\infty |\mathrm{RPP}_\lambda(n)|q^n = \frac{\sum_{\pi} q^{\mathrm{comaj}(\pi)}}{(1-q) \ldots (1-q^k)}$

where the sum is over all permutations $\pi$ such that the linear order defined by $\pi$ refines $\preceq$. Equivalently, as seen in Part 3 and the final example, the label $L(a,b)$ appears to the left of both $L(a+1,b)$ and $L(a,b+1)$ in the one-line form of $\pi$ (when the boxes exist). This says that $\pi^{-1}_{L(a,b)} < \pi^{-1}_{L(a+1,b)}$ and $\pi^{-1}_{L(a,b)} < \pi^{-1}_{L(a,b+1)}$. Therefore, when we put $\pi^{-1}_i$ in the box of $[\lambda]$ with label $i$, we get a standard tableau. Moreover, $\pi$ has a descent in position $i$ if and only if $i$ appears to the right of $i+1$ in the one-line form of $\pi^{-1}$. Therefore defining $\mathrm{comaj}(t) = \sum_i (k-i)$, where the sum is over all $i$ appearing strictly below $i+1$ in $t$, we have $\mathrm{comaj}(\pi) = \mathrm{comaj}(t)$. (Warning: this is the definition from page 4 of this paper of Krattenthaler, and is clearly convenient here; however it does not agree with the definition on page 364 of Stanley Enumerative Combinatorics II.) We conclude that

$\displaystyle \sum_{n=0}^\infty |\mathrm{RPP}_\lambda(n)|q^n = \frac{\sum_{t \in \mathrm{SYT}(\lambda)} q^{\mathrm{comaj}(t)}}{(1-q)\ldots (1-q^k)}$

where $\mathrm{SYT}(\lambda)$ is the set of standard tableaux of shape $\lambda$.

For example we saw at the end of the previous part that the refinements of $\preceq$ when $\lambda = (3,2)$ correspond to the permutations

$12345, 12435, 12453, 14235, 14253.$

Their comajor indices are $0$, $2$, $1$, $3$, $4$, their inverses are

$12345, 12435, 12534, 13425, 13524$

and the corresponding standard tableaux, obtained by putting $\pi^{-1}_i$ in the box of $[(3,2)]$ labelled $i$ are

Here the final tableau has $3$ strictly above $2$ and $5$ strictly above $4$, so its comajor index is $(5-2) + (5-4) = 4$.
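The correspondence $\mathrm{comaj}(\pi) = \mathrm{comaj}(t)$ can be checked for the five permutations above (illustrative Python; the row-by-row labelling is as in the text):

```python
def inverse(pi):
    inv = [0] * len(pi)
    for pos, val in enumerate(pi, start=1):
        inv[val - 1] = pos
    return tuple(inv)

def comaj_perm(pi):
    k = len(pi)
    return sum(k - i for i in range(1, k) if pi[i - 1] > pi[i])

def comaj_tableau(tab, k):
    """Sum of (k - i) over entries i lying in a strictly lower row than i + 1."""
    row = {entry: a for (a, b), entry in tab.items()}
    return sum(k - i for i in range(1, k) if row[i] > row[i + 1])

shape = (3, 2)
boxes = [(a, b) for a, r in enumerate(shape) for b in range(r)]
label = {box: i + 1 for i, box in enumerate(boxes)}   # row-by-row labelling
k = len(boxes)

results = []
for pi in [(1,2,3,4,5), (1,2,4,3,5), (1,2,4,5,3), (1,4,2,3,5), (1,4,2,5,3)]:
    inv = inverse(pi)
    tab = {box: inv[label[box] - 1] for box in boxes}  # pi^{-1}_i in the box labelled i
    results.append((comaj_perm(pi), comaj_tableau(tab, k)))
print(results)  # five equal pairs: (0, 0), (2, 2), (1, 1), (3, 3), (4, 4)
```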

#### Hillman–Grassl algorithm

Since this post is already rather long, I will refer to Section 7.22 of Stanley Enumerative Combinatorics II or, for a more detailed account that gives the details of how to invert the algorithm, Section 4.2 of Sagan The symmetric group: representations, combinatorial algorithms and symmetric functions. For our purposes, all we need is the remarkable corollary that

$\displaystyle \sum_{n=0}^\infty |\mathrm{RPP}_\lambda(n)|q^n = \frac{1}{\prod_{(a,b)\in[\lambda]} (1-q^{h_{(a,b)}})}$

where $h_{(a,b)}$ is the hook-length of the box $(a,b) \in [\lambda]$. We saw the special cases for the partitions $(2,1)$ and $(3,2)$ above.

This formula can also be derived by letting the number of variables tend to infinity in Stanley’s Hook Content Formula: see Theorem 7.21.2 in Stanley’s book, or Section 2.6 in this joint paper with Rowena Paget, where we give the representation theoretic context.

#### Proof of the Hook Formula

Combining the results of the two previous subsections we get

$\displaystyle \frac{1}{\prod_{(a,b)\in[\lambda]} (1-q^{h_{(a,b)}})} = \frac{\sum_{t \in \mathrm{SYT}(\lambda)} q^{\mathrm{comaj}(t)}}{(1-q)\ldots (1-q^k)}.$

Equivalently, using the quantum integer notation $[r]_q = (1-q^r)/(1-q)$ and $[k]!_q = [k]_q \ldots [1]_q$, we have

$\displaystyle \sum_{t \in \mathrm{SYT}(\lambda)} q^{\mathrm{comaj}(t)} = \frac{[k]!_q}{\prod_{(a,b) \in [\lambda]} [h_{(a,b)}]_q}.$

This is the $q$-analogue of the Hook Formula; the ordinary version is obtained by setting $q=1$ to get

$\displaystyle |\mathrm{SYT}(\lambda)| = \frac{k!}{\prod_{(a,b) \in [\lambda]} h_{(a,b)}}.$
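As a final check, the $q=1$ Hook Formula can be compared against a brute-force count of standard Young tableaux, viewed as linear extensions of the box poset; a Python sketch (mine, not from the post):

```python
from itertools import permutations
from math import factorial, prod

def syt_count(shape):
    """Brute force: standard Young tableaux are the linear extensions of the box poset."""
    boxes = [(a, b) for a, row in enumerate(shape) for b in range(row)]
    def ok(order):
        pos = {box: i for i, box in enumerate(order)}
        return all(pos.get((a - 1, b), -1) < pos[(a, b)] and
                   pos.get((a, b - 1), -1) < pos[(a, b)]
                   for (a, b) in order)
    return sum(1 for order in permutations(boxes) if ok(order))

def hook_formula(shape):
    # hook of box (a, b) = arm + leg + 1 = (row length - b) + (column length - a) - 1
    cols = [sum(1 for row in shape if row > b) for b in range(shape[0])]
    hooks = prod((shape[a] - b) + (cols[b] - a) - 1
                 for a, row in enumerate(shape) for b in range(row))
    return factorial(sum(shape)) // hooks

for shape in [(2, 1), (3, 2), (4, 2, 1)]:
    print(shape, syt_count(shape), hook_formula(shape))
# the two counts agree: 2, 5 and 35 respectively
```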

## What makes a good/bad lecture?

December 30, 2019

In the opinion of 45 Royal Holloway 2nd year students the answers, taking only those mentioned at least three times, are:

#### What makes a good lecture?

• Engaging/enthusiastic lecturer (22)
• Interactive (8)
• Clear voice (7)
• Eye contact with audience (5)
• Checking for understanding (4)
• Clear (4)
• Clear handwriting (4)
• Jokes/humour (4)
• Seems interested in what they’re saying (4)
• Well prepared (4)
• Examples (3)
• Sound excited/animated (3)
• High quality notes (3)

#### What makes a bad lecture?

• Too quiet (18)