Parallelism in Haskell

May 1, 2022

The purpose of this post is to give the simplest way I’ve found to get parallel execution to work in ghc 8.10.1.

The unhelpful cabal

I’ll begin by getting rid of the trivial difficulties. I’m obliged to use Stack (and the related cabal system), as it’s the only way I can find to get ghc onto recent versions of Mac OS X. I am unable to make stack work in any simple way. Here is the complicated way I’ve settled on. For each project:

  1. Make a directory with all .hs source files.
  2. Create a .cabal file as the example below.
  3. Run stack init.
  4. Edit stack.yaml so that the resolver field reads resolver: lts-18.3.
  5. Run stack ghci
  6. Build with stack build
  7. Run executable with stack exec Main <arguments>

For (1) use a separate Main.hs module declaring main. Do not expect the ghc option -main-is to work. A suitable .cabal file for (2) is

name:               ParTest
version:            0.0.0.0
cabal-version:      >= 1.22
build-type:         Simple

executable Main
  main-is:          Main.hs
  other-modules:    ParTest
  build-depends:    base >= 4.14.1.0, array, deepseq, parallel
  default-language: Haskell98
  ghc-options:      -threaded -O2 -with-rtsopts=-N

Note that the build-depends field is separate from any import declarations in the .hs files and uses a different naming convention. For instance the package parallel above corresponds to import Control.Parallel and import Control.Parallel.Strategies in the module file. Fortunately, if you omit a required package from build-depends then there is a very helpful error message that includes the package name you need. The purpose of (4) is to prevent stack from downloading a more recent version of ghc than it supports: ‘Stack has not been tested with GHC versions above 8.10, and using 9.0.2, this may fail’ — indeed, fail it does. (Thanks to stack’s insistence on downloading an entirely fresh ghc for each project, I suspect I have now downloaded over 30Gb worth of copies of ghc trying to chase down this glitch.) At least once I’ve found that (6) fails unless one first does (5): I have no idea why.

More positively, while I’ve had no success at all in using --ghc-options="-threaded" to pass options to ghc, the ghc-options field in the cabal file automates all of this, with the relevant run-time option included to boot. And I will concede that despite all the fuss of using cabal, it does in my experience avoid the dreaded ‘dependency hell’.

Note that ghc has to be explicitly told to compile threaded code. If one omits -threaded then par and the related constructs from Control.Parallel appear to have no effect. (When testing this, I discovered that it’s often essential to run stack clean after editing the .cabal file to make the changes ‘stick’.) This behaviour of ghc seems unnecessarily unhelpful to me: why can’t the compiler issue a warning when it sees par without threading turned on?

Benchmarking with lazy evaluation

A recurring issue with Haskell benchmarks is that lazy evaluation means that it’s very easy to write code that executes quickly, but only because it doesn’t do what strict imperative languages such as C might lead one to expect. For a simple example, suppose expensive :: Integer -> Integer is a function such that expensive k takes time \mathrm{O}(2^k) to compute. We can build a list of the first n values of expensive using

testExpensive n = length [expensive k | k <- [1..n]]

Irrespective of how expensive is defined, evaluating testExpensive n at the Haskell prompt takes time \mathrm{O}(n). The reason is that under lazy evaluation, only the existence of each entry of the list has to be established; each expensive k sits within the list as an unevaluated thunk. This can be seen interactively using the very useful :sprint command in ghci. Try for instance

let xs = [expensive k | k <- [1..n]]

for the expensive function of your choice (for instance collatzSum below), followed by :sprint xs, length xs, :sprint xs, sum xs, :sprint xs.

Warning: if you do this interactively, you must avoid giving xs a polymorphic type, as will probably happen if you omit all type annotations. For instance if xs has type Num a => [a] then each evaluation will refer to a separate copy of xs. Thus with let xs = map (\x -> x + 1) [1..10] you’ll always get _ from :sprint xs, whereas let xs = map (\x -> x + 1) [1..10] :: [Int] will work as expected.
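This thunk behaviour can be demonstrated in a compiled program too. The sketch below (my own illustration, not from the original post) shows that length succeeds on a list of undefined elements, while sum, which demands the elements themselves, fails:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  let xs = [undefined, undefined, undefined] :: [Integer]
  -- length only walks the spine of the list, so the element thunks
  -- are never evaluated and this safely prints 3
  print (length xs)
  -- sum demands the elements themselves, so it hits undefined
  r <- try (evaluate (sum xs)) :: IO (Either SomeException Integer)
  putStrLn (case r of
    Left _  -> "sum forced a thunk and failed"
    Right n -> show n)
```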

An expensive function the compiler can’t magic away

Open mathematical problems are a rich source of such functions. Here I’ll go for the Collatz conjecture, of which Paul Erdős said ‘Mathematics is not yet ripe enough for such questions’; despite an astonishing partial result due to Terence Tao, his assessment still stands today. Let us define:

collatzList :: Integer -> [Integer] 
collatzList 0 = error "collatzList: 0"
collatzList 1 = []
collatzList n | n `mod` 2 == 0 = n : collatzList (n `div` 2)
              | otherwise      = n : collatzList (3 * n + 1)
          
collatzSum :: Integer -> Integer
collatzSum n = sum $ collatzList n

While ghc is a deeply impressive piece of software, I think we can safely assume that it does not have, deeply embedded in its internals, an effective proof of the Collatz conjecture that gives a short-circuited evaluation of collatzSum. We can therefore take collatzSum as the expensive function we require.
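As a quick sanity check of these definitions, wrapped into a standalone module (the expected values in the comments are computed by hand, not taken from the post):

```haskell
collatzList :: Integer -> [Integer]
collatzList 0 = error "collatzList: 0"
collatzList 1 = []
collatzList n | n `mod` 2 == 0 = n : collatzList (n `div` 2)
              | otherwise      = n : collatzList (3 * n + 1)

collatzSum :: Integer -> Integer
collatzSum n = sum $ collatzList n

main :: IO ()
main = do
  -- the Collatz trajectory of 6 stops just before reaching 1
  print (collatzList 6)  -- [6,3,10,5,16,8,4,2]
  print (collatzSum 6)   -- 54
```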

Parallelising the Collatz sum of sums

Evaluating sum [collatzSum k | k <- [1..1000000]], using the module Main.hs at the end of this post, takes about 9.5 seconds on my computer. The code below parallelises this computation by splitting the list in two, and evaluating the sum of each half in parallel. Its running time is about 5 seconds.

collatzSumOfSumsP :: Integer -> Integer
collatzSumOfSumsP r = sA `par` (sB `par` sA + sB)
    where sA = sum [collatzSum n | n <- [1..r], n `mod` 2 == 0]
          sB = sum [collatzSum n | n <- [1..r], n `mod` 2 == 1]

Here par is a Haskell function with the type a -> b -> b. The unusual type is a clue that it breaks the normal rules of evaluation. (Indeed, the reasoning used in Wadler’s remarkable paper Theorems for free! shows that the only strict function with this type must discard the first variable and return the second unmodified.) The effect of par is to evaluate the first argument to weak head normal form (WHNF), and then, in parallel, but otherwise following the usual rules of Haskell evaluation, evaluate the second argument. (Compare seq :: a -> b -> b which does exactly the same, but in series.) In particular, since sA is defined in the where clause, it is evaluated exactly once, in parallel with sB.

In fact the simpler definition collatzSumOfSumsP r = sA `par` sA + sB also works: in theory this sparks sA and then computes sA + sB, so the main thread could hang waiting for sA rather than getting on with sB, but in practice the run-time system is smart enough to avoid this ‘not-parallel’ trap.

The variation below evaluates the lists in parallel and then takes the sum. Its running time is (up to random noise) the same as collatzSumOfSumsP. For rpar import Control.Parallel.Strategies and for force import Control.DeepSeq.

interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = ys 
interleave (x : xs) (y : ys) = x : y : interleave xs ys 

parallelInterleave :: (NFData a) => [a] -> [a] -> [a]
parallelInterleave xs ys =
    runEval $ do 
        xsP <- rpar (force xs)
        ysP <- rpar (force ys)
        return $ interleave xsP ysP

collatzSumOfSumsPL :: Integer -> Integer
collatzSumOfSumsPL r = sum $ parallelInterleave ssA ssB
    where ssA = [collatzSum n | n <- [1..r], n `mod` 2 == 0]
          ssB = [collatzSum n | n <- [1..r], n `mod` 2 == 1]

This version is more typical of the kind of problem I really want to parallelise, in which a huge collection of candidate solutions is generated, and then some further computation is done on this collection to decide which solutions work.
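The pure interleave driving all of this is easy to sanity-check on its own (example values mine):

```haskell
interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = ys
interleave (x : xs) (y : ys) = x : y : interleave xs ys

main :: IO ()
main = do
  -- the even- and odd-indexed halves recombine in their original order
  print (interleave [1, 3, 5] [2, 4, 6 :: Int])  -- [1,2,3,4,5,6]
  -- once either list runs out, the rest of the other is appended
  print (interleave "ab" "wxyz")                 -- "awbxyz"
```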

Parallel interleave

The engine driving collatzSumOfSumsPL is the function parallelInterleave, which uses rpar from Control.Parallel.Strategies and force from Control.DeepSeq to evaluate the lists in parallel.

The use of force is essential because of a very well-known Haskell ‘gotcha’. Consider for instance let xs = collatzList 10; seq xs "Now xs has been evaluated?!". One can check using :sprint that xs is indeed evaluated, but only to WHNF: the output is xs = 10 : _. A recursive implementation that forces the whole spine of a list is:

forceList :: [a] -> [a]
forceList [] = []
forceList (x : xs) = let xsF = forceList xs in xsF `seq` (x : xsF)  

This forces the spine of the list, though each element is still left as a thunk; to match force one should also seq each element x (for Integer elements, evaluation to WHNF is full evaluation, so this then suffices here). But it’s all fiddly enough that I think it’s best to leave it to the experts who wrote the Control.DeepSeq package and use force as above. (The first version of this post even had a version that accidentally erased the list, while still failing to force non-head terms.)
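The contrast between seq’s WHNF and spine forcing can be seen in a small experiment of my own: seq is perfectly happy with a list whose tail is undefined, while forceList is not.

```haskell
import Control.Exception (SomeException, evaluate, try)

forceList :: [a] -> [a]
forceList [] = []
forceList (x : xs) = let xsF = forceList xs in xsF `seq` (x : xsF)

main :: IO ()
main = do
  let broken = (1 :: Int) : (2 : undefined)
  -- seq stops at the outermost (:) constructor, never reaching the bad tail
  broken `seq` putStrLn "seq: reached WHNF only"
  -- forceList walks the whole spine, so it does hit the undefined tail
  r <- try (evaluate (forceList broken)) :: IO (Either SomeException [Int])
  putStrLn (case r of
    Left _  -> "forceList: forced the spine and failed"
    Right _ -> "forceList: no error")
```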

The problem with collatzSumOfSumsPL is that since it evaluates both the lists ssA = [collatzSum n | n <- [1..r], n `mod` 2 == 0] and ssB = [collatzSum n | n <- [1..r], n `mod` 2 == 1] in full before taking the sum of each list (and then the sum of the sums), it uses memory proportional to the running time of the entire program, rather than memory bounded by the largest value of collatzSum n as in collatzSumOfSumsP. Our use of par and seq has forced us to confront the messy reality that semantically equivalent programs can have very different space requirements.

More on space

The code below computes collatzSumOfSums by using an array, storing the value collatzSum n for each relevant n. Since Haskell arrays are strict in their bounds and spine (though each element is a thunk until demanded), each call collatzSumOfSumsA r has to fully allocate an array of size \mathrm{O}(r).

hugeCollatz :: Integer -> Array Integer Integer
hugeCollatz r = array (1, r) [(i, collatzSum i) | i <- [1..r]]

sumArray :: Array i Integer -> Integer
sumArray arr = sum $ elems arr

collatzSumOfSumsA :: Integer -> Integer
collatzSumOfSumsA r = sumArray $ hugeCollatz r

Using collatzSumOfSumsA as the expensive function in our benchmarks, the extra space required to interleave the strictly evaluated lists before taking the sum can easily be seen using top.
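As a small sanity check (values computed by hand), the array route agrees with the direct sum; this standalone sketch of mine repeats the earlier definitions so it compiles on its own:

```haskell
import Data.Array (Array, array, elems)

collatzList :: Integer -> [Integer]
collatzList 1 = []
collatzList n | even n    = n : collatzList (n `div` 2)
              | otherwise = n : collatzList (3 * n + 1)

collatzSum :: Integer -> Integer
collatzSum = sum . collatzList

hugeCollatz :: Integer -> Array Integer Integer
hugeCollatz r = array (1, r) [(i, collatzSum i) | i <- [1 .. r]]

sumArray :: Array i Integer -> Integer
sumArray = sum . elems

main :: IO ()
main = do
  -- collatzSum 1..4 are 0, 2, 48 and 6, so both routes give 56
  print (sumArray (hugeCollatz 4))
  print (sum [collatzSum n | n <- [1 .. 4]])
```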

collatzSumOfSumOfSums :: Integer -> Integer
collatzSumOfSumOfSums t = sum [collatzSumOfSums r | r <- [1..t]]

collatzSumOfSumOfSumsAPHuge :: Integer -> Integer
collatzSumOfSumOfSumsAPHuge t = sum [sumArray arr | arr <- arrays]
    where arraysA = [hugeCollatz r | r <- [1..t], r `mod` 2 == 0]
          arraysB = [hugeCollatz r | r <- [1..t], r `mod` 2 == 1]
          arrays  = parallelInterleave arraysA arraysB

collatzSumOfSumOfSumsAPSmall :: Integer -> Integer
collatzSumOfSumOfSumsAPSmall t = sum ss
    where ssA = [sumArray $ hugeCollatz r | r <- [1..t], r `mod` 2 == 0]
          ssB = [sumArray $ hugeCollatz r | r <- [1..t], r `mod` 2 == 1]
          ss  = parallelInterleave ssA ssB

On my machine, evaluating these three functions for t = 2500 takes respectively:

  • 14 seconds, memory 20Mb
  • 8 seconds, memory growing to 320Mb
  • 8 seconds, memory 20Mb

The saving in collatzSumOfSumOfSumsAPSmall from summing the arrays as they are produced is shown in the diagram below.

Conclusion

Despite my initial struggles, I think these experiments show that it’s possible to get a substantial gain from parallelism in Haskell without any very expert knowledge or recondite tricks. But to get the full benefit one has to parallelise all the code, not just the first stage in the pipeline. This is of course the more complicated option: my hope for my intended applications is that functions such as rdeepseq can automate the plumbing.

Further reading

I discovered the Eval monad used by rpar :: Strategy a (equivalently, rpar :: a -> Eval a) from Chapter 2 of Parallel and Concurrent Programming in Haskell by Simon Marlow. Chapter 3 goes into much more detail. See also Chapter 24 of Real World Haskell by Bryan O’Sullivan, Don Stewart, and John Goerzen.

Appendix: Main module

For the record here is the Main.hs module I used to get these timings.

module Main where

import System.Environment

import ParTest

-- Tests with r = 1000000
testSumOfSums :: IO ()
testSumOfSums =
    do putStrLn "testSumOfSums"
       rStr : _ <- getArgs
       let r = read rStr :: Integer
       -- putStrLn $ show $ collatzSumOfSums r   -- 316% cpu, 9.582 total
       -- putStrLn $ show $ collatzSumOfSumsP r  -- 304% cpu, 4.962 total
       putStrLn $ show $ collatzSumOfSumsPL r    -- 342% cpu, 4.998 total

-- Tests with t = 2500
testSumOfSumOfSums :: IO ()
testSumOfSumOfSums =
    do tStr : _ <- getArgs
       let t = read tStr :: Integer
       -- putStrLn $ show $ collatzSumOfSumOfSums t       -- 312% cpu 13.837 total, memory 20Mb
       -- putStrLn $ show $ collatzSumOfSumOfSumsAPHuge t -- 435% cpu 7.902 total, memory to 320Mb
       putStrLn $ show $ collatzSumOfSumOfSumsAPSmall t   -- 377% cpu 7.077 total, memory 20Mb

main :: IO ()
main = testSumOfSums
-- main = testSumOfSumOfSums


Q is for quantum

October 24, 2021

The purpose of this post is to serve partly as a review of Terry Rudolph’s book Q is for quantum and partly as a record of my thoughts after a few months of reading about quantum computation. The first part of Rudolph’s book may be downloaded for free from his website.

Part 1: Q-computing

The PETE box and the Stern–Gerlach experiment

Q is for quantum begins with a version of the Stern–Gerlach experiment. We are asked to imagine that black and white balls are sent through a box called ‘PETE’. (The afterword makes clear the name ‘PETE’ comes from Rudolph’s collaborator Pete Shadbolt.) After a black ball goes through a PETE box, it is observed to be equally likely to be black and white. The same holds for white balls.

No physical experiment can distinguish the black balls that come out from the black balls that went in, or the white balls that come out from fresh white balls from the supply. For example, if the white balls that come out are discarded, and the black balls that come out are passed through a second PETE box, then again the output ball is equally likely to be black and white.

The surprise comes when PETE boxes are connected so that a white ball put into the first one passes immediately into the second. Now the output is always white.

Naturally we are suspicious that, despite our earlier failure to distinguish balls output from PETE boxes from fresh balls, the first PETE box has somehow ‘tagged’ the input ball, so that when it goes into the second box, the second PETE box knows to output a white ball. When we test this suspicion by observing the output of the first PETE box, we find (as expected) that the output of the first box is 50/50, but so is the output of the second. Somehow observation has changed the outcome of the experiment.

This should already seem a little surprising. But if we accept that observed balls coming out of PETE are experimentally indistinguishable from fresh balls then, Rudolph argues, we have to make a more radical change to our thinking: there is no way to interpret the state of an unobserved ball midway between two PETE boxes as either black or white. As Rudolph says, ‘We are forced to conclude that the logical notion of “or” has failed us’. This quickly leads Rudolph to introduce an informal idea of superposition and his central concept of a misty state. For example, the state | \psi\rangle shown above is drawn as the diagram below.

Rudolph then makes the bold step of supposing that when a black ball, represented by | 1\rangle is put through PETE, the output is \frac{1}{\sqrt{2}}( |0\rangle - |1\rangle); of course this is drawn in his book as another misty state, this time with a minus sign attached to the black ball. Does this minus sign contradict the experimental indistinguishability of output balls? I’ll quote Rudolph’s explanation of this point since it is so clear, even though (alas) it contains the one typo I found in the entire book (page 19):

But it isn’t a “physically different” type of black ball; if we looked at the ball at this stage we would just see it as randomly either white or black. No matter what we do, we won’t be able to see anything to tell us that when it we see it black [sic] it is actually “negative-black”.

In the mathematical formalism, this is the unobservability of a global phase (a consequence of the Born rule for measurement). Thus in a sense the PETE box does tag the black balls, but not in a way we can observe. The behaviour of a white ball sent into two PETE boxes in immediate succession is easily explained using superposition. The mathematical calculation

\begin{aligned} |0\rangle &\mapsto \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle) \\ &\mapsto \frac{1}{\sqrt{2}} \bigl( \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle) + \frac{1}{\sqrt{2}} (|0\rangle - |1\rangle) \bigr) \\ &= |0\rangle \end{aligned}

is not significantly shorter than Rudolph’s equivalent demonstration using misty diagrams on page 20. His book shows impressive economy by explaining many of the key ideas in quantum theory and computation using just these misty diagrams to compute with the PETE box and certain carefully chosen initial states.

Indeed, Rudolph has already shown one fundamental idea: if we accept that the misty state drawn above is a complete description (much more on this later) of a ball leaving a PETE box, then we have to give up on determinism. Instead the universe is fundamentally random. This was difficult for the early architects of quantum theory to accept, since it contradicts the idea that all physical properties have a definite value. Things are perhaps easier for us now: on the quantum side we are used to the uncertainty principle, and I’d argue that the complexity of modern life, and our relentless exposure to probabilistic and statistical reasoning, makes it easier for us to accept that randomness might be ‘hard-wired’ into the laws of physics. We return to these subtle points after the discussion of Part 2 of Rudolph’s book, where we follow Bell’s argument in refuting the ‘local hidden variable’ theory that, even though we cannot observe it for ourselves, PETE boxes are deterministic, and emit balls tagged in such a way that the laws of physics conspire to agree with the predictions from the misty state theory.

Translating from misty diagrams to the usual bra/ket formalism is of course routine. As one might hope, this infusion of mathematics has some advantages. For instance, if we accept the Born rule that measuring \alpha|0\rangle + \beta |1\rangle in the orthogonal basis |0\rangle, |1\rangle gives |0\rangle with probability |\alpha|^2 and |1\rangle with probability |\beta|^2, then conservation of probability requires that this state evolves by unitary transformations in \mathrm{U}_2(\mathbb{C}). Rudolph’s ‘bold’ step is now forced: once we have

H|0\rangle = \frac{1}{\sqrt{2}} (|0\rangle + |1 \rangle),

unitary dynamics requires that, up to a phase,

H |1 \rangle = \frac{1}{\sqrt{2}} (|0 \rangle - |1 \rangle).

Thus, as my notation in the diagrams above suggests, the PETE box is the Hadamard gate

H = \frac{1}{\sqrt{2}} \left( \begin{matrix} 1 & 1 \\ 1 & -1 \end{matrix} \right).

The Stern–Gerlach experiment shows further quantum effects which are not captured by Rudolph’s thought experiment. In the original experiment, silver atoms were sent through a magnetic field, strongest at the top of the apparatus and weakest at the bottom. The atomic number of silver is

47 = 2 + 8 + 18 + 18 + 1,

and correspondingly there is a single electron in the 5s subshell. The behaviour of the atom in the magnetic field in the experiment is determined by the spin of this electron. After passing through the magnetic field, two beams of atoms are observed: one deflected upwards and one deflected downwards. The amount of deflection is constant, showing the quantisation of the spin of this single electron.

Rudolph’s thought experiment corresponds to an extended version of this experiment, in which two different fields are used. Interpreting | 0 \rangle as ‘spin up’, the diagram below shows atoms, prepared in the spin-up state by a vertical field, being deflected first by a horizontal magnetic field and then a second horizontal field in the combiner (this field has the reverse gradient to the first horizontal field), and finally a third vertical magnetic field identical to the first one.

If the atoms are observed in the middle part of the apparatus (shown by a blue box above), or if one of the ‘mirrors’ indicated by diagonal lines is replaced with a barrier, then the output is 50%/50% (red text above). This is interpreted on the Wikipedia page as

… the measurement of the angular momentum on the x direction destroys the previous determination of the angular momentum in the z direction.

But if the atoms are not observed in the middle, then the output from the final vertical field is 100% spin-up, as for the output from the first vertical field. This holds even if the blue part of the apparatus, in which the horizontal field polarises electrons horizontally into two different beams, sends the atoms on a round trip of thousands of kilometres. As long as they meet up at the combiner (without observation at any stage), the same interference occurs, and the output from the final vertical field is always spin-up. Since Rudolph’s thought experiment has no analogue of horizontal polarisation, this feature is not captured. But Rudolph has already made a convincing (and entirely honest) case that quantum phenomena cannot be understood by classical physics or conventional probability.

For a detailed account of the physics of the Stern–Gerlach experiment I found these notes useful. See Figure 6.9 for the experiment shown schematically above. Also I highly recommend this MIT lecture by Allan Adams for an entertaining and instructive treatment of the Stern–Gerlach experiment that has some features in common with Rudolph’s presentation, for instance, the use of colour to replace the spin-up and spin-down states |0\rangle, |1\rangle; he also brings in ‘hardness’ to replace horizontal polarisation and the states |+\rangle, |-\rangle seen in the blue part of the diagram above.

Bank vaults and the Deutsch–Jozsa algorithm

Part 1 ends with an illustration of quantum computing. We are asked to imagine that thieves have broken into a bank vault, in which each room is stocked with eight golden bars, numbered from 0 up to 7. In some rooms, all bars are fake (‘all that glistens is not gold’); in other rooms, there are four fake bars and four real bars. Happily each room has an ‘Archimedes’ machine that, given as input the ball sequence encoding the bar number in binary and one extra ball, flips the colour of the extra ball if and only if the bar is genuine. For example since 6 is 110 in binary, the input balls encoding 6 are black, black, white. We could use either colour for the extra ball. If we use white, then, supposing that bar 6 is genuine, the output is

A|110 \rangle |0 \rangle = |110\rangle |1\rangle.

Moving the result from the Z-basis |0\rangle, |1\rangle to the X-basis, |+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle), |-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) we may rewrite this as

A|110\rangle |-\rangle = -|110 \rangle |-\rangle.

Less happily for them, the thieves only have time to use the Archimedes machine once. The ingenious solution is to apply Archimedes to a superposition of the numbers of all 8 bars, created by three white balls put into three PETE boxes in parallel, with the extra ball in the state |-\rangle = H |1\rangle, created by putting a black ball into a PETE box. The input is therefore |+\rangle |+\rangle |+\rangle |-\rangle and the output is

A|+\rangle |+\rangle |+\rangle |-\rangle = \frac{1}{2\sqrt{2}} \sum_{v=0}^7 (-1)^{f(v)} |v \rangle |-\rangle

where f(v) = 0 if bar v is fake, and f(v) = 1 if bar v is genuine. If all bars are fake then the output is

\frac{1}{2\sqrt{2}} \sum_{v=0}^7 |v \rangle |-\rangle = |+\rangle |+\rangle |+\rangle |-\rangle

in agreement with the input. In this case, measuring in the X basis must give |+\rangle |+\rangle |+\rangle |-\rangle. Suppose instead that the vault has an even split. Since

\langle +++ | \frac{1}{2\sqrt{2}} \sum_{v=0}^7 (-1)^{f(v)} |v \rangle = \frac{1}{8} \sum_{v=0}^7 (-1)^{f(v)} = 0,

measuring in the X basis cannot give |+\rangle|+\rangle|+\rangle |-\rangle. Therefore one use of Archimedes suffices to distinguish the two classes of room.
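The amplitude bookkeeping above can be checked numerically. In this sketch of my own (not part of the post), probAllPlus computes the probability of measuring |+++\rangle on the first three balls, using the fact that after Archimedes the amplitude of |v\rangle is (-1)^{f(v)}/\sqrt{8} and \langle +++|v\rangle = 1/\sqrt{8}:

```haskell
-- probability of measuring |+++> on the first three balls after one
-- use of Archimedes, for a given genuineness function f
probAllPlus :: (Int -> Int) -> Double
probAllPlus f = s * s
  where
    -- amplitude <+++|psi> = (1/8) * sum over v of (-1)^f(v)
    s = sum [ (if f v == 0 then 1 else -1) / 8 | v <- [0 .. 7] ]

main :: IO ()
main = do
  print (probAllPlus (const 0))   -- all bars fake: certain to see |+++>
  print (probAllPlus (`mod` 2))   -- even split: |+++> is impossible
```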

My account here differs from Rudolph’s only in measuring in the X-basis rather than applying a further three PETE boxes to move the first three balls back to the Z-basis, where they can be measured (in the only way possible in his book) by inspecting their colour. This change replaces an operation with a computational flavour (applying three H gates in parallel) with mere linear algebra (just express the output in the X-basis). Similarly I prefer to think of the input as the product state |+\rangle|+\rangle|+\rangle|-\rangle in the X-basis rather than as the result of four H gates applied to the product state |0\rangle|0\rangle|0\rangle|1\rangle in the Z-basis. Of course this reflects my perspective that quantum computation is hard but (after 25 years of it) linear algebra is easy. The change of perspective is shown in the two circuit diagrams below.

Rudolph can’t employ these dodges and instead gives a ‘proof by example’ for the even split case, using an alphabetic notation for the final misty state in the Z-basis:

\{ WWW,WWB,\ldots,-BWB,-BBW,BBB \},

with 64 entries. The reader is invited to check that the coefficient of WWW is zero. With the exception of the final discussion of ‘ontic states’, this is probably the hardest part of the book, as clearly the author was aware, writing ‘Even if you raced through all that (as I would do on a first reading) and didn’t really follow it, that’s OK.’ Besides the jump in difficulty, a possible weakness of this section is the contrived nature of the problem. This however reflects the history of the subject: the Deutsch–Jozsa algorithm was one of the earliest instances of quantum speed-up over classical algorithms. Rudolph illustrates this speed-up by considering a vault with 65536 gold bars.

I think it is a little too tempting to conclude from Rudolph’s exposition that quantum computers are, at least for some tasks, strictly superior to classical computers. This however is unproved: the subtle point is that in any algorithmic specification of the problem, the Boolean function f : \mathbb{F}_2^n \rightarrow \mathbb{F}_2 either has to be treated as an oracle, in which case the Deutsch–Jozsa algorithm only proves the weaker result that relative to an oracle quantum computers are superior to classical computers, or given as part of the input, in which case it is no longer clear that there is no better classical algorithm than evaluating f at 2^{n-1} + 1 of the inputs.

Still, this section is a nice illustration that quantum computers derive their power from interference, and not just (as Rudolph very clearly explains on page 50) from the superposition of exponentially many states. The reversible nature of quantum computing also emerges quite naturally from the white/black ball model and the use of the fourth ball in the input to Archimedes, but Rudolph chooses not to comment on this.

Part 2: Q-entanglement

This part, the highlight of the book for me, has an exceptionally clearly presented and improved version of the EPR-paradox. I’ll begin by giving my version of the original, separate from the more philosophical concerns of Einstein, Podolsky and Rosen, which I discuss in the final part.

The EPR-paradox as it seems now

Let \mathcal{H} be the 2-dimensional Hilbert space modelling a single qubit, with Z-basis \left|0\right>, \left|1\right>. Alice and Bob have shared the Bell state

\left|\beta_{00}\right> = \frac{1}{\sqrt{2}} \bigl( \left|00\right> + \left|11\right>\bigr)

so that Alice has control of the first qubit and Bob the second. If all Alice and Bob can do is measure their qubit in the Z-basis then \left| \beta_{00} \right> behaves exactly like a shared (but unknown) classical bit: for each b \in \{0,1\} when Alice measures b, she can be certain Bob will also measure b, and vice-versa.

To motivate the next step we observe that \left|\beta_{00}\right> is not, as might seem from the formula above, in any way tied to the Z-basis. For instance, it is easy to check that

\left|\beta_{00}\right> = \frac{1}{\sqrt{2}} \bigl( \left|+\right>\otimes\left|+\right> + \left|-\right>\otimes\left|-\right>\bigr),

and in fact this formula holds replacing \left|+\right>, \left|-\right> with any orthonormal basis of \mathcal{H} having real coordinates in the Z-basis (for a complex basis one must conjugate the vectors in the second factor). It follows that Alice and Bob can measure in any pre-agreed real basis \left|u\right>, \left|v\right> of \mathcal{H}, and obtain the same classical bit. This suggests they might also try measuring in different bases. Let Alice’s basis (for the first qubit) be

\left|u_\theta\right> = \cos \theta \left|0\right> + \sin \theta \left|1 \right>, \left|v_\theta\right> = -\sin \theta \left|0 \right> + \cos \theta \left|1\right>

and let Bob’s basis (for the second qubit) be

\left|u_\phi\right> = \cos \phi\left|0\right> + \sin \phi \left|1 \right>, \left|v_\phi\right> = -\sin \phi \left|0 \right> + \cos \phi \left|1\right>.

Thus Alice’s basis \left|u_\theta\right>, \left|v_\theta\right> is obtained by rotating the Z-basis by \theta, and Bob’s basis \left|u_\phi\right>, \left|v_\phi\right> by rotating the Z-basis by \phi. It follows, by considering the inverse rotation, that \left|0\right> = \cos \theta \left|u_\theta\right> - \sin \theta \left| v_\theta\right> and \left|1\right> = \sin \theta \left|u_\theta\right> + \cos \theta \left| v_\theta\right>. There are of course similar expressions for the Z-basis elements in terms of Bob’s basis, obtained by replacing \theta with \phi. Using these we obtain

\begin{aligned}\sqrt{2} \left|\beta_{00}\right> &= \cos\theta \cos \phi \left| u_\theta u_\phi \right> - \sin \theta \cos \phi \left|v_\theta u_\phi\right> \\ & \quad - \cos \theta \sin \phi \left| u_\theta v_\phi \right> + \sin \theta \sin \phi \left| v_\theta v_\phi \right> \\ &\quad + \sin \theta \sin \phi \left| u_\theta u_\phi \right> + \cos \theta \sin \phi \left|v_\theta u_\phi \right> \\& \quad + \sin \theta \cos \phi \left| u_\theta v_\phi \right> + \cos\theta\cos\phi \left| v_\theta v_\phi\right> \\ &= \cos (\theta - \phi) \left| u_\theta u_\phi \right> - \sin (\theta - \phi) \left| v_\theta u_\phi \right> \\ & \quad + \sin (\theta -\phi) \left| u_\theta v_\phi\right> + \cos (\theta - \phi) \left|v_\theta v_\phi\right>.\end{aligned}

For instance if \theta = \phi then \left|\beta_{00}\right> = \frac{1}{\sqrt{2}}\bigl( \left| u_\theta u_\theta \right> + \left| v_\theta v_\theta \right> \bigr), so Alice and Bob either both measure u or both measure v, as we claimed above. In particular, as already remarked, if Alice and Bob agree to measure in the Z-basis, so \theta = \phi = 0, then when Alice measures \left|0\right\rangle she can be sure Bob will also measure \left| 0 \right\rangle. If instead \theta - \phi = \frac{\pi}{2} then the other two summands vanish and Alice measures u_\theta if and only if Bob measures v_\phi. If \theta - \phi = \frac{\pi}{4} then there is no correlation between Alice’s measurement and Bob’s measurement. In particular, if Alice and Bob agree that Alice will measure in the X-basis and Bob will measure in the Z-basis then Bob is equally likely to measure \left|0\right> as \left|1\right>, irrespective of Alice’s measurement.

In general, by the Born rule, if \theta - \phi = \gamma then the probability that Alice and Bob obtain the same result (i.e. either u_\theta for Alice and u_\phi for Bob or v_\theta for Alice and v_\phi for Bob) is \cos^2 \gamma. All this leaves no doubt that Alice’s measurement affects Bob’s. This is the spooky ‘action at a distance’, or non-locality, that so bothered Einstein. We defer further discussion to Part 3, and instead present the main result from Bell’s 1964 paper, written 29 years after Einstein, Podolsky and Rosen.
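The \cos^2 \gamma claim can be verified numerically from the Z-basis expansion of \left|\beta_{00}\right>; here is a sketch of my own (not part of the original post):

```haskell
-- overlap <a(theta)|z>, where a = 0 denotes u_theta and a = 1 denotes v_theta
bra :: Double -> Int -> Int -> Double
bra theta 0 0 = cos theta
bra theta 0 1 = sin theta
bra theta 1 0 = negate (sin theta)
bra theta 1 1 = cos theta
bra _     _ _ = error "bra: bits expected"

-- amplitude for Alice outcome a, Bob outcome b on (|00> + |11>)/sqrt 2
amp :: Double -> Double -> Int -> Int -> Double
amp theta phi a b =
  (bra theta a 0 * bra phi b 0 + bra theta a 1 * bra phi b 1) / sqrt 2

-- probability that Alice and Bob obtain matching outcomes (uu or vv)
probSame :: Double -> Double -> Double
probSame theta phi = amp theta phi 0 0 ^ 2 + amp theta phi 1 1 ^ 2

main :: IO ()
main = do
  let theta = 0.9
      phi   = 0.3
  -- agrees with cos^2 (theta - phi) up to floating-point error
  print (abs (probSame theta phi - cos (theta - phi) ^ 2) < 1e-9)
```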

Bell’s game

We suppose that Alice and Bob are, as is usual in the tortured circumstances of thought experiments in quantum mechanics, held in isolated Faraday cages 20 light years apart. They are made to play the following game. The Gamesmaster chooses two classical bits and tells Alice b_A and Bob b_B. Alice and Bob must each submit further classical bits s_A and s_B. They win if s_A + s_B = b_Ab_B, working modulo 2. That is, if b_Ab_B = 0 then Alice and Bob win if and only if they guess the same, and if b_Ab_B = 1 then Alice and Bob win if and only if they guess differently.

Using a single shared (but unknown) classical bit, or equivalently, by only measuring the Bell state in the same pre-agreed basis, Alice and Bob can win with probability \frac{3}{4}. One simple strategy is to ignore b_A and b_B entirely and simply submit their shared classical bit. In this case s_A+ s_B = 0 and our heroes win if and only if b_Ab_B = 0, which holds with probability \frac{3}{4}. In another strategy Alice submits b_A and Bob submits 0; since b_A = b_Ab_B except when b_A = 1 and b_B = 0, again the winning probability is \frac{3}{4}. (This strategy makes no use of their shared bit.) It is not hard to convince oneself that, provided the Gamesmaster plays optimally, for instance by choosing b_A and b_B independently and uniformly at random, no better strategies are available. This holds even if Alice and Bob are allowed to use their own (independent, classical) sources of randomness before deciding what to submit.
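This classical bound is easy to verify by brute force. The Haskell sketch below (my code, not from any source discussed here) enumerates all sixteen pairs of deterministic strategies; since any randomised strategy is a probabilistic mixture of deterministic ones, the bound of \frac{3}{4} extends to those as well.

```haskell
-- Check that no deterministic classical strategy for Bell's game
-- wins with probability above 3/4.
import Data.Bits (xor)

-- A strategy for one player: the bit submitted given the bit received.
strategies :: [Bool -> Bool]
strategies = [const False, const True, id, not]

-- Winning condition: s_A + s_B = b_A b_B, working modulo 2.
wins :: (Bool -> Bool) -> (Bool -> Bool) -> Bool -> Bool -> Bool
wins fA fB bA bB = (fA bA `xor` fB bB) == (bA && bB)

-- Winning probability against uniformly random (b_A, b_B).
winProb :: (Bool -> Bool) -> (Bool -> Bool) -> Double
winProb fA fB =
  sum [1 | bA <- [False, True], bB <- [False, True], wins fA fB bA bB] / 4

main :: IO ()
main = print (maximum [winProb fA fB | fA <- strategies, fB <- strategies])
-- prints 0.75
```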

By measuring in different bases, Alice and Bob can do strictly better. It is clear that if Alice is told b_A = 0, she wants to choose a basis close to Bob’s, while if Alice is told b_A = 1, then she wants a basis close to Bob’s when b_B = 0, and a basis rotated by nearly a right angle when b_B = 1. Bob is of course in the same position. Thinking along these lines one can show that a good choice of bases for the measurements (each basis depending only on the classical bit each of them is told) is as follows:

  • For Alice: if b_A = 0 take \theta =0, if b_A = 1 take \theta = \pi/4;
  • For Bob: if b_B = 0 take \phi = \frac{\pi}{8}, if b_B = 1 take \phi = -\frac{\pi}{8}.

These bases are shown in the diagram below.

Note that Alice’s basis is the Z-basis \left|0\right>, \left|1\right> when b_A = 0 and the X-basis \left|+\right>, \left|-\right> when b_A = 1. This collection of all four bases is in fact the unique optimal configuration, up to rotations and reflections.

Checking the four cases for (b_A, b_B) shows that if b_Ab_B = 0 then the bases chosen by Alice and Bob differ by an angle of \pm \frac{\pi}{8}. Therefore Alice and Bob measure in nearby bases and submit identical bits with probability \cos^2 \frac{\pi}{8}, winning with probability \cos^2 \frac{\pi}{8} \approx 0.854. If b_Ab_B = 1 then the bases chosen by Alice and Bob differ by an angle of \frac{\pi}{2} - \frac{\pi}{8} = \frac{3\pi}{8}. The probability that Alice and Bob submit identical bits is now \cos^2 (\frac{\pi}{2} - \frac{\pi}{8}) = \sin^2 \frac{\pi}{8} = 1 - \cos^2 \frac{\pi}{8}. Since their aim now is to submit different bits, they win again with probability \cos^2 \frac{\pi}{8} \approx 0.854.
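As a numerical sanity check on these four cases, here is a short Haskell sketch (mine), using only the rule proved above that Alice and Bob obtain the same result with probability \cos^2 (\theta - \phi).

```haskell
-- Average winning probability of the quantum strategy over the
-- four equally likely inputs (b_A, b_B).
aliceAngle, bobAngle :: Bool -> Double
aliceAngle bA = if bA then pi / 4 else 0        -- theta
bobAngle bB = if bB then -pi / 8 else pi / 8    -- phi

-- Win probability on input (b_A, b_B): identical bits win
-- unless b_A b_B = 1, when different bits win.
winProb :: Bool -> Bool -> Double
winProb bA bB =
  let pSame = cos (aliceAngle bA - bobAngle bB) ^ 2
  in if bA && bB then 1 - pSame else pSame

averageWin :: Double
averageWin = sum [winProb bA bB | bA <- [False, True], bB <- [False, True]] / 4

main :: IO ()
main = print averageWin
```

The printed average is \cos^2 \frac{\pi}{8} \approx 0.854, as claimed.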

To give an explicit example, suppose that b_A = 0, so Alice measures in the Z-basis, and b_B = 0 so Bob measures in the Z-basis rotated by \frac{\pi}{8}. The relevant form for \left| \beta_{00} \right\rangle is

\begin{aligned} \left| \beta_{00} \right\rangle & = \frac{1}{\sqrt{2}} \Bigl( \cos \frac{\pi}{8} \left| 0 \right\rangle \left| u_{\pi/8} \right\rangle - \sin \frac{\pi}{8} \left| 0 \right\rangle \left| v_{\pi/8} \right\rangle \\ & \quad\qquad\qquad + \sin \frac{\pi}{8} \left| 1\right\rangle \left| u_{\pi/8} \right\rangle + \cos \frac{\pi}{8} \left| 1 \right\rangle \left| v_{\pi/8} \right\rangle \Bigr) \end{aligned}

and so, as just claimed, the probability that either Alice measures \left| 0 \right\rangle and Bob measures \left| u_{\pi/8} \right\rangle or Alice measures \left| 1 \right\rangle and Bob measures \left| v_{\pi/8} \right\rangle is \frac{1}{2}(\cos^2 \frac{\pi}{8} + \cos^2 \frac{\pi}{8}) = \cos^2 \frac{\pi}{8}.

The Gamesmaster can arrange that Alice and Bob are told b_A and b_B at the same time, and must then reply within a few seconds. (To take into account special relativity, we put Alice at the origin and Bob in a position space-like separated from Alice so that in any reference frame light takes more than the allowed number of seconds to travel from Alice to Bob.) This rules out even one-way communication between Alice and Bob, unless they can employ a particle travelling faster than light. By applying a Lorentz transform one can convert any FTL-communication into a violation of causality. It is inconceivable that Alice’s qubit somehow transmits the result of Alice’s measurement to Bob’s qubit. But still, by choosing their bases carefully, Alice and Bob are able to influence the strength of the correlation (or anti-correlation) between their two measurements to the extent that they can play the game strictly better than classical physics allows. All this follows just from the Born rule for measurement and the simple rule proved above for how the Bell state transforms under two different rotations.

Rudolph’s improved version

To make a convincing demonstration of the improvement from the classical \frac{3}{4} winning probability to \cos^2 \frac{\pi}{8}, Alice and Bob would of course have to play the game multiple times, and accept that they will lose with probability about 15%. For instance, after 100 trials with 85 successes, one would reject the null hypothesis that Alice and Bob are using a classical strategy with confidence specified by the p-value 0.011.
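The quoted p-value can be recomputed exactly as a binomial tail: the sketch below (my code; no statistics library needed) sums the probability of k \ge 85 wins out of 100 at the best classical rate \frac{3}{4}, and should agree with the figure of 0.011 above.

```haskell
-- Exact binomial tail P(X >= 85) for X ~ Bin(100, 3/4),
-- the p-value for rejecting a classical strategy after 85 wins.
choose :: Int -> Int -> Double
choose n k = product [fromIntegral (n - k + i) / fromIntegral i | i <- [1 .. k]]

pValue :: Double
pValue = sum [choose 100 k * 0.75 ^ k * 0.25 ^ (100 - k) | k <- [85 .. 100]]

main :: IO ()
main = print pValue
```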

In Part 2, Rudolph offers an improved version of the EPR-paradox, using a game with three outcomes (win, draw, lose) in which, by using a suitable entangled 2-qubit state, Alice and Bob can sometimes win and guarantee never to lose. Rudolph’s ingenious construction means that only measurements in the Z– and X-basis are required, so (making the same shift from measurement-in-arbitrary-basis to computation-by-unitary-operator-and-Z-basis measurement remarked on earlier) Alice and Bob can work the magic using only black and white balls and the PETE box. Rudolph sets things up very carefully, in the entertaining context of Randi’s well-known million dollar prize for an experimentally testable demonstration of telepathy (or indeed, any psychic phenomenon). In particular, my claim above that no classical strategy can beat \frac{3}{4} in the EPR-game becomes the claim that any classical strategy in Rudolph’s game that has a non-zero winning probability also has a non-zero losing probability: this is convincingly shown in the narrative.

Part 2 ends with a discussion of just what is paradoxical about the games. Rudolph argues that a causal explanation of the nonlocal correlation in misty states is inconsistent with our understanding that causes precede effects, and that information cannot be transmitted faster than light. He remarks ‘physicists seriously consider other disturbing options’, for instance the super-deterministic theory where the bits b_A and b_B and Alice and Bob’s strategy (and even, whether or not Alice and Bob will follow their strategy as instructed) are already ‘known’ to the entangled qubits. Another possibility is that Rudolph’s ‘misty state’, modelling the two entangled qubits, is in fact ‘some kind of real physical object like a radio wave or a bowl of soup, and when we separate the psychics the mist is stretched between them’. (Rudolph is very clear that until this point, the misty states have been a mathematical abstraction, just like my calculations with unitary matrices above.)

Part 3: Q-reality

It is both a strength and a weakness of Q is for quantum that at this point it is very tempting to throw up one’s hands and say ‘okay, I’m convinced misty states are the best approximation to reality there is, non-locality and paradoxical behaviour accepted’. I think this is particularly tempting for mathematicians —  surely a theory of such beauty and explanatory power has to be ‘right’? As a partial corrective to this, let me break from following Rudolph’s book, and instead consider the EPR-paradox in its historical context. We follow this with a discussion of quantum teleportation, before rejoining Rudolph’s final chapter.

Rocky states and the EPR-paradox as presented by the authors

At the end of Part 2, Rudolph introduces the idea of a ‘rocky state’, namely a classical probability distribution on physical states. He shows that the entanglement in the Bell state \left|\beta_{00}\right\rangle = \frac{1}{\sqrt{2}}\bigl( \left|00\right\rangle + \left|11\right\rangle \bigr) cannot be explained in this way: in mathematical language, a classical probability distribution that gives some probability to the observations \left| 0\right\rangle and \left| 1 \right\rangle for both Alice and Bob must give a non-zero probability to \left| 0 \right\rangle \left|1 \right\rangle, but this has coefficient zero in \left|\beta_{00}\right\rangle. This supports his conclusion that quantum theory (rather than classical probability) is required to explain the EPR-paradox.

Whatever the status of misty states, rocky states are certainly epistemic: on page 103 early in Part 3, Rudolph writes ‘the rocky states are unquestionably representing our knowledge’. For example, if I borrow a coin from Malcolm and flip it on a table, and see roughly how far it is from the edge, my rocky state might be as shown left below. Malcolm however knows his coin is double-headed, so his rocky state is as shown on the right.

As Rudolph writes (page 104),

the very fact that two different rocky states can correspond to the same real state indicates that the rocky states themselves are not a physical property of the coin.

So far, so clear, I hope. It is therefore striking that a very similar argument led Einstein, Podolsky and Rosen to conclude that misty states could not be in bijection with real states. To understand this, I think one must first appreciate that the authors took as one of the starting points of their paper that particles had real physical properties that were revealed by observation. Some version of this view, ‘scientific realism’, is implicit in the scientific method, and helps to explain its power. (We used it earlier, when we deduced a fundamental randomness in the laws of physics from the experimental indistinguishability of balls output by the PETE box from fresh balls.) As a working definition, let me define naive realism to mean that an output ball from a PETE box is definitely either black or white — we just don’t know what it is until observation. Equivalently, the X-basis state \left|+\right\rangle is a convenient mathematical representation for a real state that, on observation, turns out to be either \left|0\right\rangle or \left|1\right\rangle. Thus, in naive realism, quantum states are incomplete descriptions of real states, and the real states have deterministic non-random behaviour.

Suppose, as in the exposition of the EPR-paradox in Part 2, that Alice and Bob share the entangled Bell state \left|\beta_{00}\right>, but are now separated by (let us say) 10 light seconds, and that Bob refuses to measure until he hears that Alice has done so. Suppose that Bob always measures in the Z-basis. We saw that if Alice also measures in the Z-basis then Alice and Bob are guaranteed to get the same Z-basis measurement, while if Alice measures in the X-basis, Bob is equally likely to get \left| 0 \right\rangle as \left|1\right\rangle, whatever Alice measures. This agrees with the predictions of the extended version of the Born rule dealing with a measurement on just one qubit in an entangled system. In detail, if Alice measures in the Z-basis

  • and gets \left| 0 \right\rangle then Bob’s qubit is in the state \left| 0\right\rangle and Bob measures \left|0\right\rangle;
  • and gets \left| 1 \right\rangle then Bob’s qubit is in the state \left| 1 \right\rangle and Bob measures \left|1\right\rangle.

If Alice measures in the X-basis

  • and gets \left| + \right\rangle then Bob’s qubit is in the state \left| + \right\rangle = \frac{1}{\sqrt{2}}\bigl( \left|0\right\rangle + \left|1\right\rangle \bigr) and, by the Born rule, Bob is equally likely to measure \left|0\right\rangle as \left|1\right\rangle;
  • and gets \left| - \right\rangle then Bob’s qubit is in the state \left| - \right\rangle = \frac{1}{\sqrt{2}}\bigl( \left|0\right\rangle - \left|1\right\rangle \bigr) and, by the Born rule, Bob is equally likely to measure \left|0\right\rangle as \left|1\right\rangle.

Thus depending on Alice’s measurement, the vector in Hilbert space (equivalently one of Rudolph’s ‘misty states’, or the ‘wavefunction’ in the language of the EPR paper) describing Bob’s qubit may take two different values. Except for changing ‘position’ to Z-basis and ‘momentum’ to X-basis, this is the same situation as on page 779 of the EPR paper, where the authors write

We see therefore that as a consequence of two different measurements performed upon the first system, the second system may be left in states with two different wavefunctions. On the other hand, since at the time of measurement the two systems no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system. This is, of course, merely a statement of what is meant by the absence of an interaction between the two systems. Thus, it is possible to assign two different wavefunctions … to the same reality (the second system after the interaction with the first).

The authors conclude that the wavefunction is, like the rocky state of the flipped coin above, an incomplete description of reality. Repeating Rudolph’s argument from page 104 quoted above, since two different rocky states correspond to the same real state, the rocky state cannot be a physical property of the coin. This is consistent with (but does not imply in its full strength) naive realism.

The expectation of the EPR authors, if I understand the history correctly, was that quantum mechanics would be subsumed by a larger theory — maybe with ‘hidden variables’ — whose states would, everyone could agree, be in bijection with physical reality. Even the term ‘hidden variable’ now seems pejorative, but this idea of a progression of understanding by a sequence of more and more refined theories is of course entirely consistent with the history of physics.

Note the quote makes it very clear that the EPR authors do not regard Alice’s act of measurement as an interaction with Bob’s qubit. This assumption should seem doubtful given the game discussed in Part 2, where we saw that the improvement in the winning chance from \frac{3}{4} to \cos^2 \frac{\pi}{8} depended on Alice and Bob being able to influence the correlation between their measurements of their shared Bell state by careful choice of bases, using only locally available information. Indeed Bell’s 1964 paper concludes that this improvement is inconsistent with a local hidden variable theory:

In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.

This puts us back where we were at the end of Part 2, forced to accept the spooky ‘action at a distance’ effect of measuring. This non-locality was the central problem for Einstein with the theory: on the inability to measure Bob’s qubit in both the Z– and X-bases he wrote ‘es ist mir wurst’ (literally, ‘it is a sausage to me’). I’m happy to agree: I think scientific realism can easily stretch to accommodate measurements whose precision is limited by the uncertainty principle, and even to rule out as simply nonsensical the idea of measuring both the position and momentum of a particle at the same instant. (Like the Z– and X-matrices, position and momentum are conjugate Hermitian operators.) More worrying perhaps, we are left with no better candidates for the ultimate elements of reality than Rudolph’s misty states. Rudolph admits the problems (page 81):

Such a mist would … have to have many physical properties that differ from any other kind of physical stuff we have ever encountered. It would be arbitrarily stretchable and move instantaneously when you whack it at one end, for example. The whole question of how to interpret the mist—as something physically real? as something which is just mathematics in our heads?—is one of the major schisms between physicists.

I would have liked to see it mentioned here that even if Alice’s measurement does, as experimentally verified, instantaneously affect Bob’s qubit, it appears to be impossible to use this effect to transmit information. To be fair, Rudolph does consider this point carefully, but only later in Part 3. This distinction is particularly relevant in the context of quantum teleportation, which I discuss next.

Quantum teleportation

Most accounts of quantum teleportation that I have seen begin with the circuit below.

Alice begins in control of three unentangled qubits. The first is in the state \left|\psi \right\rangle that she wants to teleport to Bob. After applying the Hadamard and CNOT gates, Alice sends the third qubit to Bob. She then applies another CNOT gate, a Hadamard gate and measures her two qubits in the Z-basis. (Why?, you might well ask.) She sends the two classical bits that are the results of her measurements to Bob by a classical channel, marked as double blue lines in the diagram. Finally Bob applies a correction by X and Z gates and, as if by magic, obtains \left|\psi\right\rangle.

I think I would have got the idea rather more quickly if the circuit diagram had been accompanied by the explanations below:

  • The circuit left of the red dashed line prepares the Bell state \left|\beta_{00}\right> on the second and third qubits.
  • The CNOT gate and Hadamard gate on the right of the red dashed line transform the Bell basis to the Z-basis. One example is shown above, another is \left|\beta_{10}\right\rangle = \frac{1}{\sqrt{2}}\bigl( \left|00 \right\rangle - \left| 11 \right \rangle \bigr) which is sent by the CNOT gate to \frac{1}{\sqrt{2}} \bigl( \left|00\right\rangle - \left|10\right\rangle \bigr) = \left|-\right\rangle \otimes \left|0\right\rangle and then by the Hadamard gate to \left|1\right\rangle \otimes \left|0\right\rangle.
  • By the identity \left| \psi\right\rangle \otimes \left|\beta_{00}\right\rangle = \frac{1}{2} \sum_{z=0}^1\sum_{x=0}^1 Z^zX^x \left| \beta_{00} \right\rangle \otimes X^xZ^z \left| \psi \right\rangle
    when Bob applies the corrections from the classical bits sent by Alice, he obtains \left|\psi\right\rangle. (Note there is no ambiguity in X^x Z^z \left| \beta_{00}\right\rangle because X \otimes X and Z \otimes Z stabilise \left|\beta_{00}\right\rangle. This identity follows easily from the calculations on page 83 of Quantum computing: A gentle introduction by E. Rieffel and W. Polak.)
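For anyone who, like me, prefers to double-check such identities numerically, here is a Haskell sketch (my code; real amplitudes only for brevity, with \left|\psi\right\rangle = 0.6\left|0\right\rangle + 0.8\left|1\right\rangle an arbitrary choice). It builds both sides as vectors of length 8, using the normalising factor \frac{1}{2} since each Bell state appears with amplitude \frac{1}{2}, and prints the largest discrepancy.

```haskell
-- Numerical check of the teleportation identity
-- |psi> ⊗ |beta00> = (1/2) Σ_{z,x} (Z^z X^x ⊗ I)|beta00> ⊗ X^x Z^z |psi>.
type Vec = [Double]
type Mat = [[Double]]

kron :: Vec -> Vec -> Vec
kron u v = [a * b | a <- u, b <- v]

app :: Mat -> Vec -> Vec
app m v = [sum (zipWith (*) row v) | row <- m]

iden, xG, zG :: Mat
iden = [[1, 0], [0, 1]]
xG = [[0, 1], [1, 0]]    -- Pauli X
zG = [[1, 0], [0, -1]]   -- Pauli Z

pow :: Mat -> Int -> Mat
pow _ 0 = iden
pow m _ = m

-- Apply a 1-qubit gate to the first qubit of a 2-qubit state.
onFirst :: Mat -> Vec -> Vec
onFirst m [a, b, c, d] =
  let [a', c'] = app m [a, c]
      [b', d'] = app m [b, d]
  in [a', b', c', d']

bell, psi :: Vec
bell = map (/ sqrt 2) [1, 0, 0, 1]   -- |beta00>
psi = [0.6, 0.8]                     -- an arbitrary real state

lhs, rhs :: Vec
lhs = psi `kron` bell
rhs = map (/ 2) (foldr1 (zipWith (+))
  [ onFirst (pow zG z) (onFirst (pow xG x) bell)
      `kron` app (pow xG x) (app (pow zG z) psi)
  | z <- [0, 1], x <- [0, 1] ])

maxErr :: Double
maxErr = maximum (map abs (zipWith (-) lhs rhs))

main :: IO ()
main = print maxErr
```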

Since learning the rudiments of the ZX-calculus, I also find the graphical proof that the circuit behaves as claimed quite satisfying. Note in particular that the Bell state is simply the cup (shown horizontally) on the left of the dashed red line.

The algebraic view of the ‘yank’ that simplifies the ZX-calculus diagram is the identity
\Bigl( \frac{1}{\sqrt{2}} \bigl( \left\langle 00 \right| \!+\! \left\langle 11 \right| \bigr) \otimes \left\langle \phi \right| \Bigr) \Bigl( \left|\psi \right\rangle \otimes \frac{1}{\sqrt{2}} \bigl( \left| 00 \right\rangle \!+\! \left| 11 \right\rangle \bigr) \Bigr) = \frac{1}{2} \left\langle \phi | \psi \right\rangle.
See (36) and (37) in ZX-calculus for the working computer scientist by John van de Wetering for more on this.

It is not hard to convince oneself that Bob cannot learn anything by observing the state Z^z X^x \left|\psi\right\rangle unless he knows at least one of the classical bits z and x sent to him by Alice. For instance, if \left|\psi\right\rangle is known by all to be a Z-basis element \left|b\right> then Bob needs x. (And in this case Alice might as well have sent the bit b to Bob directly.) Thus while we believe that Alice’s measurements instantaneously affect Bob’s qubit, Bob cannot learn any information until light from Alice can reach Bob. This can be made precise using density matrices: the density matrix for Bob’s qubit is the uniform linear combination of the rank 1 density matrices for the four post-measurement states of the third qubit in the quantum teleportation protocol, namely

\frac{1}{4}\Bigl( \left| \psi \right\rangle \left\langle \psi \right| + X\left| \psi \right\rangle \left\langle \psi \right|X + Z\left| \psi \right\rangle \left\langle \psi \right| Z + XZ \left| \psi \right\rangle \left\langle \psi \right| ZX \Bigr)

Setting \left|\psi\right\rangle = \alpha\left|0\right\rangle + \beta\left|1\right\rangle this becomes

\begin{aligned} \frac{1}{4} \Bigl(& \left(\begin{matrix} |\alpha|^2 & \alpha\overline{\beta} \\ \overline{\alpha}\beta & |\beta|^2 \end{matrix}\right) + \left(\begin{matrix} |\beta|^2 & \overline{\alpha}\beta \\ \alpha\overline{\beta} & |\alpha|^2 \end{matrix}\right) + \left(\begin{matrix} |\alpha|^2 & -\alpha\overline{\beta} \\ -\overline{\alpha}\beta & |\beta|^2 \end{matrix}\right) \\ & + \left(\begin{matrix} |\beta|^2 & -\overline{\alpha}\beta \\ -\alpha\overline{\beta} & |\alpha|^2 \end{matrix}\right) \Bigr) = \frac{1}{4} \left(\begin{matrix} 2 & 0 \\ 0 & 2 \end{matrix} \right) = \left( \begin{matrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{matrix} \right).\end{aligned}

The right-hand side is the density matrix for the second qubit in the Bell state \left|\beta_{00}\right\rangle and is maximally uninformative: measurement in any orthonormal basis gives each basis vector with equal probability \frac{1}{2}.
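The density matrix computation can be confirmed in a few lines. In the Haskell sketch below (mine; real amplitudes, so all conjugates may be dropped) the four outer products are averaged for the arbitrary state 0.6\left|0\right\rangle + 0.8\left|1\right\rangle, recovering \frac{1}{2}I.

```haskell
-- Average of the four post-measurement density matrices is I/2.
type Mat = [[Double]]

outer :: [Double] -> [Double] -> Mat
outer u v = [[a * b | b <- v] | a <- u]

addM :: Mat -> Mat -> Mat
addM = zipWith (zipWith (+))

x, z :: [Double] -> [Double]
x [a, b] = [b, a]    -- Pauli X
z [a, b] = [a, -b]   -- Pauli Z

psi :: [Double]
psi = [0.6, 0.8]     -- an arbitrary normalised real state

rho :: Mat
rho = map (map (/ 4))
  (foldr1 addM [outer v v | v <- [psi, x psi, z psi, x (z psi)]])

main :: IO ()
main = print rho
```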

In addition, quantum teleportation appears to be consistent with naive realism as defined above. Let \left|\psi\right\rangle = \alpha\left|0\right\rangle + \beta\left|1\right\rangle and think of \left|\psi\right\rangle as a qubit that is \left|0\right\rangle with probability |\alpha|^2 and is \left|1\right\rangle with probability |\beta|^2. At the end of the protocol, this probability distribution is transferred from Alice’s qubit to Bob’s qubit. This is consistent with the idea that the qubit has a definite, but unknown state, to which this probability distribution refers.

In a classical analogue (which I learned from Prof. Ruediger Schack) this transfer of probability distributions can, and in fact must, be interpreted entirely epistemically. The analogue of the Bell state is two coins in unknown but equal states: imagine Alice tapes coin 2 to coin 3, flips the composite coin, and then without looking at either coin, keeps coin 2, and gives coin 3 to Bob. The analogue of \left|\psi\right\rangle is a classical probability distribution on coin 1. In the protocol, Alice performs a parity measurement by performing another ‘taped-together flip’ on coins 1 and 2. She then looks at them both, and learns whether they are ‘same’ or ‘different’. She reports this to Bob, who then flips coin 3 if and only if Alice reports ‘different’.

Alice’s probability distribution now refers to coin 3: if Alice believed that coin 1 was heads with probability p then she should now believe that Bob’s coin 3 is heads with probability p. To show the epistemic character of this, imagine that coin 1 is prepared by a coin toss performed by Alice’s assistant Charlie, and that Alice does not look at the coin until after the parity measurement. If the coin is fair then Alice’s probability is \frac{1}{2}, but Charlie’s probability is either 0 or 1. In each case, this probability refers to coin 3 at the end of the protocol. Even though p can be any real number in the interval [0,1], it may now seem that no information is teleported, except the single classical bit from Alice’s ‘same’ or ‘different’.
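Since everything in the coin protocol is classical, it can be checked by complete enumeration. In the Haskell sketch below (my encoding: heads is True, and a joint flip of two coins is an XOR with one shared random bit) Bob’s coin 3 always finishes in the original state of coin 1, whatever the random choices; hence whatever probability Alice assigns to coin 1 transfers to coin 3.

```haskell
-- Enumerate the classical coin 'teleportation' protocol.
import Data.Bits (xor)

-- c1: coin 1; s: shared state of coins 2 and 3; r: Alice's joint flip.
finalCoin3 :: Bool -> Bool -> Bool -> Bool
finalCoin3 c1 s r =
  let c1' = c1 `xor` r           -- coins 1 and 2 are flipped together
      c2' = s `xor` r
      different = c1' `xor` c2'  -- Alice reports 'same' or 'different'
  in s `xor` different           -- Bob flips coin 3 iff 'different'

main :: IO ()
main = print (and [finalCoin3 c1 s r == c1 | c1 <- bs, s <- bs, r <- bs])
  where bs = [False, True]
-- prints True
```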

I think this makes it all a little more plausible that quantum states are incomplete descriptions of a deterministic theory, and that while this theory must include instantaneous ‘action at a distance’ effects, it could be that such effects cannot be used to convey information, and so do not (from our human perspective) violate causality. And while we saw in the discussion of the PETE box in Part 1 that the realist point of view that qubits have a definite state —  we just don’t know what it is until observation —  requires hidden variables, the asymmetry between Alice and Charlie seen above, and the epistemic interpretation of the ‘teleportation’, shows that the existence of such hidden information is by no means impossible.

Discussion of Part 3

Rudolph begins this part by explaining that if misty states are real then the behaviour of the PETE box can be explained: for example the X-basis state \left|+\right\rangle = \frac{1}{\sqrt{2}}\bigl(\left|0\right\rangle + \left|1\right\rangle\bigr) models a white ball output by a PETE box; if the next PETE box can ‘see’ this state then it can of course output a white ball in response. (That this happens by linear unitary evolution is nice for mathematicians, but potentially misleading: while linearity means we only have to state the effect of the Hadamard transformation on basis vectors, the behaviour of the PETE box, or rather the Stern–Gerlach magnetic fields, on superpositions is a property of the physical theory.) He also notes that this hypothesis does not require that misty states are a complete description of physical states.

The difficulty with this hypothesis is of course observation: why can PETE boxes observe X-basis states without collapse, while human observation collapses everything we see into the Z-basis? We have seen that this act of observation requires instantaneous ‘action at a distance’ effects. Rudolph mentions attempts to make measurement compatible with unitary dynamics, writing (page 113)

But there are some models which work (at least for the ball type experiments we have considered; making them work for all experiments we can presently do is more tricky), and which soon will be experimentally ruled in or out.

Unhelpfully, no more details are given. He also mentions the ‘many worlds interpretation’, in which measurement splits the universe in two and the observer becomes a gigantic misty state representing a superposition of ‘observer saw black’ and ‘observer saw white’.

Next Rudolph considers the epistemic idea that misty states are ‘features of our knowledge, rather than real states of the world’ (page 115). He gives the argument from the EPR paper that since Alice’s measurement of her qubit in the entangled Bell pair can leave Bob’s qubit in two different misty states, and assuming that these measurements do not affect the physical properties of Bob’s qubit, two different misty states correspond to the same physical state, and so misty states cannot be physical properties. We saw above that the improvement from \frac{3}{4} to \cos^2 \frac{\pi}{8} in the winning probability for Bell’s game makes this assumption very doubtful; Rudolph makes the same point very clearly using his improved version of Bell’s game.

Rudolph goes on to illustrate, as usual in his toy model of black and white balls and PETE boxes, that measurements on an entangled Bell pair cannot be used to send information, by giving a special case of the calculation with density matrices above. The section concludes (page 126)

All this is mainly strange if you consider the mist to be real—if you follow Einstein and deny that the real state of Bob’s ball is changing at all then you should not expect them to be able to communicate in this manner.

But then how would you explain how the psychics won your gold?

The Bell game requires Alice and Bob to share the entangled Bell state; Rudolph’s improved version requires a slightly more complicated entanglement. Rudolph concludes with a demonstration that even separable quantum states show behaviour that is incompatible with naive realism as defined above. (He also paints Einstein as a naive realist, but on my understanding this does not accurately reflect Einstein’s opinions by the end of his life.) Rudolph’s exposition features an entertaining dialogue between Einstein and Pooh Bear. I’ll present it rather more briefly here.

Suppose that Malcolm prepares one of the four states

  1. \left|0\right\rangle \otimes \left|0\right\rangle,
  2. \left|0\right\rangle \otimes \left|+\right\rangle,
  3. \left|+\right\rangle \otimes \left|0\right\rangle,
  4. \left|+\right\rangle \otimes \left|+\right\rangle.

If, as in naive realism, we do not believe that misty states are themselves real, but instead think that \left|+\right\rangle is really either \left|0\right\rangle or \left|1\right\rangle (we just don’t know which until observation), then since any of the four states may give \left|0\right\rangle \otimes \left|0\right\rangle on observation, it is experimentally impossible to learn anything about which state was prepared. This however is false. We calculate that the output of the circuit shown below

is

  1. \frac{1}{\sqrt{2}}\bigl(\left|00\right\rangle + \left| 11 \right\rangle \bigr),
  2. \frac{1}{2}\bigl( \left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle + \left|11\right\rangle \bigr),
  3. \frac{1}{2}\bigl( \left|00\right\rangle + \left|01 \right\rangle - \left|10 \right\rangle + \left|11\right\rangle \bigr),
  4. \frac{1}{\sqrt{2}}\bigl( \left|00 \right\rangle - \left|01 \right\rangle \bigr).

Now consider what happens when we measure the second qubit in the Z-basis.

  • If it is measured as \left|1\right\rangle we measure the first qubit in the Z-basis; if the result of both measurements is \left|01 \right\rangle then the initial state was not (1), if it is \left|11 \right\rangle then the initial state was not (4).
  • If instead the second qubit is measured as \left|0\right\rangle then we apply a Hadamard gate to the first qubit, giving (1) \left|00\right\rangle \mapsto \left|+0\right\rangle, (2) \left|+0\right\rangle \mapsto \left|00\right\rangle, (3) \left|-0\right\rangle \mapsto \left|10\right\rangle, (4) \left|00\right\rangle \mapsto \left|+0\right\rangle, and then measure the first qubit in the Z-basis. Now from cases (2) and (3) we see that if the result of both measurements is \left| 00\right\rangle the initial state was not (3), and if it is \left|10\right\rangle the initial state was not (2).

Therefore in all cases something can be deduced about the initial state.
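These deductions can be verified by direct computation. The Haskell sketch below (my code) stores the four output states listed above, follows the adaptive measurement procedure, and prints the probability of each measurement record for the state it is claimed to rule out; all four probabilities are zero.

```haskell
type Vec = [Double]  -- amplitudes for |00>, |01>, |10>, |11>

r2 :: Double
r2 = 1 / sqrt 2

-- The four circuit outputs, in order (1)-(4).
states :: [Vec]
states =
  [ [r2, 0, 0, r2]
  , [0.5, 0.5, 0.5, 0.5]
  , [0.5, 0.5, -0.5, 0.5]
  , [r2, -r2, 0, 0]
  ]

-- Probability of measuring the second qubit as b2 and then the first
-- qubit as b1, with a Hadamard applied to the first qubit when b2 = 0.
prob :: Vec -> Int -> Int -> Double
prob v b2 b1
  | b2 == 1 = (v !! (2 * b1 + 1)) ^ 2
  | otherwise =
      let a = v !! 0                          -- amplitude of |00>
          c = v !! 2                          -- amplitude of |10>
          amps = [(a + c) * r2, (a - c) * r2] -- Hadamard on qubit 1
      in (amps !! b1) ^ 2

-- Record |01> rules out (1); |11> rules out (4); after the Hadamard,
-- |00> rules out (3) and |10> rules out (2).
ruledOut :: [Double]
ruledOut = [ prob (states !! 0) 1 0
           , prob (states !! 3) 1 1
           , prob (states !! 2) 0 0
           , prob (states !! 1) 0 1 ]

main :: IO ()
main = print ruledOut
```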

Rudolph clearly explains that this result is inconsistent with naive realism but consistent with the view that physical states correspond to unique misty states. But, like Einstein, Rudolph finds the consequences of this, in particular non-locality, hard to swallow. He concludes (page 139):

I personally live in cognitive dissonance: on a day-to-day basis I talk about the physical properties of the photons … as if they are as tangible as any of the physical properties of the human-scale objects in the room around me. They are not. I suspect I should treat the misty states as states of knowledge, but to be understood within a more general framework of theories of inference than our present theories find comfortable.

The idea that things may not be precisely as they present themselves to our senses has a long pedigree, going back as far as Plato and the shadows on the wall of his cave. Hume says it clearly in Section 118 of An Enquiry Concerning Human Understanding:

The table, which we see, seems to diminish, as we remove further from it. But the real table, which exists independently of us, suffers no alteration. It was, therefore, nothing but its image, which was present to the mind.

The tension here between things as they are (ontological properties) and things as they seem to us (epistemological properties) has of course been much debated by philosophers after Hume, most notably Kant with his noumena (or Ding an sich) and phenomena. What we have seen in this post is that it may be impossible to maintain this separation: the EPR-paradox, Bell’s game, and Rudolph’s final demonstration all point to a theory in which the physical properties making up a real state include a unique misty state. Moreover, neither the Stern–Gerlach experiment (with its random deflections), nor the EPR-paradox (with its spooky ‘action at a distance’), nor Rudolph’s final demonstration (quantum effects without initial entanglement) requires the real state to be anything more than the misty state. But if this is true, it follows that our observed experience of phenomena cannot be disentangled from the ontological question: what is the nature of things as they really are?


Playing the majority game with boolean formulae

September 26, 2021

In the majority game, there are n numbered balls, each either black or white. One knows that black balls are in the majority. At each step, the questioner may ask

‘are balls i and j the same colour?’

and get a truthful answer. How many questions does it take to find a black ball? This problem was first solved by Saks and Werman, who proved the elegant result that n-B(n) questions are necessary and sufficient, where B(n) is the number of 1s in the binary form of n.

It is a nice exercise to find a questioning strategy that meets this target. If you discover something fundamentally different to the ‘Binary Knight Hunt’ from Section 2 of this paper, I would like to know about it! (The name of this strategy comes from a reframing of the problem in which black balls become Knights, who always tell the truth, white balls become Knaves, who always lie, and a comparison between balls i and j becomes the question `Person i, is Person j a knight?’.)

Since we need it later, I’ll summarise the Binary Knight Hunt here. At each step, the question graph is a forest and all components have 2-power size. Say that a component is uniform if all its balls have the same colour. At each step the questioner picks two balls in uniform components of the same size and compares them. If the answer is ‘different’, black and white balls are equally balanced in the new unified component; this component can then be ignored. Suppose that a position is reached in which the uniform components have distinct sizes 2^{u_1} > \ldots > 2^{u_s}. Since black and white balls are equally balanced in the remaining components and 2^{u_1} > 2^{u_2} + \cdots + 2^{u_s}, the balls in the component of size 2^{u_1} are in the majority. (We will see later that this generalises to multiple colours.) Moreover, since the sum of component sizes expresses n as a sum of powers of 2, the number of components is at least B(n). Since each question reduces the number of components by one, it follows that the Binary Knight Hunt wins in at most n-B(n) questions.
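In the spirit of the Haskell posts on this blog, here is a sketch of the Binary Knight Hunt in code. (The representation and all the names are mine.) It plays against an oracle same answering comparison questions, and returns the number of questions asked together with a ball of the majority colour.

```haskell
import Data.Bits (popCount)
import Data.List (sortOn)
import Data.Ord (Down(..))

-- A component is a list of balls known, from the answers so far, to
-- share a single colour; balanced components are simply discarded.
type Component = [Int]

-- Play the Binary Knight Hunt on balls 1..n, where `same i j` answers
-- `are balls i and j the same colour?'. Returns the number of questions
-- asked together with a ball of the majority colour.
knightHunt :: (Int -> Int -> Bool) -> Int -> (Int, Int)
knightHunt same n = go 0 [[i] | i <- [1 .. n]]
  where
    go q comps = case equalPair comps of
      -- The uniform components now have distinct 2-power sizes, so the
      -- largest consists of balls of the majority colour.
      Nothing -> (q, head (head (sortOn (Down . length) comps)))
      Just (c, d, rest)
        | same (head c) (head d) -> go (q + 1) ((c ++ d) : rest)
        | otherwise              -> go (q + 1) rest  -- balanced: ignore it

    -- Two uniform components of equal size, if there are any.
    equalPair []       = Nothing
    equalPair (c : cs) = case break ((== length c) . length) cs of
      (pre, d : post) -> Just (c, d, pre ++ post)
      (_, [])         -> fmap (\(a, b, r) -> (a, b, c : r)) (equalPair cs)
```

For example, with n = 17, nine black balls and the oracle derived from the true colours, the hunt asks at most 17 - B(17) = 15 questions and returns a black ball. (Here popCount computes B.)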

The harder part of the Saks–Werman result is that n-B(n) questions are necessary. This part was generalised by John Britnell and me to the case where black balls outnumber white balls by at least a specified excess.

A related problem asks for the minimum number of questions to identify the colour of every ball. After the Binary Knight Hunt finds a black ball, the question graph is a forest: turning it into a tree by connecting the component of the known black ball to all other components identifies all the balls; since the final question graph is a tree with n-1 edges, this takes n-1 questions in total. When n is odd, there are 2^{n-1} possible sets of black balls (for each partition \{1,\ldots, n\} = A \cup B one may colour the larger set black), and so we must learn n-1 bits of information. Therefore this strategy is optimal. For even n the same result holds, but requires a bit more work to prove: see Theorem 4 in this later paper of mine.

The majority game for a boolean function

To state the intended generalisation we need some more notation.
Let c_i be the colour of ball i, identifying black with 0 and white with 1. A comparison of ball i with ball j reveals c_i + c_j. This is F(c_i,c_j) where F is the two-variable boolean formula

F(x,y) = x + y.

In this post we present some preliminary results and open problems on generalizations of the majority game in which F is replaced with an arbitrary boolean formula.

Note that, in the original game, we cannot know for sure a ball is of the minority colour unless it has been compared with a ball known to be of majority colour. This observation has no analogue when comparison is replaced with evaluation of an arbitrary boolean formula. For each boolean formula we therefore have three variants of the majority game.

  1. Find a ball of known colour.
  2. Find a ball of the majority colour.
  3. Find the colour of every ball.

For simplicity we shall assume from now on that n=2m+1 is odd.

Linear formulae

The relevant formulae are F_\ell(x_1,\ldots,x_{\ell}) = x_1 + \cdots + x_\ell. We immediately note that if \ell is odd then F_\ell(c_i,\ldots, c_i) = c_i, and so we can determine a ball’s colour in just one evaluation, making (1) trivial. For (2), the first m questions may find white balls, so m+1 evaluations are necessary and sufficient. For (3) the obvious strategy uses n questions.

Problem 1. Is it possible, using F_\ell when \ell is odd and \ell \ge 3, to find the colour of every ball using n-1 evaluations?

An attempt to block this strategy by requiring the arguments of F_\ell to be distinct is easily subverted, provided that n \ge 2\ell - 1. (Equivalently, m \ge \ell-1.) Begin by evaluating F(c_1,\ldots,c_{\ell-1},c_k) for \ell distinct choices of k \in \{\ell, \ldots, n\}. Suppose that the results are

\begin{aligned} c_1 + \cdots + c_{\ell-1} + c_{i_1} &= \cdots = c_1 + \cdots + c_{\ell-1} + c_{i_s} = 0 \\ c_1 + \cdots + c_{\ell-1} + c_{j_1} &= \cdots = c_1 + \cdots + c_{\ell-1} + c_{j_t} = 1.\end{aligned}

Since s+t = \ell, exactly one of s and t is odd. If it is s then

c_{i_1} + \cdots + c_{i_{s-1}} + c_{j_1} + \cdots + c_{j_t} = 0;

if it is t then

c_{i_1} + \cdots + c_{i_s} + c_{j_1} + \cdots + c_{j_{t-1}} = 0.

Therefore we obtain \ell-1 balls k_1, \ldots, k_{\ell-1} such that c_{k_1} + \cdots + c_{k_{\ell-1}} = 0. Since F(c_{k_1}, \ldots, c_{k_{\ell-1}}, c_r) = c_r we are again in the happy position of being able to learn a ball’s colour with a single evaluation. This gives a strategy for game (1) using \ell+1 evaluations. For game (2) we use \ell + m evaluations

F(c_{k_1}, \ldots, c_{k_{\ell-1}}, c_j)

for m+1 distinct j, each not equal to any of k_1, \ldots, k_{\ell-1}; this is possible since m+1 + (\ell-1) \le m+1 + m \le n. (Only m of these m+1 evaluations need be made: if all m return 1, then every white ball has been found, so the remaining chosen ball must be black.)
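The parity bookkeeping above can be checked exhaustively. Here is a hedged Haskell sketch (extractZeroSum is my own name, and for brevity it works with colours rather than ball indices): given the colours of the \ell-1 fixed balls and of the \ell sampled balls, it returns \ell-1 of the sampled colours summing to 0 modulo 2.

```haskell
import Data.List (partition)

-- Given the colours cs of the fixed balls 1,...,l-1 and the colours ds
-- of the l sampled balls (l odd), return l-1 of the sampled colours
-- summing to 0 mod 2, following the parity argument in the post.
extractZeroSum :: [Int] -> [Int] -> [Int]
extractZeroSum cs ds =
    if odd (length is) then tail is ++ js else is ++ tail js
  where
    s0       = sum cs `mod` 2
    -- is: samples whose evaluation gave 0; js: those that gave 1
    (is, js) = partition (\d -> (s0 + d) `mod` 2 == 0) ds
```

A brute-force check over all colourings with \ell = 3 confirms that the result always has length 2 and even sum.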

Problem 2. Find optimal strategies for games (1), (2) and (3) using F_\ell when repeated arguments are disallowed.

Problem 3. Show that game (1) cannot be won if n \le 2\ell - 2.

When \ell is even one can use a similar trick to reduce to the case \ell=2 by noting that F(x,\ldots, x, y) = x+y. If distinct arguments are required then one can use \ell-1 questions to find balls k_1, \ldots, k_{\ell-2} such that c_{k_1} + \cdots + c_{k_{\ell-2}} = 0; then

F(c_{k_1}, \ldots, c_{k_{\ell-2}}, c_i, c_j) = c_i + c_j.

Problem 4. Do these reductions give optimal strategies for the three problems?

The results so far suggest that games (1) and (2), with repeated arguments permitted, are the most appealing problems.

Majority vote

An important example of a non-linear boolean function is the 3-way majority vote function

M(x,y,z) = xy + yz + zx.

Since M(x,x,x) = x, we must require distinct arguments to avoid the reduction to Problem 1. With this condition, observe that if

M(x,y,z) \not= M(x,y,w)

then z \not= w, and since swapping z and w affected the majority, we must also have x \not= y; now M(x,y,u) = u for any u. This suggests the following strategy: evaluate M at disjoint triples c_i, c_j, c_k until both values 0 and 1 are seen. Let

M(u,v,w) \not= M(u',v',w').

Now, interpolating between the sets by evaluating M(u,v,w') and M(u,v',w'), one can find x and y with x \not= y. As above, one further question will win game (1) and one can use very similar ideas to win games (2) and (3).
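The key implication in this paragraph is easily confirmed by brute force; a tiny Haskell check (maj3 is my name for M):

```haskell
-- The 3-way majority vote M(x,y,z) = xy + yz + zx over F_2.
maj3 :: Int -> Int -> Int -> Int
maj3 x y z = (x*y + y*z + z*x) `mod` 2

-- Whenever M(x,y,z) /= M(x,y,w) we must have z /= w and x /= y,
-- and then M(x,y,u) = u for every u.
keyImplication :: Bool
keyImplication = and
  [ z /= w && x /= y && all (\u -> maj3 x y u == u) [0, 1]
  | x <- [0, 1], y <- [0, 1], z <- [0, 1], w <- [0, 1]
  , maj3 x y z /= maj3 x y w ]
```

(When x = y the value of M is x whatever the third argument, so the condition M(x,y,z) \not= M(x,y,w) never arises in that case.)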

Unfortunately, if there are few white balls, it may take many evaluations to find a triple in which white balls are in the majority. For instance, suppose that n=2^4-1 = 15 and there are just two white balls. We identify \{1,\ldots, 15\} with \mathbb{P}(\mathbb{F}_2^4) and cover the 2-subsets of points of \mathbb{P}(\mathbb{F}_2^4) using the

\displaystyle \binom{4}{2}_2 = \frac{(2^4-1)(2^4-2)}{(2^2-1)(2^2-2)} = 35

lines in \mathbb{P}(\mathbb{F}_2^4). Since there are \binom{15}{2} = 105 distinct 2-subsets, and each 3-subset contains 3 2-subsets, this is the minimum possible number of 3-subsets. Still, the questioning strategy requires 35 evaluations (plus those to find x and y). Generally if n=2^r-1 then there are O(n^2) lines, and so O(n^2) evaluations may be necessary. Thus the attractive feature of the majority game (and many related Knights and Spies problems) that only O(n) questions are needed is lost.

It therefore seems natural to require that there are exactly m white balls and m+1 black balls. Suppose, again for simplicity, that m = 2p+1 is odd. If we evaluate M at p distinct triples (using 3p of the 4p+3 balls) and get 0 every time, then we have used up at least 2p black balls and at most p white balls. Hence any remaining triple must have white balls in the majority. This gives a strategy for games (1) and (2) using n/4 + O(1) questions, and for game (3) using 5n/4 + O(1) questions.

Problem 5. Are these strategies optimal?

Problem 6. Generalize to the majority-vote function with any odd number of variables.

The majority game with values in \mathbb{Z}/q\mathbb{Z}

Another generalization that might be of interest is to allow q different colours, labelled by elements of \mathbb{Z}/q\mathbb{Z}. The result of a comparison of balls with colours c and c' is now c+c' \in \mathbb{Z}/q\mathbb{Z}. Components in the question graph are now labelled by q-tuples (m_0,\ldots, m_{q-1}) up to cyclic permutation. For example, when q=3 a component labelled (10,5,0) contains 10 balls of colour c, 5 balls of colour c+1 and no balls of colour c+2, for some c \in \mathbb{Z}/3\mathbb{Z}. A comparison of a ball in the component (m_0,\ldots,m_{q-1}) with one in the component (m_0',\ldots,m'_{q-1}) unites the components as

(m_0+m'_k, m_1+m'_{k+1}, \ldots, m_{q-1}+m'_{q-1+k})

for some k. In one version of the problem, it is known that balls of one colour are strictly in the majority, and the aim is to find one; in another, the aim is to either find a ball of a colour in plurality (strictly more than n/q such balls) or correctly declare that all colours occur equally often; in a third, the aim is to find a ball of plurality colour (knowing that black balls are in plurality).

Problem 7. Since \log_2 q bits of information are learned in each evaluation, we might expect fewer questions to be necessary than in the usual majority game. On the other hand, the greater number of colours gives the adversary more options. What are optimal strategies for the cyclic q-coloured majority and plurality games?

q-coloured majority games

Some computer analysis of the cyclic 3-coloured majority game suggests it may be disappointingly technical. To state some results, it is convenient to define the value of a game to be the number of components in a final position. For n \le 16, the value is B(n), as for the 2-colour majority game. But for n = 17, the value is 3 and the questioner can win in 14 questions, claiming in a terminal position with three components. One optimal line of questioning follows a Binary Knight Hunt to reach the position

(4,0,0), (4,0,0), (2,0,0), (2,0,0), (1,0,0), (1,0,0), (1,0,0), (1,0,0), (1,0,0).

The scan below (click for a larger version) then shows the game tree.

The effect of the cyclic restriction is seen only in some of the sublines. In particular, note that from the position

(4,3,0), (5,3,0), (1,0,0), (1,0,0),

written as 43\ 53 \ 1 \ 1 above, a move connecting the components (4,3,0), (5,3,0) results in either (9,6,0), (4,8,3) or (7,3,5) and in all cases a majority ball is now known. But without the restriction, the next position may be (7,8,0), (1,0,0), (1,0,0) and no majority ball can be identified. Another subtle point is that the next position cannot be (6,5,4), (1,0,0), (1,0,0), since this has no colour in the majority.
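The case analysis in this example is easy to mechanise. Here is a two-line Haskell sketch (unions and rotate are my names) computing the possible labels of a cyclic union:

```haskell
-- The possible labels of the union of components labelled m and m',
-- one for each cyclic shift k of the colours of the second component.
unions :: [Int] -> [Int] -> [[Int]]
unions m m' = [ zipWith (+) m (rotate k m') | k <- [0 .. length m' - 1] ]
  where rotate k xs = drop k xs ++ take k xs
```

For the components (4,3,0) and (5,3,0) it returns exactly the labels (9,6,0), (7,3,5) and (4,8,3) considered above.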

Perhaps surprisingly, if we drop the cyclic ordering, so that components (m_0,m_1,\ldots,m_{q-1}) and (m_0',m_1',\ldots,m_{q-1}') may be united as

(m_0+m_{0\sigma}',m_1+m_{1\sigma}', \ldots, m_{q-1}+m_{(q-1)\sigma}')

for any permutation \sigma then the q-coloured majority game essentially reduces to the 2-colour majority game.

Theorem. For any q \ge 2, the value of the q-coloured majority game on n balls with arbitrary permutations permitted is B(n).

Proof. Let Paul be the questioner and suppose that Carole (an anagram of oracle) supplies the answers. Paul’s aim is to maximise the value of the final position, Carole to minimise it. Paul commits at the start to playing the Binary Knight Hunt: as in the 2-colour version, a balanced component is never played in after its formation, and Paul stops as soon as all uniform components have distinct sizes. The non-obvious point is that this still works: suppose that the uniform components have sizes 2^{u_1} > \ldots > 2^{u_s} and the balanced components have sizes 2^{v_1+1}, \ldots, 2^{v_t+1}. Let black be the majority colour. If balls in the uniform component of size 2^{u_1} are not black, then since 2^{u_1} > 2^{u_2} + \cdots + 2^{u_s} we have

\begin{aligned}  2^{u_2} & + \cdots + 2^{u_s} + 2^{v_1} + \cdots + 2^{v_t} \\ & < \frac{2^{u_1} + 2^{u_2} + \cdots + 2^{u_s}}{2} + \frac{2^{v_1+1} + \cdots + 2^{v_t + 1}}{2}  \\ & \le n/2, \end{aligned}

a contradiction. Therefore the value is at least B(n). On the other hand, Carole can say at the start ‘I promise to play as if there are only two colours’, and, by using an optimal strategy for the 2-colour game, ensure that when Paul claims there are no more than B(n) components. \Box

Since removing the cyclic restriction makes the questioner’s task harder, the theorem implies that the value of the cyclic q-coloured majority game is at least B(n). The example above shows that strict inequality may hold.

Other games in the literature

Neither q-coloured game is the same as the ‘plurality problem’ considered for three colours in this STACS 2004 article by Martin Aigner, Gianluca De Marco and Manuela Montangero, in which the result of a comparison is the familiar ‘same’ / ‘different’ from the majority game, rather than ‘0 so same’, ‘1 so different with shift 1’, ‘2 so different with shift 2’ as here. The authors’ main result is that between 3 \lfloor n/2 \rfloor - 1 and \lfloor 5n/3 \rfloor -2 comparisons are necessary and sufficient to find a black ball.

The closest game considered in the literature appears to come from A plurality problem with three colors and query size three, by Dániel Gerbner, Dániel Lenger and Máté Vizer: here each query involves three balls (so three components are united), and there is no cyclic order on colours. Thus a query with components labelled (100,50,0), (10,5,0), (2,1,0) may give (102,56,10). The main result is impressively sharp: if n is even then between 3n/4 - 2 and 3n/4 -1/2 questions are needed to identify a ball whose colour is in a plurality.


Engaging students from diverse backgrounds

September 17, 2021

On 14 September 2021 Royal Holloway Mathematics held a meeting Engaging students from diverse backgrounds, organised by Stefanie Gerke and me, and sponsored by the London Mathematical Society. The speakers were invited to give practical advice on the challenges of teaching university level mathematics. I won’t try to summarise all aspects of the four interesting talks, or all the discussion they stimulated, but here at least is a partial record.

Lara Alcock

Lara Alcock first spoke at Royal Holloway back in 2015. I wrote up this meeting here. For many of us, this was the first time we had heard anyone talk about research in mathematical pedagogy; the event spurred several of us to experiment with new teaching approaches, including the ‘flipped classroom’. Lara’s preferred model was clear from her title: Tilting the classroom: engaging students in large classes. As her abstract promised, she gave 18 (a mathematical 18, there were probably more) straightforward practices to improve student engagement in large lectures. Here are some that stood out for me.

  • Put administrative announcements on the board / projector as students come in.
  • Start with simple multiple choice questions that test students on material they are supposed to have prepared. I think something like this is essential: students won’t bother preparing unless they know we are serious about it, and this means investing time in live sessions.
  • Give students `gappy’ notes or other activities that they complete during the lecture or during pauses in online videos. For instance, the matching exercise shown below, which seems a nice way to demystify the dry-as-dust field axioms.

  • Give students guidance on how to study: Lara mentioned a study of Bjork et al. showing there are powerful misconceptions on this issue: one conclusion (see after Figure 3) is that learners typically underestimate the improvement they can make due to studying, but also underestimate the amount they will forget without periodic reinforcement.
  • Self-explanation training: how to think deeply about a proof in a way that goes beyond paraphrasing it.
  • Ask questions that may split the audience, for instance, which of \implies, \Leftarrow, \iff is correct in this gap? `Whatever you think, half the people disagree with you … do you want to change your mind?’.
  • Read and explain a proof to the person next to you (viable even with social distancing).

The context was a first year analysis course, a subject that is a notorious cause of student disengagement. It was clear from Lara’s talk and the comments from students that her approach worked, and that students rose to the challenge of her course, both when delivered using live large lectures and, post-Covid, with online lectures. Lara’s enthusiasm for the large lecture as an effective way to teach students shone through: she ended by remarking `I’d much rather have large lectures back: I miss the atmosphere’. For more on Lara’s approach to analysis teaching, I highly recommend her book, How to think about analysis.

Barrie Cooper

Barrie Cooper’s talk came second and touched on many themes developed more fully in the other three talks. His title was Competencies, challenge, and co-curation: selected strategies for inclusive assessment in mathematics. He started by asking the audience what the terms ‘diversity’ and ‘inclusion’ meant to them: answers included the students’ race, gender and social capital (a useful notion, I think: easy to take it for granted when one has it, and easy to assume others have it when they don’t), and the preparedness of our students for academic study. He then discussed strategies for designing inclusive assessments. One key point was that we should make clear what is expected at each level: what do students need to do to pass, to get a 2:1, and to get a first? Not making these expectations clear may particularly hurt students coming from disadvantaged backgrounds.

Barrie pointed out that designing inclusive assessment meant assessing different skills, and assessing in different ways. For example, his course (on graph theory) had only 30% of credit on the final exam, which was open book. Part of the balance was made up of ‘competency based assessment’: basic tests of understanding on which the required pass mark was 100%. Tools such as Stack, Numbas and, more ambitiously, GitHub Classroom were mentioned.

Barrie finished by going into more detail on ‘co-curation’ in the topical context of epidemic modelling on a dynamic network with probabilistic infection. Students used the R-package epimodel, working in small groups to generate results by experimentation with the model, and sharing whatever resources they found most useful — not always those that Barrie had anticipated. It all sounded like a great exercise to me: mathematically deep, but also bringing in wider concerns, and open-ended in that (as we’ve seen many times in the last 18 months) there’s no unique right answer.

Matthew Inglis

Matthew Inglis’ title was Individual differences in students’ use of optional learning resources and concerned two cohorts of students: one doing a mathematics degree, the other studying mathematics as part of an engineering degree. The resources available to them were

  • Traditional live lectures;
  • Mathematics Learning Support Centre (like a continuous workshop with a staff member always on call);
  • Online video lectures and other E-resources.

Usage was tracked throughout the term. It was quickly clear that there were four distinct clusters: the `traditionalists’ who attended live lectures, the `MLSCers’ who made high use of the support centre, the `E-learners’, and (his term) the `slackers’, who did not make much use of anything.


As his graph above shows, the clusters were very pronounced: despite all our efforts with `active blended learning’, it seems students stuck with one learning mode throughout their course. Matthew presented extensive data showing that the traditionalists and MLSCers did best at the exam, the E-learners did significantly worse, and the slackers did very badly indeed. The most significant correlation across all clusters for attainment was with live-lecture attendance, even after normalising the data to take into account the different courses.

Matthew ended by speculating on what would happen if the online lectures were withdrawn. Would the E-learners simply become slackers? Or would they move into another cluster? Since the E-learners did less well than the other two clusters, is producing online videos a good investment of our time? When I lectured cryptography last year, I put a vast effort into making short online videos: even without editing (but some re-recording) I think this took me far longer than delivering the lectures. They were very popular with the students, and attainment was good, but I certainly can’t give an unqualified `yes’ in answer to Matthew’s question. As he said, offering students too much choice makes work for their lecturers.

Maurice Chiodo

After a break for lunch we finished with Maurice Chiodo’s talk Societal responsibility and ethical conduct in mathematics. He started by challenging the idea that pure mathematics is a universal force for good, or, at worst, morally neutral. Instead, like any powerful tool, it can be used for good or harm, and the harm may arise inadvertently.

A compelling example of this is the use of machine learning algorithms in predictive policing: concentrating police in one area may well find more crime, but at the expense of leaving other crime undetected, and creating a feedback loop that damages community relations. Other examples included the epidemic modelling from Barrie Cooper’s talk, the A-level results fiasco in September 2020, and what we know from the Snowden leaks about mass internet surveillance.

A recurring theme was that models based on unsound implicit assumptions could be mathematically sound, but lead to the wrong conclusions. For instance, incredibly enough, the model used to predict the spread of Covid-19 in care homes failed to consider that many staff (often employed by agencies) moved between care homes. Another instance comes from finance: given M independent risky assets each with a normal \mathrm{N}(0,\sigma^2) return, their mean has variance \sigma^2/ M, so is mathematically a safer investment. (This is the basis of the infamous Collateralized Debt Obligation, peddled enthusiastically by investment banks such as Goldman Sachs who rewarded their clients by then betting against the asset they had just purchased.) As the 2008 Financial Crisis showed, the independence assumption is utter garbage. Yet another worrying example comes from the use of machine learning in sentencing algorithms: the algorithm faithfully learns to perpetuate past racial biases.

Maurice then made the key point that students `fail to hear us say that there are ethical issues’. If they are important (and they are), why don’t we tell them? Successfully introducing ethics into the mathematics curriculum has three requirements:

  • A seminar series or lecture series or course on societal responsibility in mathematics;
  • Mathematical exercises with an ethical component;
  • Students must see that all their lecturers take ethics seriously.

As a step towards this, next time I lecture a cryptography course I will set explicit questions on the ethical issues, rather than just leave them to asides in lectures or printed notes. I hope Maurice’s talk will encourage others to make similar changes: this seems to me to be a valuable way we can broaden our curriculum and maybe even increase student engagement.

Maurice made many further interesting points. For instance, mathematics students appear to gravitate to a utilitarian perspective on ethics; he mentioned the Trolley Problem as a way to counter this. (Even before the Fat Man makes his appearance one can challenge strictly utilitarian `body-count’ ethics by making the person on the second track a close relative, rather than the typical stranger.) He finished by saying that he was far more scared of 10 mathematicians than 10 lawyers … `so why is it that the lawyers get ethical training but we don’t?’

Conclusion

We were lucky to get four interesting talks by excellent speakers. We had postponed the meeting from May 2020 in the hope of holding it face-to-face, but still, the online format enabled a broader attendance from people around the UK.

All four talks had practical advice, but also surprises to offer: how many of us would have predicted the four sharp clusters in Matthew Inglis’ talk (well, perhaps the existence of the ‘slacker’ cluster was no surprise), or the enthusiasm for live lectures shown by Lara’s students?

The final discussion session also offered much to think about: for instance, my not-so-secret view is that university education by intelligent, well-read, liberally minded people has a civilizing influence and value that goes beyond the subject specific skills and knowledge we hope to impart; but this view can rightly be challenged as patronising to our students. Barrie’s talk in particular made me realise that ‘decolonizing the curriculum’ isn’t just the buzz-phrase du jour: there are real practical changes we should all be making to ensure that the mathematics we love so much remains accessible to our changing student intake.

Comments are very welcome and can be posted below.


The Hamming code, quadratic residue codes and an exceptional isomorphism

July 4, 2021

As a pleasant way to pass a Sunday afternoon — or at least, more pleasant than flat-hunting in Bristol in a wildly inflated property market — let me offer yet another way to prove the exceptional isomorphism \mathrm{GL}_3(\mathbb{F}_2) \cong \mathrm{PSL}_2(\mathbb{F}_7). An earlier post uses modular representation theory to do it in about a screenful of WordPress. Here we do it using the well known result that, up to position permutation, there is a unique perfect 1-error correcting linear binary code of length 7.

The Hamming code

We define the [7,4,3] Hamming code C to be the linear code of length 7 and dimension 4 with parity check matrix

H = \left( \begin{matrix} 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{matrix} \right).

Since each non-zero binary word of length 3 appears exactly once as a column of H, the syndromes for the 7 possible one bit errors, namely

H(1,0,0,0,0,0,0)^t, \ldots, H(0,0,0,0,0,0,1)^t

are distinct. Hence C is 1-error correcting, and therefore has minimum distance \ge 3. Since the disjoint Hamming balls of radius 1 about the codewords each contain 1 + \binom{7}{1} = 8 binary words of length 7, and 8 \times 2^4 = 2^7 = |\mathbb{F}_2^7|, these balls partition \mathbb{F}_2^7. Therefore C is a perfect 1-error correcting linear binary code.
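Since column j of H is the binary expansion of j, a word lies in C exactly when the positions of its non-zero bits xor to zero. This makes C easy to enumerate; here is a short Haskell sketch (the names are my own):

```haskell
import Data.Bits (testBit, xor)

-- Column j of H is the binary representation of j, so the syndrome of
-- a word is the xor of the positions of its non-zero bits.
syndrome :: [Int] -> Int
syndrome = foldr xor 0

-- The Hamming code C, with each codeword represented by its support.
hamming :: [[Int]]
hamming = [ supp | w <- [0 .. 127 :: Int]
                 , let supp = [ j | j <- [1 .. 7], testBit w (j - 1) ]
                 , syndrome supp == 0 ]
```

One can then confirm that |C| = 2^4 = 16, that the minimum weight is 3, and that there are exactly 7 codewords of weight 3.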

Lemma. The automorphism group of C is a subgroup of S_7 containing \mathrm{GL}_3(\mathbb{F}_2).

Proof. Each g \in \mathrm{GL}_3(\mathbb{F}_2) permutes the 7 non-zero elements of \mathbb{F}_2^3. Let \sigma_g \in S_7 be the position permutation of the columns of H induced by g. For example,

g = \left( \begin{matrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix} \right) \implies gH = \left( \begin{matrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 &0 & 1 & 1 \end{matrix} \right)

and so in this case \sigma_g = (1,4,2)(3,5,6). (Since I am hyper-paranoid about such things, I will verify that g \mapsto \sigma_g is a homomorphism by comparing

g(g'H) = g (H . \sigma_{g'}) = (gH) \cdot \sigma_{g'} = H \cdot \sigma_g \sigma_{g'}

with (gg') H = H \cdot \sigma_{gg'}, where \cdot denotes the column permutation action.) Now writing P(\sigma) for the permutation matrix of \sigma (acting on the right as usual when composing permutations left to right, so P(\sigma)_{ij} = [i\sigma = j]), we have

v (gH)^t = v (HP(\sigma_g))^t = v P(\sigma_g)^t H^t = v P(\sigma_g^{-1}) H^t = (v \cdot \sigma_g^{-1}) H^t

for any v \in \mathbb{F}_2^7. If v \in C then the left-hand side is

v H^t g^t = 0 g^t = 0.

Therefore v \cdot \sigma_g^{-1} \in C, and indeed the image of the homomorphism \sigma is a group of position permutations of C. For instance, in the example above, (1,1,1,0,0,0,0)(gH)^t = 0 and correspondingly, (1,1,1,0,0,0,0)\sigma_g = (1,0,0,1,1,0,0) is a codeword in C. Since the group homomorphism g \mapsto \sigma_g is injective, the lemma follows. \Box.

Another nice way to prove the lemma is to calculate the 7 words in C of weight 3. By the definition above, these have non-zero entries in the positions

\{1,2,3\}, \{1,4,5\}, \{2,4,6\}, \{1,6,7\}, \{2,5,7\}, \{3,4,7\}, \{3,5,6\},

as shown below as the lines in the Fano plane \mathbb{P}(\mathbb{F}_2^3).

Quadratic residue codes

We may construct \mathbb{F}_{2^3} as \mathbb{F}_2(\alpha) where \alpha has minimal polynomial x^3 + x + 1. The binary quadratic residue code of length 7 is the cyclic code D whose generator polynomial has as its roots the powers of \alpha corresponding to the 3 quadratic residues modulo 7, namely 1, 2 and 4. That is,

g(x) = (x-\alpha)(x-\alpha^2)(x-\alpha^4).

Since these roots are conjugate under the Frobenius automorphism \beta \mapsto \beta^2 of \mathbb{F}_{2^3}, the generator polynomial is the minimum polynomial of \alpha; that is g(x) = x^3 + x + 1. Thinking of D as the ideal \langle x^3+x+1 \rangle of \mathbb{F}_2[x]/\langle x^7+1 \rangle, we have v(x) \in D if and only if v(\alpha) = v(\alpha^2) = v(\alpha^4) = 0. From this it is easy to see that D has minimum weight 3 and so, as before, D is a perfect 1-error correcting binary code. (More generally, in a quadratic residue code of odd prime length n, the minimum distance d satisfies d^2 \ge n.) The same argument holds for the cyclic code E = \langle x^3 + x^2 + 1 \rangle, whose generator polynomial (x-\alpha^3)(x-\alpha^5)(x-\alpha^6) = x^3 + x^2 + 1 is defined using the 3 non-residues modulo 7.
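To see D concretely, here is a hedged Haskell sketch (pmul, reduce7 and qrCode are my names), representing a polynomial over \mathbb{F}_2 by the bits of an Int:

```haskell
import Data.Bits (popCount, shiftL, testBit, xor)
import Data.List (nub)

-- Multiply bit-polynomials over F_2: bit i of an Int is the
-- coefficient of x^i.
pmul :: Int -> Int -> Int
pmul a b = foldr xor 0 [ a `shiftL` i | i <- [0 .. 13], testBit b i ]

-- Reduce modulo x^7 + 1: since x^7 = 1, bit i folds onto bit (i mod 7).
reduce7 :: Int -> Int
reduce7 w = foldr xor 0 [ 1 `shiftL` (i `mod` 7) | i <- [0 .. 13], testBit w i ]

-- The code D = <x^3 + x + 1> in F_2[x]/<x^7 + 1>; note that
-- 11 = 2^3 + 2 + 1 encodes the generator polynomial x^3 + x + 1.
qrCode :: [Int]
qrCode = [ reduce7 (pmul 11 p) | p <- [0 .. 15] ]
```

One can then check that D has 16 codewords of minimum weight 3, and that it is cyclic: multiplying any codeword by x (a shift, reduced modulo x^7+1) gives another codeword.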

By the uniqueness result mentioned earlier, the codes C, D and E are all equivalent. Fix a bijection \star : E \rightarrow D.

We now identify the positions of D with \mathbb{F}_7. Since D is cyclic, the position permutation j \mapsto j + 1 is an automorphism of D. This gives one element of \mathrm{PSL}_2(\mathbb{F}_7). To obtain the rest, we need the following two claims.

Claim. If r \in \{1,2,4\} then the position permutation j \mapsto rj preserves D.

Proof. Thinking of D as an ideal, we have v(x) \in D if and only if v(\alpha) = v(\alpha^2) = v(\alpha^4) = 0, so if and only if v(x^2) \in D. This deals with the case r=2, and r=4 is obtained by squaring. \Box.

Claim. If s \in \{3,5,6\} then the position permutation j \mapsto sj sends D to E. The position permutation j \mapsto -j, induced by the substitution x \mapsto x^{-1}, also sends D to E.

Proof. For the first part, use that the generator polynomial for E is defined using the 3 non-residues modulo 7. The second part follows similarly, since \beta is a root of x^3 + x + 1 if and only if \beta^{-1} is a root of x^3 + x^2 + 1. \Box
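The multiplier parts of both claims can be confirmed by brute force; a sketch, with codeword positions indexed by \{0, 1, \ldots, 6\}:

```python
from itertools import product

n = 7

def cyclic_code(g):
    # span of the cyclic shifts of g in GF(2)[x] / (x^7 + 1)
    shifts = [g]
    for _ in range(n - 1):
        shifts.append(shifts[-1][-1:] + shifts[-1][:-1])
    return {tuple(sum(c * s[i] for c, s in zip(cs, shifts)) % 2
                  for i in range(n))
            for cs in product((0, 1), repeat=n)}

D = cyclic_code((1, 1, 0, 1, 0, 0, 0))   # generator x^3 + x + 1
E = cyclic_code((1, 0, 1, 1, 0, 0, 0))   # generator x^3 + x^2 + 1

def permute(code, sigma):
    # apply the position permutation sigma, given as a dict on {0,...,6}
    return {tuple(v[sigma[i]] for i in range(n)) for v in code}

for r in (1, 2, 4):   # quadratic residues preserve D
    assert permute(D, {i: (r * i) % n for i in range(n)}) == D
for s in (3, 5, 6):   # non-residues carry D to E
    assert permute(D, {i: (s * i) % n for i in range(n)}) == E
```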

We may therefore define an action of \mathrm{PSL}_2(\mathbb{F}_7) on D by identifying the positions of codewords with \mathbb{F}_7 and then applying the corresponding position permutation, followed by \star if (according to the second claim above) we end in E rather than in D. Using that \mathrm{PSL}_2(\mathbb{F}_7) is a simple group, we see this homomorphism is injective.

Conclusion

We have shown that the automorphism group, G say, of a perfect 1-error correcting linear binary code contains subgroups S and T isomorphic to the finite simple groups \mathrm{GL}_3(\mathbb{F}_2) and \mathrm{PSL}_2(\mathbb{F}_7) of order 168. Setting H = S \cap T we have |G| \ge 168^2 / |H|. Since G \le S_7, and 168 = 2^3 \cdot 3 \cdot 7 and 7! = 2^4 \cdot 3^2 \cdot 5 \cdot 7, we have |H| \ge 2^2 \cdot 7 = 28. Hence H has index at most 6 in S. The action of S on the cosets of H now gives a homomorphism S \rightarrow S_6; since \mathrm{GL}_3(\mathbb{F}_2) is simple and has elements of order 7, this homomorphism must be trivial. Therefore H = S. Similarly H = T. Hence S = T and we have the required automorphism.


Collisions in Greg Egan’s Orthogonal physics

April 12, 2021

In Greg Egan’s remarkable Orthogonal Trilogy, physics differs from ours in one crucial respect: the Lorentz metric c^2\mathrm{d}t^2 - \mathrm{d}\mathbf{x}^2 is replaced with the Riemannian metric \mathrm{d}t^2 + \mathrm{d}\mathbf{x}^2, putting time on the same footing as space. As one would expect from this author, the consequences are worked out in great detail, and are integral to the plot. This comes at some predictable expense to characterisation and verisimilitude; still, I’ve found the first two books sufficiently interesting that I can (almost) ignore that much of the exposition consists of the characters giving each other physics tutorials, conducting experiments (which remarkably always work, even if the results often come as a surprise) or listening to each other give lectures and seminars. That said, the physics is leavened by some set-piece ethical dilemmas, and there is also a well-developed biological theme, concentrated on the unique (but not completely implausible) morphology of the Orthogonal race.

The purpose of this post is to do one of the exercises left implicitly to the reader in the second book, The Eternal Flame, in which it is observed that when a photon is deflected by a stationary particle of ordinary matter (called a `luxagen’ in the trilogy), it may glance off at two different angles. Moreover, there is a maximum deflection angle, which is independent of the energy of the photon.

Here I’ll show that this follows from the relation between energy and momentum in Orthogonal physics, and the two conservation laws, and implies that Orthogonal photons are heavier than luxagens. I’ll also show that the same behaviour holds in our universe, whenever the moving particle is heavier than the particle it strikes. My solution uses piles and piles of algebra to arrive at a surprisingly simple answer. Egan’s book hints at a simple trigonometric solution: made more plausible because his characters are of course brilliant at Euclidean geometry, thanks to the double motivation from their physics and our shared mathematics. Or maybe there is a better way to organize the algebra, perhaps starting `We take the Lagrangian …’, or maybe `By working in the reference frame in which the centre of momentum is fixed …’. Either way, I would like to know it.

Much of my interest arises from how Egan (and his characters) arrive at this setup: for more background see the supplementary material on Egan’s website, or read the first book in the trilogy, The Clockwork Rocket. At the time of writing I still have the third book to look forward to.

Space-time and the energy-momentum relation

Space-time is 4-dimensional. In our universe, when we measure a time interval T by the distance cT that light travels in this interval, space-time has the Lorentz metric c^2 \mathrm{d}t^2 - \mathrm{d}x^2 - \mathrm{d}y^2 - \mathrm{d}z^2. For example, a photon may move from (0,0,0,0) to (1,c,0,0) in one unit of time, and, corresponding to the fact that no time passes for the photon, the Lorentz distance between these space-time points is zero.

In the Orthogonal universe space-time has the Riemannian metric k^2 \mathrm{d}t^2 + \mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2, where k is a constant with dimension distance/time; by choosing units appropriately (thus sweeping under the carpet the first half of the first book on Yalda’s experiments on Mount Peerless, the beautiful dual Pythagorean Theorem, and the fact that in Orthogonal physics, the speed of light is not constant), we may assume that the numerical value of k is 1. We take the following (and only the following, for the time being) as further axioms:

  1. Physical laws are invariant under the group of transformations preserving the Lorentz/Riemannian metric.
  2. Every particle has a symmetry invariant tangent vector (kv_t,v_x,v_y,v_z) to its world line called its 4-velocity with units distance/time; in our universe k = c and in the Orthogonal universe k = 1. A stationary observer measures the particle’s velocity as the 3-vector (v_x,v_y,v_z)/v_t, as illustrated in the diagram below (click for a larger pdf).
  3. Multiplying the 4-velocity, normalized to have length c (our universe) or 1 (the Orthogonal universe), by the mass M of a particle gives its 4-momentum (E/c, \mathbf{p}), where \mathbf{p} is the 3-vector momentum.

Note that (1) only makes sense because we measure time and distance in compatible units: thus v_t is dimensionless (while the constant k has units distance/time) and whenever a time appears, such as t_1 in the diagram above, it is multiplied by c. Therefore the symmetry group of space-time can reasonably mix them up.

The way I imagine (2), half-remembered from when I did special relativity 20 years ago (my reference frame), is that the xt-plane has clocks every metre, connected by rigid rods, which record the time of passing of every particle. The particle is at position x_0 (which conveniently enough happens to have a clock) at time ct_0, and at position x_1 (similarly conveniently placed) at time ct_1. The stationary observer can therefore collect the measurements from the clocks (simply by travelling to each one), and approximate the particle’s velocity as (x_1-x_0)/(t_1-t_0) and so v_x/v_t \approx (x_1-x_0)/(t_1-t_0). Everyday objects in our universe, such as cars, trains and coronavirus, move at a negligible fraction of the speed of light, so c(t_1-t_0) is typically huge compared to x_1-x_0, and correspondingly the velocity measured by the observer is a tiny fraction of the speed of light.

Note this is not the velocity that an observer travelling with the particle would measure. This requires the notion of proper time, seen in the diagram above as the lengths of the trajectories (1 and 2) in the xt-plane. Using this one can define the 4-vector velocity as the rate of change of space-time position with respect to the proper time along the trajectory. But in this post we will instead use the axiomatic definition in (2).

In general, writing \mathbf{v} for the 3-velocity, and as usual in physics, \mathbf{v}^2 for the square of its 3-vector Euclidean norm

\displaystyle ||\mathbf{v}|| = \sqrt{v_x^2+ v_y^2+v_z^2},

axiom (2) implies that (kv_t,v_x,v_y,v_z) = (kv_t, \mathbf{v}v_t). By symmetry invariance, the Lorentzian norm squared of the tangent vector is (c^2 - \mathbf{v}^2) v_t^2, and the Euclidean norm squared is (1+\mathbf{v}^2)v_t^2 (using the assumption that k=1). Now since (c^2 - \mathbf{v}^2)v_t^2 has units (distance/time)^2, and we care only about the direction of the tangent vector, not its magnitude, it is most convenient (and usual in physics, although I’ve never seen it clearly explained why) to require, as in axiom (3), that the Lorentzian length is c.

Thus in our universe (c^2-\mathbf{v}^2)v_t^2 = c^2, and so v_t = c/\sqrt{c^2-\mathbf{v}^2} = 1/\sqrt{1-\mathbf{v}^2/c^2} = \gamma(\mathbf{v}), where \gamma(\mathbf{v}) is the usual dimensionless Lorentz factor. In the Orthogonal universe v_t = 1/\sqrt{1+\mathbf{v}^2}; this is again dimensionless because the 1 in the numerator implicitly has units distance/time.

Therefore the 4-vector velocity is

\displaystyle \begin{aligned} (cv_t,v_x,v_y,v_z) &= (c\gamma(\mathbf{v}), \mathbf{v}\gamma(\mathbf{v}) ) \\ &= \bigl(\frac{c}{\sqrt{1-\mathbf{v}^2/c^2}}, \frac{\mathbf{v}}{\sqrt{1-\mathbf{v}^2/c^2}}\bigr) \end{aligned}

in our universe and

\displaystyle (v_t,v_x,v_y,v_z) = \bigl(\frac{1}{\sqrt{1+\mathbf{v}^2}}, \frac{\mathbf{v}}{\sqrt{1+\mathbf{v}^2}} \bigr)

in the Orthogonal universe.

According to (3) the familiar 3-vector momentum is obtained by multiplying the spatial component of each of these 4-vector velocities by the rest mass M. In our universe, this agrees with special relativity: a particle moving at 3-vector velocity \mathbf{v} is heavier according to a stationary observer by the factor \gamma(\mathbf{v}). In the Orthogonal universe, we find instead that the 3-vector momentum is M\mathbf{v}/\sqrt{1+\mathbf{v}^2}, so fast moving particles are lighter according to a stationary observer.

Again by (3), the energy E of the particle is given by E/c = Mc \gamma(\mathbf{v}) in our universe, and E = M/\sqrt{1+\mathbf{v}^2} in the Orthogonal universe. Substituting in the 4-vectors, we find that in our universe (E/c, M\gamma(\mathbf{v})\mathbf{v}) has Lorentzian length Mc, and in the Orthogonal universe, (E, M\mathbf{v}/\sqrt{1+\mathbf{v}^2}) has Euclidean length M. Equivalently, energy and momentum are related in our universe by

(E/c)^2 - \mathbf{p}^2 = M^2c^2

and in the Orthogonal universe by

E^2 + \mathbf{p}^2 = M^2.

The former is one statement of the energy-momentum relation between the energy E and the 3-momentum \mathbf{p}. Observe that the Orthogonal energy-momentum relation can be stated very neatly as E = M \cos \Gamma and || \mathbf{p} || = M \sin \Gamma, where \Gamma \in [0, \pi/2]. The analogue for our universe, using the identity \cosh^2 \Gamma - \sinh^2\Gamma = 1, is E = Mc^2 \cosh \Gamma and ||\mathbf{p} || = Mc \sinh \Gamma. We show both by the triangles below, noting that only the Orthogonal triangle is a genuine geometric representation, rather than a helpful aide-memoire.
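Working in units with c = 1, the two energy-momentum relations are easy to spot-check numerically; a sketch:

```python
import math

def energy_momentum_ours(M, v, c=1.0):
    # our universe: E = M c^2 gamma(v), p = M v gamma(v)
    gamma = 1.0 / math.sqrt(1.0 - v * v / (c * c))
    return M * c * c * gamma, M * v * gamma

def energy_momentum_orthogonal(M, v):
    # Orthogonal universe: E = M / sqrt(1 + v^2), p = M v / sqrt(1 + v^2)
    s = math.sqrt(1.0 + v * v)
    return M / s, M * v / s

c, M, v = 1.0, 2.0, 0.6
E, p = energy_momentum_ours(M, v, c)
assert abs((E / c) ** 2 - p ** 2 - (M * c) ** 2) < 1e-12   # Lorentzian length Mc
E, p = energy_momentum_orthogonal(M, v)
assert abs(E ** 2 + p ** 2 - M ** 2) < 1e-12               # Euclidean length M
```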

Collision setup

We now need one final pair of axioms

  1. In any system of particles, the sum of all 3-vector momenta is conserved;
  2. In any system of particles, the sum of all energies (as recorded in the time component of 4-vector momenta) is conserved.

The diagrams below show a stationary particle with mass m hit by a heavy particle with mass M moving in the x direction (and also in time, of course). Again click on the diagram for a larger pdf; four final zero coordinates got clipped off in the scan.

We can suppose that after the collision motion is contained in txy-space (of which the diagram above shows only the xy-plane), so all z-coordinates are zero: let \phi (oriented clockwise) and \theta (oriented anticlockwise) be the deflection angles of the lighter and heavier particle. The 4-momenta conservation equations are, in our universe,

\begin{aligned} mc + Mc \cosh \Gamma &= mc \cosh \delta + Mc \cosh \Delta \\ Mc \sinh \Gamma &= mc \sinh \delta \cos \phi + Mc \sinh \Delta \cos \theta \\ 0 &= -mc \sinh \delta \sin \phi + Mc \sinh \Delta \sin \theta. \end{aligned}

Notice that these are homogeneous in c. We can therefore divide through by c and obtain the equations for the Orthogonal universe by replacing hyperbolic functions with trigonometric functions throughout. As a further small simplification we introduce T = m + M\, \mathrm{co}\; \Gamma, writing \mathrm{co} for \cosh in our universe and \cos in the Orthogonal universe (and similarly \mathrm{si} below), and observe that for our universe

(T-m)^2/M^2 - 1 = \cosh^2 \Gamma - 1 = \sinh^2 \Gamma

and for the Orthogonal universe,

-(T-m)^2/M^2 + 1 = -\cos^2 \Gamma + 1 = \sin^2 \Gamma.

Therefore a unified system of equations is

\begin{aligned} T &= m\, \mathrm{co}\; \delta + M\, \mathrm{co}\; \Delta \\ \sqrt{|(T-m)^2 - M^2|} &= m\, \mathrm{si}\; \delta \cos \phi + M\, \mathrm{si}\; \Delta \cos \theta \\ 0 &= -m\, \mathrm{si}\; \delta \sin \phi + M\, \mathrm{si}\; \Delta \sin \theta \end{aligned}

where \mathrm{co}, \mathrm{si} are \cosh, \sinh in our universe and \cos, \sin in the Orthogonal universe. The expression (T-m)^2 - M^2 is positive in our universe and negative for the Orthogonal universe; it has the three constants m, M and T (all with units of mass) which we regard as determined by the experimental setup. The unknowns are \delta, \Delta, \phi and \theta, appearing only on the right-hand sides.

Numerical solutions

It is routine to solve these equations numerically using NSolve in Mathematica. As an example, we suppose that the heavier particle has three times the mass of the lighter, so m/M = 1/3. The graph below shows the post-collision velocity V of the heavier particle for each of the two possible deflection angles \theta, as the energy of the heavier particle increases from least (red) to greatest (violet).

This agrees with the experiment conducted in Orthogonal where the heavier particle is a photon, and the characters observe the deflection angle by visual observation of a system of mirrors. The graph below has the same axes, comparing the post-collision velocity in the Orthogonal universe (solid) and our universe (dashed), choosing the energies to get comparable shapes. Notice that the maximum deflection angle does not depend on the energy of the heavier particle, or even on which universe we are in: we show below that its sine is the ratio of the masses. Thus in these graphs it is \sin^{-1} 1/3 \approx 0.33984.

The graph below is the equivalent of the first graph showing the final energy of the heavier particle. If you are surprised that red is now at the top, note that in Orthogonal physics, the energy is M/\sqrt{1+V^2}, so is lower for faster moving particles.
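For readers without Mathematica, the maximum deflection angle can also be found by a crude scan (a sketch; the grid and the sample values m = 1, M = 3, \Gamma = 0.8 are my choices): for each \delta the energy equation determines \Delta, and \theta then comes from the triangle formed by the two outgoing momenta and the total momentum.

```python
import math

def max_deflection(m, M, Gamma, co, si):
    # largest deflection angle theta of the heavier particle
    T = m + M * co(Gamma)
    P = M * si(Gamma)                 # total momentum before the collision
    best = 0.0
    N = 100000
    for k in range(1, N):
        delta = 2.0 * k / N
        x = (T - m * co(delta)) / M   # co(Delta), from energy conservation
        if co is math.cos:
            if not -1.0 <= x <= 1.0:
                continue
            si_Delta = math.sqrt(1.0 - x * x)
        else:
            if x < 1.0:
                continue
            si_Delta = math.sqrt(x * x - 1.0)
        p1, p2 = m * si(delta), M * si_Delta
        if p2 < 1e-12:
            continue
        # law of cosines in the momentum triangle p1 + p2 = (P, 0)
        cos_theta = (P * P + p2 * p2 - p1 * p1) / (2.0 * P * p2)
        if -1.0 <= cos_theta <= 1.0:
            best = max(best, math.acos(cos_theta))
    return best

for co, si in ((math.cos, math.sin), (math.cosh, math.sinh)):
    assert abs(max_deflection(1.0, 3.0, 0.8, co, si) - math.asin(1.0 / 3.0)) < 1e-3
```

In both universes the scan returns \sin^{-1} (m/M), in agreement with the graphs.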

Tangent space equation

Differentiating the momentum conservation equations using that \frac{\mathrm{d}}{\mathrm{d}\alpha} \, \mathrm{si}\ \alpha = \mathrm{co}\ \alpha, we obtain

\begin{aligned} 0 &= m\, \mathrm{co}\; \delta \cos \phi\, \mathrm{d}\delta - m\, \mathrm{si}\; \delta \sin \phi \,\mathrm{d}\phi \\ &\qquad\qquad + M\, \mathrm{co}\; \Delta \cos \theta\, \mathrm{d}\Delta - M\, \mathrm{si}\; \Delta \sin \theta\, \mathrm{d} \theta, \\ 0 &= -m\, \mathrm{co}\; \delta \sin \phi\, \mathrm{d}\delta - m\, \mathrm{si}\; \delta \cos \phi\, \mathrm{d}\phi \\ &\qquad\qquad + M\, \mathrm{co}\; \Delta \sin \theta\, \mathrm{d}\Delta + M\, \mathrm{si}\; \Delta \cos \theta\, \mathrm{d} \theta. \end{aligned}

At an extremum for \theta, we have d \theta = 0. Multiplying the top equation by \cos \phi and the bottom equation by \sin \phi and subtracting, we obtain

\begin{aligned} 0 &= m\, \mathrm{co}\; \delta \cos^2 \phi\, \mathrm{d}\delta - m \, \mathrm{si}\; \delta \sin \phi \cos \phi\, \mathrm{d}\phi \\ & \qquad\qquad\qquad\qquad\qquad\qquad + M\, \mathrm{co}\; \Delta \cos \theta \cos \phi\, \mathrm{d}\Delta \\ &\ + m\, \mathrm{co}\; \delta \sin^2 \phi\, \mathrm{d}\delta + m \, \mathrm{si}\; \delta \cos \phi \sin \phi\, \mathrm{d}\phi \\ &\qquad\qquad\qquad\qquad\qquad\qquad -M\, \mathrm{co}\; \Delta \sin \theta \sin \phi\, \mathrm{d}\Delta \\ & = m\, \mathrm{co}\; \delta (\cos^2 \phi + \sin^2 \phi)\, \mathrm{d}\delta \\ &\qquad\qquad + M\, \mathrm{co}\; \Delta (\cos \theta \cos \phi - \sin \theta \sin \phi)\, \mathrm{d}\Delta \\ &= m\, \mathrm{co}\; \delta\, \mathrm{d}\delta + M \, \mathrm{co}\;\Delta \cos (\theta +\phi) \,\mathrm{d}\Delta. \end{aligned}

Differentiating the equation from energy conservation, using that \frac{\mathrm{d}}{\mathrm{d}\alpha} \cosh \alpha = \sinh \alpha and \frac{\mathrm{d}}{\mathrm{d}\alpha} \cos \alpha = -\sin \alpha, so the same sign appears both times, we get

0 = m \, \mathrm{si} \; \delta \, \mathrm{d}\delta + M \, \mathrm{si}\; \Delta \, \mathrm{d}\Delta.

Comparing these two relations between the differentials \mathrm{d}\delta and \mathrm{d}\Delta, we see that the coefficients must be in the same ratio: that is

\displaystyle \frac{\mathrm{si}\;\delta}{\mathrm{co}\; \delta} = \frac{\mathrm{si}\;\Delta}{\mathrm{co}\;\Delta \cos (\theta + \phi)},

or equivalently,

\displaystyle \frac{\mathrm{ta}\; \Delta}{\mathrm{ta}\; \delta} = \cos (\theta + \phi)

where \mathrm{ta} is \mathrm{tanh} in our universe and \mathrm{tan} in the Orthogonal universe.

Solution for maximum deflection

First step: solving for \delta and \Delta

The sum of the squares of the two equations for momentum is

\begin{aligned} |(T&-m)^2 - M^2| \\ &= \sqrt{|(T-m)^2 - M^2|}^2 + 0^2 \\ &= (m\, \mathrm{si}\; \delta \cos \phi + M\, \mathrm{si}\; \Delta \cos \theta)^2 \\ &\qquad + (-m\, \mathrm{si}\; \delta \sin \phi + M\, \mathrm{si}\; \Delta \sin \theta)^2 \\ &= m^2\, \mathrm{si}^2\; \delta (\cos^2 \phi + \sin^2\phi) + M^2\, \mathrm{si}^2\; \Delta (\cos^2 \theta + \sin^2\theta) \\ &\qquad + 2mM\, \mathrm{si}\; \delta\, \mathrm{si}\; \Delta (\cos \phi \cos \theta - \sin \phi \sin \theta) \\ &= m^2\, \mathrm{si}^2 \; \delta + M^2 \, \mathrm{si}^2\; \Delta + 2mM\, \mathrm{si}\;\delta \, \mathrm{si}\; \Delta \, \cos(\theta+\phi). \end{aligned}

By the equation from the tangent space, at maximum deflection \cos(\theta + \phi) = \frac{\mathrm{ta}\; \Delta}{\mathrm{ta}\; \delta}. Substituting and using

\mathrm{si}\;\delta \, \mathrm{si}\;\Delta\, \frac{\mathrm{ta}\; \Delta}{\mathrm{ta}\; \delta} = \mathrm{co}\; \delta \, \mathrm{si}\;\Delta\, \mathrm{ta}\; \Delta

we get

|(T-m)^2 - M^2| = m^2\, \mathrm{si}^2 \: \delta + M^2 \, \mathrm{si}^2\: \Delta + 2mM\, \mathrm{co}\; \delta \, \mathrm{si}\;\Delta\, \mathrm{ta}\; \Delta

in which the only unknowns are \delta and \Delta. The only equation we have not yet used is the energy conservation equation

T = m \, \mathrm{co} \; \delta + M \, \mathrm{co}\;\Delta.

It implies that

m \, \mathrm{co}\;\delta = T - M\, \mathrm{co}\; \Delta

and, by squaring this relation and subtracting m^2, that

\pm m^2 \, \mathrm{si}^2 \; \delta = (T - M\, \mathrm{co}\; \Delta)^2 - m^2

where the sign is + for our universe and - for the Orthogonal universe. Using these two relations to eliminate m\, \mathrm{co}\; \delta and m^2\, \mathrm{si}^2 \; \delta and recalling that (T-m)^2 - M^2 is positive for our universe and negative for the Orthogonal universe, we obtain

\begin{aligned} \pm &\bigl((T-m)^2 - M^2\bigr) \\ & = \pm \bigl( (T - M\, \mathrm{co}\; \Delta)^2 - m^2\bigr) + M^2 \, \mathrm{si}^2\: \Delta \\ & \qquad  + 2M (T - M\, \mathrm{co}\; \Delta) \, \mathrm{si}\;\Delta\, \mathrm{ta}\; \Delta \end{aligned}

with the same conventions for the universe-dependent signs. Now only \Delta appears. After multiplying through by \pm we can simplify the right-hand side as follows

\begin{aligned} \bigl((T&-m)^2 - M^2\bigr) \\ &= (T - M\, \mathrm{co}\; \Delta)^2 - m^2 \pm M^2 \, \mathrm{si}^2 \; \Delta \pm 2M(T - M\, \mathrm{co}\; \Delta)\, \mathrm{si}\;\Delta\, \mathrm{ta}\; \Delta \\ &= T^2 + M^2 \, \mathrm{co}^2\; \Delta - 2MT\, \mathrm{co}\; \Delta \\ &\qquad - m^2 \pm M^2\, \mathrm{si}^2 \Delta \pm 2MT\, \frac{\mathrm{si}^2\; \Delta}{\mathrm{co}\; \Delta} \mp 2M^2 \, \mathrm{si}^2 \Delta \\ &= T^2 + M^2 (\mathrm{co}^2\;\Delta \mp \mathrm{si}^2\;\Delta) -m^2 \\ &\hspace*{1in} -\frac{2MT}{\mathrm{co}\;\Delta} (\mathrm{co}^2\;\Delta \mp \mathrm{si}^2\;\Delta) \\ &= T^2 + M^2 - m^2- \frac{2MT}{\mathrm{co}\; \Delta}. \end{aligned}

It seems somewhat remarkable to me that all the signs cancel, and we are left with such a simple equation. Perhaps this is a sign that there is some more direct derivation that I am missing. Anyway, cancelling the factors of T^2 on each side and rearranging, we get the final equation

\displaystyle M^2 -m^2 + mT = \frac{MT}{\mathrm{co} \; \Delta}

which clearly has unique solution

\mathrm{co}\; \Delta \displaystyle = \frac{MT}{M^2-m^2+mT}.

Using the relation from energy conservation m \, \mathrm{co}\; \delta = T - M\, \mathrm{co}\;\Delta we immediately get

\displaystyle m\,\mathrm{co}\; \delta = T - \frac{M^2T}{M^2-m^2+mT} = \frac{-m^2T + mT^2}{M^2-m^2+ mT}

hence

\mathrm{co}\; \delta = \displaystyle \frac{T(T-m)}{M^2-m^2+mT}.

Final step: finding \phi and \theta

Substituting these expressions for \mathrm{co}\; \delta and \mathrm{co}\; \Delta into the tangent space equation \cos (\theta+\phi) = \frac{\mathrm{ta}\; \Delta}{\mathrm{ta}\; \delta} we obtain

\begin{aligned} \cos^2 & (\theta+\phi)\\ &= \frac{\mathrm{ta}^2\; \Delta}{\mathrm{ta}^2 \; \delta} \\ &= \frac{\mathrm{co}^2 \; \Delta - 1}{\mathrm{co}^2 \; \delta - 1} \frac{\mathrm{co}^2 \; \delta}{\mathrm{co}^2\; \Delta} \\ &= \frac{M^2T^2 - (M^2-m^2+mT)^2}{T^2(T-m)^2 - (M^2-m^2+mT)^2} \frac{T^2(T-m)^2}{M^2T^2}. \end{aligned}

Again there is a surprising simplification: the numerator M^2T^2 - (M^2-m^2+mT)^2 of the left-hand fraction becomes M^2T^2 - (MT)^2 = 0 if m = M, and so vanishes if m = \pm M. (Of course only the specialization m = M is physically meaningful.) The quotient by M^2-m^2 is T^2-M^2+m^2-2mT. Hence we have

\begin{aligned} M^2&T^2 - (M^2- m^2+mT)^2 \\ & \quad = (M-m)(M+m)(T+M-m)(T-M-m).\end{aligned}

Similarly one can show that the denominator factorizes as

\begin{aligned} & T^2(T-m)^2  - (M^2-m^2+mT)^2 \\ &\ = (T^2+M^2-m^2)(T+M-m)(T-M-m). \end{aligned}
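Both factorizations, being polynomial identities in m, M and T, can be spot-checked by evaluating at a few integer points; a sketch:

```python
def B2(m, M, T):
    # the square (M^2 - m^2 + mT)^2 appearing in both expressions
    return (M * M - m * m + m * T) ** 2

for m, M, T in [(1, 3, 7), (2, 5, 11), (3, 4, 13)]:
    # numerator factorization
    assert (M * T) ** 2 - B2(m, M, T) == \
        (M - m) * (M + m) * (T + M - m) * (T - M - m)
    # denominator factorization
    assert (T * (T - m)) ** 2 - B2(m, M, T) == \
        (T * T + M * M - m * m) * (T + M - m) * (T - M - m)
```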

Therefore the expression above for \cos^2 (\theta + \phi) simplifies to

\cos^2 (\theta + \phi) = \displaystyle \frac{(M-m)(M+m)}{T^2+M^2-m^2} \frac{(T-m)^2}{M^2}.

Using the momentum conservation equation

m\,\mathrm{si}\;\delta \sin \phi = M\, \mathrm{si}\; \Delta \sin \theta

and the same factorizations to simplify \frac{\mathrm{co}^2\;\Delta -1}{\mathrm{co}^2 \; \delta -1} we get

\begin{aligned} 1 &= \frac{M^2\, \mathrm{si}^2 \; \Delta \sin^2 \theta}{m^2\,\mathrm{si}^2 \; \delta \sin^2 \phi} \\ &= \frac{M^2(\mathrm{co}^2 \;\Delta - 1)\sin^2 \theta}{m^2(\mathrm{co}^2 \;\delta - 1)\sin^2 \phi} \\ &= \frac{M^2(M-m)(M+m)}{m^2(T^2+M^2-m^2)} \frac{\sin^2\theta}{\sin^2\phi}.\end{aligned}

Hence

\displaystyle \sin^2 \phi = \frac{M^2(M^2-m^2)}{m^2(T^2+M^2-m^2)} \sin^2 \theta.

Rewriting the equation for \cos^2 (\theta + \phi) so it uses only sines and then substituting for \sin \phi using the equation above gives

\begin{aligned} &\frac{T-m}{M}\sqrt{\frac{M^2-m^2}{T^2+M^2-m^2}} \\ &= \sqrt{1-\sin^2 \theta}\sqrt{1-\sin^2\phi} - \sin \theta \sin \phi \\ &= \sqrt{1-\sin^2 \theta}\sqrt{1 - \frac{M^2(M^2-m^2)}{m^2(T^2+M^2-m^2)} \sin^2 \theta} \\ & \qquad\qquad\qquad\qquad\qquad - \sin^2 \theta \frac{M}{m} \sqrt{\frac{M^2-m^2}{T^2+M^2-m^2}}. \end{aligned}

Hence writing S for \sin^2 \theta we get by a rearrangement and squaring the following quadratic equation in S:

\begin{aligned} (1-S)\bigl(1 - & \frac{M^2(M^2-m^2)}{m^2(T^2+M^2-m^2)} S \bigr) \\ &\quad = \bigl( \frac{T-m}{M} + \frac{SM}{m}\bigr)^2 \frac{M^2-m^2}{T^2+M^2-m^2}. \end{aligned}

It is routine to check that the difference between the two sides is

\displaystyle \frac{(SM^2 - m^2)(M^2-m^2+mT)^2}{M^2m^2(T^2+M^2-m^2)}

and so the unique solution is S = m^2/M^2. (This is one of several places where we can see that there is a maximum deflection angle if and only if m \le M.) Since \theta is positive, we have \sin \theta = m/M, and, as claimed some time ago, the sine of the maximum deflection angle is simply the ratio of the masses.

The equation for \sin^2 \phi above now implies that

\displaystyle \sin^2 \phi = \frac{M^2(M^2-m^2)}{m^2(T^2+M^2-m^2)} \frac{m^2}{M^2} = \frac{M^2-m^2}{T^2+M^2-m^2}

and hence

\displaystyle \sin \phi = \sqrt{\frac{M^2-m^2}{T^2+M^2-m^2}}.

Summary

A somewhat remarkable feature of the unique solution for the maximum deflection

\begin{aligned} \sin \phi &=\sqrt{\frac{M^2-m^2}{T^2+M^2-m^2}} \\ \sin \theta &= \frac{m}{M} \\ \mathrm{co}\; \delta &= \frac{T(T-m)}{M^2-m^2+mT} \\ \mathrm{co}\; \Delta &= \frac{TM}{M^2-m^2+mT} \end{aligned}

where T = m + M \, \mathrm{co}\; \Gamma is the total energy (up to the factor c^2 in our universe), is that it depends on which universe we are working in only by a change from the hyperbolic function \cosh (our universe) to the trigonometric function \cos (Orthogonal universe). Moreover, the maximum deflection angle \theta does not depend on the universe at all.
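As a final check, substituting the closed-form solution back into the three conservation equations gives residuals at the level of floating-point rounding; a sketch with the sample values m = 1, M = 3, \Gamma = 0.8 (my choices):

```python
import math

def residual(m, M, Gamma, co):
    # largest violation of the three conservation equations at the
    # claimed maximum-deflection solution; co is cosh or cos
    T = m + M * co(Gamma)
    d = M * M - m * m + m * T
    co_delta, co_Delta = T * (T - m) / d, T * M / d
    si_delta = math.sqrt(abs(co_delta ** 2 - 1.0))
    si_Delta = math.sqrt(abs(co_Delta ** 2 - 1.0))
    sin_theta = m / M
    sin_phi = math.sqrt((M * M - m * m) / (T * T + M * M - m * m))
    cos_theta = math.sqrt(1.0 - sin_theta ** 2)
    cos_phi = math.sqrt(1.0 - sin_phi ** 2)
    return max(
        abs(m * co_delta + M * co_Delta - T),                  # energy
        abs(m * si_delta * cos_phi + M * si_Delta * cos_theta  # x-momentum
            - math.sqrt(abs((T - m) ** 2 - M * M))),
        abs(-m * si_delta * sin_phi + M * si_Delta * sin_theta),  # y-momentum
    )

assert residual(1.0, 3.0, 0.8, math.cosh) < 1e-9   # our universe
assert residual(1.0, 3.0, 0.8, math.cos) < 1e-9    # Orthogonal universe
```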


Signed binomial matrices of small order

December 28, 2020

Let B(n) denote the n \times n Pascal’s Triangle matrix defined by B(n)_{xy} = \binom{x}{y}, where, as in the motivating paper, we number rows and columns of matrices from 0. More generally, let B_a(n) be the shifted Pascal’s Triangle matrix defined by B_a(n)_{xy} = \binom{x+a}{y+a}. Let J(n)_{xy} = [x+y=n-1] be the involution reversing the order of rows (or columns) of a matrix, and let

D^\pm(n) = \mathrm{Diag}(1,-1,\ldots, (-1)^{n-1}).

Since n is fixed throughout, we shall write B, B_a, J and D^\pm for readability in proofs.

Claim 1. For any a \in \mathbb{N}, and any n \ge 2, the matrix D^\pm(n)B_a(n) is an involution.

Proof. Since n\ge 2, the matrix is not the identity. Since \bigl( D^\pm B_a \bigr)_{xy} = (-1)^x \binom{x+a}{y+a} we have

\begin{aligned} \bigl( D^\pm B_a D^\pm B_a \bigr)_{xz} &= (-1)^x \sum_y (-1)^y \binom{x+a}{y+a} \binom{y+a}{z+a} \\ &= (-1)^x \sum_y \binom{x+a}{z+a}\binom{x-z}{y-z}(-1)^y \\ &= (-1)^{x+z} \binom{x+a}{z+a} \sum_r \binom{x-z}{r}(-1)^r \\ &= (-1)^{x+z} \binom{x+a}{z+a} [x=z] (-1)^{x+z} \\ &= [x=z] \end{aligned}

as required. \quad\Box
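Claim 1 is easy to confirm for small cases with exact integer arithmetic; a sketch:

```python
from math import comb

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[x][y] * B[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

def DpmBa(n, a):
    # (D^pm(n) B_a(n))_{xy} = (-1)^x binom(x + a, y + a)
    return [[(-1) ** x * comb(x + a, y + a) for y in range(n)] for x in range(n)]

I = lambda n: [[int(x == z) for z in range(n)] for x in range(n)]

for n in (2, 3, 5, 8):
    for a in (0, 1, 4):
        M = DpmBa(n, a)
        assert mat_mul(M, M) == I(n)   # D^pm(n) B_a(n) is an involution
```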

Claim 2. We have B(n)^{-1} J(n) B(n) = D^\pm(n) J(n) B(n) J(n) and B(n) J(n) B(n)^{-1} = (-1)^{n-1} J(n) B(n) J(n) D^\pm(n).

Proof. The alternating sum used to prove Claim 1 can be used to show that B(n)^{-1}_{xy} = (-1)^{x+y} \binom{x}{y}. Hence

\begin{aligned} \bigl(B(n)^{-1}&J(n)B(n) \bigr)_{xz} \\ &= \sum_{y} (-1)^{x+y} \binom{x}{y} \binom{n-1-y}{z} \\ &= (-1)^x\sum_y (-1)^y \binom{x}{y} (-1)^{n-1-y-z} \binom{-z-1}{n-1-y-z} \\ &= (-1)^{x+n-1-z} \binom{x-z-1}{n-1-z} \\ &= (-1)^{x+n-1-z} \binom{n-1-x}{n-1-z} (-1)^{n-1-z} \\ &= (-1)^x \binom{n-1-x}{n-1-z} \\ &= \bigl( D^\pm(n)J(n)B(n)J(n) \bigr)_{xz}\end{aligned}

as required. The second identity is proved very similarly. \quad\Box

Claim 3. For any n \ge 2, the matrix D^\pm(n)J(n)B(n) has order 3 if n is odd and order 6 if n is even.

Proof. Since \bigl( D^\pm J B \bigr)_{xy} = (-1)^x \binom{n-1-x}{y} we have

\begin{aligned} \bigl( D^\pm & J B D^\pm J B D^\pm J B \bigr)_{xw} \\ &= (-1)^x \sum_y \sum_z (-1)^{y+z} \binom{n-1-x}{y}\binom{n-1-y}{z} \\ & \hspace*{3in} \binom{n-1-z}{w} \\ &= (-1)^{x+n-1} \sum_z \Bigl( \sum_y \binom{n-1-x}{y} \binom{-z-1}{n-1-y-z} \Bigr) \\& \hspace*{3in} \binom{n-1-z}{w} \\ &= (-1)^{x+n-1} \sum_z \binom{n-2-x-z}{n-1-z}\binom{n-1-z}{w} \\ &= (-1)^{x} \sum_z (-1)^{z}\binom{x}{n-1-z}\binom{n-1-z}{w} \\ &= (-1)^{x+n-1} \sum_r (-1)^r \binom{x}{r} \binom{r}{w} \\&= (-1)^{x+n-1} (-1)^x [x=w] = (-1)^{n-1}[x=w]. \end{aligned}

It is easily seen that the matrix is not a scalar multiple of the identity, therefore it has the claimed order. \quad\Box
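The computation (D^\pm(n)J(n)B(n))^3 = (-1)^{n-1}I can be confirmed mechanically for small n; a sketch:

```python
from math import comb

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[x][y] * B[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

def DJB(n):
    # (D^pm(n) J(n) B(n))_{xy} = (-1)^x binom(n - 1 - x, y)
    return [[(-1) ** x * comb(n - 1 - x, y) for y in range(n)] for x in range(n)]

for n in range(2, 9):
    M = DJB(n)
    cube = mat_mul(mat_mul(M, M), M)
    sign = (-1) ** (n - 1)
    assert cube == [[sign * int(x == z) for z in range(n)] for x in range(n)]
```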

Alternative proof. Another proof uses Claims 1 and 2, as follows

\begin{aligned} (JBD^\pm)^3 &= JBD^\pm\, JBD^\pm\, JBD^\pm \\ &= (-1)^{n-1} JBJ D^\pm B D^\pm JBD^\pm \\ &= (-1)^{n-1} JBJ B^{-1} JBD^\pm \\ &= (-1)^{n-1} D^\pm B^{-1}JB\, B^{-1} JBD^\pm \\ &= (-1)^{n-1} D^\pm B^{-1} B D^\pm \\ &= (-1)^{n-1}I, \end{aligned}

using in turn that D^\pm J = (-1)^{n-1} J D^\pm, that D^\pm B D^\pm = B^{-1} by Claim 1, and that JBJ = D^\pm B^{-1}JB by Claim 2,

with the end as before. \quad\Box

Let X^\circ denote the half-turn rotation of an n \times n matrix X, as defined by X^\circ = J(n)XJ(n). By Claim 3,

B(n)D^\pm(n)B(n)^\circ D^\pm = BD^\pm J B J D^\pm

is conjugate to JD^\pm BD^\pm JB and so to (-1)^{n-1} (D^\pm JB)(D^\pm JB). Hence this matrix has order 3 when n is odd and order 6 when n is even. We state without proof some further identities along these lines.

Claim 4. The matrix B(n)D^\pm(n)B_n(n)^\circ D^\pm(n) has order 3. If n is even then B(n)D^\pm(n)B_1(n)^\circ D^\pm(n) has order 12. If n is odd then B_1(n)D^\pm(n)B_1(n)^\circ D^\pm(n) has order 3.

The second case seems particularly remarkable. There are some obstructions (related to root systems) to integer matrices having finite order. These identities were discovered as a spin-off of a somewhat involved construction with linear algebra; I have no idea how to motivate them in any other way. For instance, how, looking at

B(6)D^\pm(6)B_1(6)^\circ D^\pm(6) = \left( \begin{matrix} 1 & -6 & 15 & -20 & 15 & -6 \\ 1 & -5 & 10 & -10 & 5 & -1 \\ 1 & -4 & 6 & -4 & 1 & 0 \\ 1 & -3 & 3 & -1 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & 0 & 0 \end{matrix} \right)

would one guess that it has order 12? A computer search found many more examples involving more lengthy products of the signed binomial matrices and their rotations, for instance

B(5)D^\pm B_4(5)^\circ D^\pm B(5)D^\pm B_8(5)^\circ D^\pm 

has order 12.
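Both orders are quick to confirm by machine; a sketch (helper names are mine):

```python
from math import comb

def mul(A, B):
    n = len(A)
    return [[sum(A[x][y] * B[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

def prod(*Ms):
    R = Ms[0]
    for M in Ms[1:]:
        R = mul(R, M)
    return R

def order(M, bound=30):
    n = len(M)
    I = [[int(x == z) for z in range(n)] for x in range(n)]
    P = [row[:] for row in M]
    for k in range(1, bound + 1):
        if P == I:
            return k
        P = mul(P, M)
    return None

B = lambda n, a=0: [[comb(x + a, y + a) for y in range(n)] for x in range(n)]
D = lambda n: [[(-1) ** x * int(x == y) for y in range(n)] for x in range(n)]
rot = lambda M: [[M[len(M) - 1 - x][len(M) - 1 - y] for y in range(len(M))]
                 for x in range(len(M))]   # half-turn rotation J X J

M6 = prod(B(6), D(6), rot(B(6, 1)), D(6))
assert M6[0] == [1, -6, 15, -20, 15, -6]   # matches the displayed matrix
assert order(M6) == 12

M5 = prod(B(5), D(5), rot(B(5, 4)), D(5), B(5), D(5), rot(B(5, 8)), D(5))
assert order(M5) == 12
```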


Completely monotone sequences

December 20, 2020

A sequence \lambda_0, \lambda_1, \ldots, \lambda_n is monotone decreasing if \lambda_0 \ge \lambda_1 \ge \lambda_2 \ge \ldots, or equivalently, if \lambda_j - \lambda_{j+1} \ge 0 for all j \in \mathbb{N}_0. We say that such a sequence is completely monotone if all its iterated differences, including the zeroth differences, are non-negative. That is, \lambda_j \ge 0, \lambda_j - \lambda_{j+1} \ge 0, \lambda_j - 2\lambda_{j+1} + \lambda_{j+2} \ge 0, \lambda_j - 3\lambda_{j+1} + 3\lambda_{j+2} - \lambda_{j+3} \ge 0 for all j \in \mathbb{N}_0, and so on. Equivalently,

\displaystyle \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_{j+i} \ge 0

for all j, k \in \mathbb{N}_0. One family of examples I stumbled across, in what seemed like a completely unrelated context of random walks defined on posets with an anti-involution, is the two-parameter family

\displaystyle \lambda^{(a,b)}_j = \frac{\binom{a+b}{a}(a+b+1)}{\binom{a+b+j}{b}(a+b+j+1)}.

For instance, \lambda^{(0,0)}_j = \frac{1}{j+1} for each j \in \mathbb{N}_0. For a direct proof that this sequence is completely monotone, we use the \beta-integral \int_0^1 x^a(1-x)^b \mathrm{d}x = \frac{1}{\binom{a+b}{a}(a+b+1)} to write

\displaystyle\begin{aligned}\sum_{i=0}^k& (-1)^i \binom{k}{i} \frac{\lambda^{(a,b)}_{j+i}}{\binom{a+b}{a}(a+b+1)} \\ &= \sum_{i=0}^k (-1)^i \binom{k}{i} \int_0^1 x^{a+j+i}(1-x)^b \mathrm{d}x \\ &= \int_0^1 x^{a+j}(1-x)^{b+k} \mathrm{d}x \\ &= \frac{1}{\binom{a+b+j+k}{b+k}(a+b+j+k+1)} \\ &= \frac{\lambda^{(a,b+k)}_j}{\binom{a+b+k}{a}(a+b+k+1)}.\end{aligned}
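The whole computation can be replayed in exact rational arithmetic; the sketch below checks both the non-negativity and the closed form of the iterated differences for small a, b, j and k.

```python
from fractions import Fraction
from math import comb

def lam(a, b, j):
    # lambda^{(a,b)}_j as an exact rational
    return Fraction(comb(a + b, a) * (a + b + 1),
                    comb(a + b + j, b) * (a + b + j + 1))

def diff(a, b, j, k):
    # k-th iterated difference of lambda^{(a,b)} at position j
    return sum((-1) ** i * comb(k, i) * lam(a, b, j + i) for i in range(k + 1))

for a in range(3):
    for b in range(3):
        for j in range(5):
            for k in range(5):
                d = diff(a, b, j, k)
                assert d >= 0   # complete monotonicity
                scale = Fraction(comb(a + b, a) * (a + b + 1),
                                 comb(a + b + k, a) * (a + b + k + 1))
                assert d == scale * lam(a, b + k, j)   # the identity above
```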

In the same context, I came across some multivariable polynomials that I conjecture are always positive when evaluated at a completely monotone sequence. These polynomials include

\displaystyle \begin{aligned}f_1(x_0,x_1) &= x_0-x_1 \\ f_2(x_0,x_1,x_2) &= x_0^2 - 2x_0x_1 + 2x_0x_2 - x_1x_2 \\ f_3(x_0,x_1,x_2,x_3) &= x_0^3 - 3 x_0^2 x_1 + 5 x_0^2 x_2 - 3 x_0 x_1 x_2\\ & - 3 x_0^2 x_3 + 5 x_0 x_1 x_3 - 3 x_0 x_2 x_3 + x_1 x_2 x_3. \end{aligned}

Their positivity is obvious for f_1, because f_1 is equal to a difference. For f_2 positivity follows from

\displaystyle f_2(x_0,x_1,x_2) = x_0(x_0-2x_1+x_2) + (x_0-x_1)x_2.

Despite much struggling, I was unable to find a similar expression for f_3 as a sum of products of positive differences. The main purpose of this post is to show that such expressions exist for linear functions, but not in general. I therefore may well have been on a wild-goose chase, hardly for the first time.
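Short of a positivity certificate, the conjecture can at least be tested: moment sequences \lambda_j = \int_0^1 x^j \mathrm{d}\mu(x) are completely monotone, since the k-th difference is \int_0^1 x^j(1-x)^k \mathrm{d}\mu(x) \ge 0, so mixtures of geometric sequences t^j with t \in [0,1] give exact test cases. A sketch:

```python
from fractions import Fraction

def f2(x0, x1, x2):
    return x0 * x0 - 2 * x0 * x1 + 2 * x0 * x2 - x1 * x2

def f3(x0, x1, x2, x3):
    return (x0**3 - 3 * x0**2 * x1 + 5 * x0**2 * x2 - 3 * x0 * x1 * x2
            - 3 * x0**2 * x3 + 5 * x0 * x1 * x3 - 3 * x0 * x2 * x3
            + x1 * x2 * x3)

# completely monotone test sequences: geometric, 1/(j+1), and a mixture
samples = [[t ** j for j in range(4)]
           for t in (Fraction(1, 4), Fraction(1, 2), Fraction(9, 10))]
samples.append([Fraction(1, j + 1) for j in range(4)])
samples.append([(Fraction(1, 3) ** j + Fraction(2, 3) ** j) / 2
                for j in range(4)])

for s in samples:
    assert f2(*s[:3]) >= 0 and f3(*s) >= 0
```

Of course a handful of samples proves nothing, but it does make the conjecture easy to stress-test on further sequences.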

Linear polynomials

Fix n \in \mathbb{N}. Let \mathcal{C} \subseteq \mathbb{R}^n be the cone of all completely monotone sequences, as defined by the inequalities at the top of this post. Let \mathcal{D} \subseteq \mathbb{R}^n be the cone spanned by the coefficients in these inequalities.

To make these cones more explicit, the following notation is helpful. Let

\Delta^k \lambda_j = \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_{j+i}.

Then \Delta^{k-1} \lambda_j - \Delta^{k-1} \lambda_{j+1} = \Delta^{k} \lambda_j, and so

\Delta^k \lambda_{n-1-k} + \Delta^{k-1} \lambda_{n-k} = \Delta^{k-1}\lambda_{n-1-k}.

It follows inductively that \mathcal{D} is the set of non-negative linear combinations of the vectors v^{(k)} defined by

v^{(k)}_{n-1-k+i} = (-1)^i\binom{k}{i} \quad \text{for } 0 \le i \le k,

with all other entries zero; thus v^{(k)} is the coefficient vector of the inequality \Delta^k \lambda_{n-1-k} \ge 0.

For instance, if n=5 then v^{(0)}, v^{(1)}, v^{(2)}, v^{(3)}, v^{(4)} are the rows of the 5 \times 5 matrix below

\left( \begin{matrix} \cdot & \cdot & \cdot & \cdot & 1 \\ \cdot & \cdot & \cdot & 1 & -1 \\ \cdot & \cdot & 1 & -2 & 1 \\ \cdot & 1 & -3 & 3 & -1 \\ 1 & -4 & 6 & -4 & 1 \end{matrix} \right)

Suppose that \sum_{i=0}^{n-1} a_i \lambda_i \ge 0 for all completely monotone sequences. That is, a \cdot \lambda \ge 0 for all \lambda \in \mathcal{C}; we write this as a \cdot \mathcal{C} \ge 0. By Farkas’ Lemma applied to the cone \mathcal{D}, either

  • a \in \mathcal{D}, or
  • there exists \lambda \in \mathbb{R}^n such that \lambda \cdot \mathcal{D} \ge 0 and a \cdot \lambda < 0.

In the second case, since \lambda \cdot \mathcal{D} \ge 0, the sequence \lambda is completely monotone. But then a \cdot \lambda < 0 contradicts the hypothesis on a. Therefore a \in \mathcal{D}. So we have a simple necessary and sufficient condition for a linear polynomial to be non-negative on all completely monotone sequences: this holds if and only if its coefficient vector is a non-negative linear combination of the vectors v^{(0)}, v^{(1)}, \ldots, v^{(n-1)}.
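This criterion is easy to make algorithmic. Inverting the binomial transform (a short calculation not spelled out in the post, so treat the formula as my own derivation) gives the unique expansion a = \sum_k c_k v^{(k)} with c_k = \sum_j \binom{n-1-j}{k} a_j; the linear form is then non-negative on \mathcal{C} exactly when every c_k is non-negative. A Haskell sketch:

```haskell
-- Ordinary binomial coefficient, zero outside 0 <= k <= n.
binom :: Int -> Int -> Integer
binom n k
  | k < 0 || k > n = 0
  | otherwise =
      product [ fromIntegral (n - i) | i <- [0 .. k - 1] ] `div` product [1 .. fromIntegral k]

-- Coefficients c_k in the unique expansion a = sum_k c_k v^{(k)}.
coneCoeffs :: [Integer] -> [Integer]
coneCoeffs a =
  [ sum [ binom (n - 1 - j) k * (a !! j) | j <- [0 .. n - 1] ] | k <- [0 .. n - 1] ]
  where n = length a

-- A linear form is non-negative on all completely monotone sequences
-- exactly when all its cone coefficients are non-negative.
nonNegOnCM :: [Integer] -> Bool
nonNegOnCM = all (>= 0) . coneCoeffs
```

For instance nonNegOnCM [1,-1] holds (this is f_1 = \Delta^1 \lambda_0), while nonNegOnCM [-1,1,0] fails, as it must since \lambda_1 \le \lambda_0.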

Quadratic example

I had hoped that something similar might work for quadratic and higher degree polynomials, using analogously defined cones coming from the coefficients of products of the differences. Here is a counterexample to this hope.

Take n=3 and order monomials lexicographically x_0^2, x_0x_1, x_0x_2, x_1^2, x_1x_2, x_2^2. The homogeneous quadratics in the difference polynomials x_0, x_1,x_2, x_0-x_1, x_1-x_2, x_0-2x_1+x_2 span the same subspace as the rows of the matrix below.

\left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 1 & 0 & -2 & 1 \\ 0 & 1 & -1 & -2 & 3 & -1 \\ 1 & -4 & 2 & 4 & -4 & 1 \end{matrix}\right)

For instance, the penultimate row has the coefficients for (\lambda_0-2\lambda_1+\lambda_2)(\lambda_1-\lambda_2). Define

g(x_0,x_1,x_2) = (x_0-x_1)(x_0-x_1-x_2)+x_1^2.

Claim. g(\lambda_0, \lambda_1, \lambda_2) \ge 0 for all completely monotone sequences (\lambda_0,\lambda_1,\lambda_2).

Proof. Since the polynomial is homogeneous, we may assume that \lambda_0 = 1; then 0 \le \lambda_2 \le \lambda_1 \le 1. If \lambda_1 = 1 then complete monotonicity forces \lambda_2 = 1, and g(1,1,1) = 1 > 0, so we may assume \lambda_1 < 1. Since g is linear in x_2 with x_2-coefficient -(1-\lambda_1) < 0, if g(1,\lambda_1,\lambda_2) < 0 then

\displaystyle \lambda_2 > \frac{(1-\lambda_1)^2+\lambda_1^2}{1-\lambda_1} = \frac{1-2\lambda_1 + 2\lambda_1^2}{1-\lambda_1}

This inequality implies that

\displaystyle \lambda_1 - \lambda_2 \le \frac{-1 + 3\lambda_1 - 3\lambda_1^2}{1-\lambda_1}

but the quadratic 1 - 3y + 3y^2 = 3(y-\frac{1}{2})^2 + \frac{1}{4} is always at least \frac{1}{4}. Since 0 < 1-\lambda_1 \le 1, it follows that \lambda_1 -\lambda_2 \le -\frac{1}{4}, contradicting complete monotonicity. \qquad\Box
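The claim can also be checked numerically. The sketch below (my own sanity check, not from the post) evaluates g exactly at every point of a rational grid of completely monotone triples with \lambda_0 = 1.

```haskell
import Data.Ratio ((%))

-- The quadratic g from the post.
g :: Rational -> Rational -> Rational -> Rational
g x0 x1 x2 = (x0 - x1) * (x0 - x1 - x2) + x1 * x1

-- All (l1, l2) on a grid of step 1/d such that (1, l1, l2) is completely
-- monotone: entries non-negative, decreasing, and 1 - 2*l1 + l2 >= 0.
cmGrid :: Integer -> [(Rational, Rational)]
cmGrid d =
  [ (l1, l2)
  | a <- [0 .. d], b <- [0 .. d]
  , let l1 = a % d
  , let l2 = b % d
  , l1 <= 1, l2 <= l1, 1 - 2 * l1 + l2 >= 0 ]

-- Sanity check for the claim: g is non-negative at every grid point.
claimHolds :: Integer -> Bool
claimHolds d = all (\(l1, l2) -> g 1 l1 l2 >= 0) (cmGrid d)
```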

Claim. g is not a positive linear combination of homogeneous quadratics in the differences x_0,x_1,x_2, x_0-x_1,x_1-x_2, x_0-2x_1+x_2.

Proof. It is equivalent to show that the coefficient vector of g, namely (1, -2, -1, 2, 1, 0), is not in the cone of positive linear combinations of the rows of the 6 \times 6 matrix above. Using the computer algebra system Magma, one can compute the dual cone, which turns out to be the set of positive linear combinations of the rows of the matrix below.

\left(\begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 4 & 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 1 & 0 & 0 & 0 \\ 4 & 2 & 0 & 1 & 0 & 0 \\ 4& 3 & 2 & 2 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{matrix}\right)

In particular, the dual cone contains (2,1,1,0,0,0), and since

(1, -2, -1, 2, 1, 0) \cdot (2,1,1,0,0,0) = -1,

(1,-2,-1,2,1,0) is not in the dual of the dual cone, as required.\qquad\Box
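Everything here except the Magma computation itself is easy to verify by machine: each row of the claimed dual matrix should pair non-negatively with each row of the original matrix, while g's coefficient vector pairs to -1 with (2,1,1,0,0,0). A quick check (the names are mine):

```haskell
-- Rows of the two matrices from the post.
primal, dual :: [[Integer]]
primal = [ [0,0,0,0,0,1], [0,0,0,0,1,-1], [0,0,0,1,-2,1]
         , [0,0,1,0,-2,1], [0,1,-1,-2,3,-1], [1,-4,2,4,-4,1] ]
dual   = [ [1,0,0,0,0,0], [4,1,0,0,0,0], [2,1,1,0,0,0]
         , [4,2,0,1,0,0], [4,3,2,2,1,0], [1,1,1,1,1,1] ]

dot :: [Integer] -> [Integer] -> Integer
dot u v = sum (zipWith (*) u v)

-- Every claimed dual generator pairs non-negatively with every row
-- of the original matrix, as membership in the dual cone requires.
dualityCheck :: Bool
dualityCheck = and [ dot u v >= 0 | u <- primal, v <- dual ]

-- The coefficient vector of g.
gCoeffs :: [Integer]
gCoeffs = [1, -2, -1, 2, 1, 0]
```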

I suspect that there is no easy way (or maybe even no algorithmic way?) to decide if a homogeneous polynomial in a completely monotone sequence is non-negative, but would be very happy to be proved wrong on this.


Immanants and representations of the infinite symmetric group

October 12, 2020

This is a reminder to the author of a wild question. (I won’t even call it an idea.) Given an n \times n matrix X and a symmetric group character \chi, the immanant d_\chi(X) is defined by

\displaystyle \sum_{\sigma \in \mathrm{Sym}_n} \chi(\sigma) \prod_{i=1}^n X_{(i,\sigma(i))}.

Thus when \chi is the sign character the immanant is the familiar determinant, and when \chi is the trivial character, it is the permanent, of interest partly because of the Permanent dominance conjecture and its starring role in Valiant’s programme to prove the algebraic analogue of the P \not= NP conjecture. Very roughly stated, a more general conjecture is that all immanants, except for the determinant (or multiples of it), are hard to compute.
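A direct (exponential-time) implementation of the definition is short; the sketch below sums over all n! permutations, with sign giving the sign character and const 1 the trivial one (the names are mine).

```haskell
import Data.List (permutations, tails)

-- Sign of a permutation in one-line form, via its inversion count.
sign :: [Int] -> Integer
sign p = if odd inversions then -1 else 1
  where inversions = length [ () | (x : ys) <- tails p, y <- ys, x > y ]

-- The immanant d_chi(X): sum over permutations sigma of
-- chi(sigma) times the product of the entries X_{i, sigma(i)}.
immanant :: ([Int] -> Integer) -> [[Integer]] -> Integer
immanant chi x =
  sum [ chi p * product (zipWith (!!) x p) | p <- permutations [0 .. n - 1] ]
  where n = length x

-- The two extreme cases mentioned above.
determinant, permanent :: [[Integer]] -> Integer
determinant = immanant sign
permanent   = immanant (const 1)
```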

My wild question is: can one generalize immanants to representations of infinite symmetric groups, and does this give any extra freedom, in some sense, to `interpolate’ between the trivial and the sign character?


A game on tournaments with multiple edges

August 21, 2020

Here is a quick post to record a combinatorial game that I think might be of interest, but I don’t have time to think about seriously now.

When Person A plays Person B on the online chess server on which I spend far too much time, the server looks to see who has the longest run of consecutive games as black, and chooses that person to be white. Ties are broken randomly. This is a simple but effective rule: for instance it guarantees that in a series of games, the players alternate colours.

Now suppose a team of people want to manipulate the server into giving one of their members a long run of consecutive games as white. (This would of course be strictly against the server policies.) How big must the team be to guarantee that some member gets n consecutive games as white?

A quick look at the small cases reveals the answer. With three people, having arbitrary past histories (this matters below), A plays B and the black player, B say, then plays C. If B is white, then when A plays B again, either A or B establishes a run of two games as white. If B is black (for instance this always happens if C has had a run of two black games), then when A plays C, either A or C gets the same run. Note that in the first case, A never plays C, but instead plays B twice.

With four people, we use the strategy above to give A a run of two white games, and then using B, C, D, give B say, another run of two white games. Now when A and B play, a run of three white games is established. Continuing inductively we see that n people can force n-1 consecutive white games.

One might also ask for the analogous result where multiple games between the same people are not allowed. If the aim is instead to maximize the number of games as white, then we obtain the combinatorial game where the ‘Picker’ picks an edge of the complete graph K_n and the ‘Chooser’ chooses its orientation. Since in the end all edges are picked exactly once, the role of the ‘Picker’ is defunct: the Chooser can choose an n vertex tournament before play begins, and follow its recommendations. By Theorem 1.2 in this article of Alspach and Gavlas, when n is odd, K_n can be decomposed into n(n-1)/2m cycles of length m if and only if 3 \le m \le n and m divides n(n-1)/2. Directing each cycle in an arbitrary way then gives a tournament in which every vertex has the same in- and out-degree. A simple solution when n is prime is to take m=n and use the cycles (0,d,2d,\ldots,(n-1)d) for 1 \le d \le (n-1)/2 on the vertex set \{0,1,\ldots, n-1\}, working modulo n. When n is even, a discrepancy of 1 between the in- and out-degrees of every vertex is inevitable, and it’s not hard to use the construction for n odd to see that this discrepancy can be achieved.
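For prime n the explicit construction is easy to test. The sketch below (my transcription) builds the tournament from the directed cycles (0, d, 2d, \ldots, (n-1)d) and checks that every vertex has in- and out-degree (n-1)/2.

```haskell
-- Directed edges of the tournament on {0..n-1} built from the cycles
-- (0, d, 2d, ..., (n-1)d) mod n for 1 <= d <= (n-1)/2; n should be an odd prime.
tournament :: Int -> [(Int, Int)]
tournament n =
  [ (j * d `mod` n, (j + 1) * d `mod` n)
  | d <- [1 .. (n - 1) `div` 2], j <- [0 .. n - 1] ]

outDegree, inDegree :: Int -> [(Int, Int)] -> Int
outDegree v es = length [ e | e@(u, _) <- es, u == v ]
inDegree  v es = length [ e | e@(_, w) <- es, w == v ]

-- Regularity check: every vertex has in- and out-degree (n-1)/2.
regular :: Int -> Bool
regular n = all ok [0 .. n - 1]
  where es   = tournament n
        half = (n - 1) `div` 2
        ok v = outDegree v es == half && inDegree v es == half
```

Since each unordered pair appears exactly once among these edges when n is prime, the result really is a tournament, with n(n-1)/2 edges in total.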

Returning to the original setting where we care about runs of white games, this shows that the Chooser can guarantee the longest run is at most the maximum out-degree, namely \lfloor n/2 \rfloor.

I’ll end with two natural questions suggested by this quick analysis.

Question 1

Let G be a finite graph. In each step the Picker chooses an edge of G, and the Chooser directs it; multiple picks are allowed, and the Chooser may choose a different orientation on different picks. Say that vertex v has a run of length m if in the most recent m picks involving this vertex, it was always the source. What is the maximum run length that the Picker can force somewhere in this graph?

Question 2

As Question 1, but now each edge may only be picked once.

For the first question, in the case of the complete graph, the analysis above shows that the Picker can force a run of length n-1; since the Chooser can make the next directed edge involving this vertex an in-edge, this is best possible. For the second question, in the case of the complete graph, the Picker cannot force more than \lfloor n/2 \rfloor, but I do not know whether this is always achievable.