## Representation theory is Fourier analysis

At the Mathematics Department at Royal Holloway we’ve started to read Persi Diaconis’ lecture notes *Group Representations in Probability and Statistics*. Chapter 2 of his book presents an interesting take on representation theory, in which the essential object is a matrix-valued Fourier transform. I found it helpful to think about how this definition relates to (1) Fourier coefficients of complex functions and (2) group algebras and algebra homomorphisms.

Here is Diaconis’ definition. Let $G$ be a finite group and let $P : G \rightarrow \mathbf{C}$ be a function. (In practice, $P$ is often a probability measure on $G$, but we don’t need this for the definition to make sense.) Let $\rho : G \rightarrow \mathrm{GL}_n(\mathbf{C})$ be a representation of $G$; that is, $\rho$ is a group homomorphism from $G$ into the group of invertible $n \times n$ matrices. The Fourier transform of $P$ at the representation $\rho$ is

$\hat{P}(\rho) = \sum_{s \in G} P(s) \rho(s).$

So the Fourier transform is a function from representations to matrices.
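For a finite group this definition is easy to compute with directly. Here is a short Python sketch (the helper names are my own) that evaluates $\hat{P}(\rho)$ for $G = S_3$, taking $\rho$ to be the natural permutation representation and $P$ the uniform distribution:

```python
import itertools
import numpy as np

# Elements of S_3 as tuples: perm[i] is the image of i.
group = list(itertools.permutations(range(3)))

def perm_matrix(perm):
    """The natural permutation representation: rho(s) sends e_i to e_{s(i)}."""
    n = len(perm)
    m = np.zeros((n, n))
    for i, j in enumerate(perm):
        m[j, i] = 1.0
    return m

def fourier_transform(P, rho, group):
    """Diaconis' Fourier transform: sum over s of P(s) * rho(s), a matrix."""
    return sum(P[s] * rho(s) for s in group)

# Example: the uniform distribution on S_3.  For each pair (i, j) exactly
# 2 of the 6 permutations send i to j, so every entry of the result is 1/3.
uniform = {s: 1 / len(group) for s in group}
print(fourier_transform(uniform, perm_matrix, group))
```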

(1) All this works fine if we take $G = \mathbf{R}$, provided we replace the sum with an integral and require the functions $P$ to be periodic, say with period $1$. (So really we are looking at representations of the unit circle.) For each $n \in \mathbf{Z}$ we may define a representation $\rho_n : \mathbf{R} \rightarrow \mathrm{GL}_1(\mathbf{C})$ by

$\rho_n(x) = \bigl( \exp (2\pi i nx) \bigr).$

(The right-hand side is a $1 \times 1$-matrix.) If $f : \mathbf{R} \rightarrow \mathbf{C}$ is a periodic function, then, according to Diaconis’ definition, the Fourier transform of $f$ at $\rho_{-n}$ is the $1 \times 1$ matrix with entry

$\int_0^{1} f(x) \exp(-2\pi i n x) \, dx.$

Up to scaling factors, this is one of the Fourier coefficients of $f$. The coefficient above measures the correlation of $f$ with the function $x \mapsto \exp(2\pi i n x)$. In the general setup, we can define, and measure, the correlation of a probability distribution on a finite group with a matrix representation of that group.
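The circle case can be checked numerically. The sketch below (my own helper name, approximating the integral by a Riemann sum) confirms that $f(x) = \exp(2\pi i \cdot 2x)$ correlates perfectly with frequency $2$ and not at all with frequency $3$:

```python
import numpy as np

def fourier_coefficient(f, n, samples=10_000):
    """Approximate the integral of f(x) exp(-2*pi*i*n*x) over [0, 1]
    by averaging over equally spaced sample points."""
    x = np.arange(samples) / samples
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

f = lambda x: np.exp(2j * np.pi * 2 * x)
print(abs(fourier_coefficient(f, 2)))  # close to 1
print(abs(fourier_coefficient(f, 3)))  # close to 0
```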

(2) To explain the group algebra, I’ll use the example where $G$ is the symmetric group on the set $\{1,2,3\}$. The group algebra $\mathbf{C}G$ of $G$ over $\mathbf{C}$ consists of all formal $\mathbf{C}$-linear combinations of the elements of $G$. For example, one element of $\mathbf{C}G$ is

$(12) - \mathrm{i} (123) + (13).$

The product in $\mathbf{C}G$ comes from the group $G$, so for instance

$(\mathrm{id} + (12))(\mathrm{id} - (123)) = \mathrm{id} + (12) - (123) - (23)$

where $\mathrm{id}$ denotes the identity element of $G$.
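The group-algebra product can be sketched in a few lines of Python, storing an element of $\mathbf{C}G$ as a dictionary from group elements to coefficients (the helper names are my own; permutations are written $0$-indexed as tuples, with `compose(s, t)` applying `t` first):

```python
from collections import defaultdict

def compose(s, t):
    """Apply t first, then s (tuples: perm[i] is the image of i)."""
    return tuple(s[t[i]] for i in range(len(t)))

def algebra_product(a, b):
    """Multiply group-algebra elements stored as {group element: coefficient}."""
    product = defaultdict(complex)
    for s, x in a.items():
        for t, y in b.items():
            product[compose(s, t)] += x * y
    return dict(product)

# 0-indexed versions of id, (12) and (123) in cycle notation.
e, swap12, cycle123 = (0, 1, 2), (1, 0, 2), (1, 2, 0)
a = {e: 1, swap12: 1}         # id + (12)
b = {e: 1, cycle123: -1}      # id - (123)
print(algebra_product(a, b))  # id + (12) - (123) - (23)
```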

The interesting thing is that if we identify probability measures $G \rightarrow \mathbf{C}$ with elements of $\mathbf{C}G$, then the product in the group algebra corresponds to the convolution product on probability measures

$(P \star Q)(s) = \sum_{t \in G} P(st^{-1})Q(t).$

Moreover, under this identification, the Fourier transform with respect to a representation $\rho : G \rightarrow \mathrm{GL}_n(\mathbf{C})$ becomes an algebra homomorphism from $\mathbf{C}G$ into the algebra of all $n \times n$-matrices. So, as we’d hope, the Fourier transform of a convolution of two functions is just the product of the Fourier transforms of each function. In symbols:

$\widehat{P \star Q}(\rho) = \hat{P}(\rho) \hat{Q}(\rho).$

The previous equation can be checked in a few lines without reference to the group algebra; still, it’s nice that there is a decent algebraic explanation for why it holds.
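As a numerical sanity check, the sketch below (helper names my own) verifies the homomorphism property for two random functions on $S_3$, with $\rho$ the natural permutation representation and convolution as defined above:

```python
import itertools
import numpy as np

group = list(itertools.permutations(range(3)))

def compose(s, t):
    """Apply t first, then s."""
    return tuple(s[t[i]] for i in range(len(t)))

def inverse(s):
    inv = [0] * len(s)
    for i, j in enumerate(s):
        inv[j] = i
    return tuple(inv)

def perm_matrix(s):
    """The natural permutation representation of S_3."""
    m = np.zeros((len(s), len(s)))
    for i, j in enumerate(s):
        m[j, i] = 1.0
    return m

def convolve(P, Q):
    """(P * Q)(s) = sum over t of P(s t^{-1}) Q(t)."""
    return {s: sum(P[compose(s, inverse(t))] * Q[t] for t in group)
            for s in group}

def ft(P, rho=perm_matrix):
    """Diaconis' Fourier transform of P at rho."""
    return sum(P[s] * rho(s) for s in group)

rng = np.random.default_rng(0)
P = dict(zip(group, rng.random(len(group))))
Q = dict(zip(group, rng.random(len(group))))

# The transform sends convolution to matrix multiplication.
print(np.allclose(ft(convolve(P, Q)), ft(P) @ ft(Q)))  # True
```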