Here are my lecture notes for a talk I gave yesterday on invariants of finite groups in the graduate student algebra seminar here. Next week I’m talking about quotients of varieties by finite groups, and I’ll post those here as well. As a side note, does anyone know how to get good commutative diagrams in the parts of LaTeX that are available for use on WordPress?

We start out by taking G to be a finite group and fixing a representation \rho:G\to GL(n) of G on V=k^n. Because we are fixing the representation, we will identify G with its image and treat its elements as matrices.

So now, we are interested in studying the polynomial invariants of this action. We define a polynomial invariant to be a polynomial f\in k[x_1,\ldots,x_n] such that f(x)=f(g\cdot x) for all g\in G. A classical example is the action of S_n on k^n by permuting the basis vectors; there the polynomial invariants are exactly the symmetric polynomials. We denote the ring of invariant polynomials by k[x_1,\ldots,x_n]^G.
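For instance, invariance can be checked by direct substitution. Here is a quick sketch (assuming sympy is available; the particular polynomial x^2+y^2+z^2 is just an example of a symmetric polynomial):

```python
# Check that x^2 + y^2 + z^2 is invariant under S_3 acting by permuting
# the variables: f(x) == f(g·x) for every permutation g.
from itertools import permutations
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2  # a symmetric polynomial

for perm in permutations((x, y, z)):
    g_f = f.subs(dict(zip((x, y, z), perm)), simultaneous=True)
    assert sp.expand(g_f - f) == 0
print("invariant under all of S_3")
```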

We will, in fact, only work with homogeneous polynomials: if f is an invariant, then so are the homogeneous components of f, since the action preserves degree.

Definition: We define the Reynolds (or averaging) operator to be the map R_G:k[x_1,\ldots,x_n]\to k[x_1,\ldots,x_n]^G given by f\mapsto \frac{1}{|G|}\sum_{g\in G} f(g\cdot x).

At this point, for this to make sense, we must restrict ourselves to fields whose characteristic does not divide the order of the group, and so for simplicity we will restrict to k of characteristic zero.
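As a toy illustration of the Reynolds operator (a sketch assuming sympy; the two-element matrix group \{I,-I\}\subset GL(2) is chosen purely to keep the computation small):

```python
# The Reynolds operator for the matrix group G = {I, -I} acting on k^2:
# averaging a monomial over G either kills it or produces an invariant.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xvec = sp.Matrix([x1, x2])
G = [sp.eye(2), -sp.eye(2)]  # the group {I, -I} in GL(2)

def reynolds(f):
    """f ↦ (1/|G|) Σ_{g in G} f(g·x)."""
    total = 0
    for A in G:
        gx = A * xvec  # the vector g·x
        total += f.subs({x1: gx[0], x2: gx[1]}, simultaneous=True)
    return sp.expand(total / len(G))

print(reynolds(x1))       # 0: odd-degree monomials average away
print(reynolds(x1**2))    # x1**2
print(reynolds(x1*x2))    # x1*x2
```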

Now, one easy way to find invariants is to take monomials x^\alpha and apply the Reynolds operator to them. This gives us either zero or an invariant of total degree |\alpha|. Thanks to Noether, in 1916, we have the following

Theorem(Noether): Given a finite matrix group G\subset GL(n), we have k[x_1,\ldots,x_n]^G=k[R_G(x^\beta):|\beta|\leq |G|]. In particular, the ring of invariants is generated by finitely many homogeneous invariants.

Proof: Let f\in k[x_1,\ldots,x_n]^G. Then f=\sum_\alpha c_\alpha x^\alpha, and so f=R_G(f)=\sum_\alpha c_\alpha R_G(x^\alpha). Hence every invariant is a linear combination of the R_G(x^\alpha), and so it suffices to show that for each \alpha, R_G(x^\alpha) is a polynomial in the R_G(x^\beta) for |\beta|\leq |G|.

Here’s where Noether showed her brilliance. Rather than look at one \alpha at a time, she chose to look at them all at once, and then apply her knowledge of symmetric polynomials to the problem.

The first step is to look at (x_1+\ldots+x_n)^k=\sum_{|\alpha|=k} a_\alpha x^\alpha, with a_\alpha all positive integers. Now we need a bit of notation. If A\in G is a matrix with entries a_{ij}, we denote by A_i the ith row of A. So A_i\cdot x=a_{i1}x_1+\ldots+a_{in}x_n. So now if \alpha=(\alpha_1,\ldots,\alpha_n) we have (Ax)^\alpha=(A_1\cdot x)^{\alpha_1}\ldots (A_n\cdot x)^{\alpha_n}. Thus, written this way, R_G(x^\alpha)=\frac{1}{|G|}\sum_{A\in G} (A x)^\alpha.

Now we introduce a pile of new variables, u_1,\ldots,u_n and substitute u_iA_i\cdot x for x_i into (x_1+\ldots+x_n)^k. Thus, we have (u_1A_1\cdot x+\ldots +u_nA_n\cdot x)^k=\sum_{|\alpha|=k} a_\alpha (A x)^\alpha u^\alpha.

Summing over A\in G, we get S_k=\sum_{A\in G} (u_1A_1\cdot x+\ldots +u_nA_n\cdot x)^k, which is \sum_{|\alpha|=k} a_\alpha \sum_{A\in G}\left((A x)^\alpha \right)u^\alpha=\sum_{|\alpha|=k} |G| a_\alpha R_G(x^\alpha)u^\alpha. This includes all of the R_G(x^\alpha) because the u^\alpha prevent any cancellation.

Next we note that S_k is the kth power sum of the |G| quantities U_A=u_1A_1\cdot x+\ldots +u_nA_n\cdot x indexed by A\in G. Now, every symmetric function in the U_A is a polynomial in S_1,\ldots,S_{|G|}, because in characteristic zero the first |G| power sums generate the ring of symmetric polynomials in |G| variables.

Thus, for each k we can write S_k=F(S_1,\ldots,S_{|G|}) for some polynomial F, and so we obtain that \sum_{|\alpha|=k} |G| a_\alpha R_G(x^\alpha)u^\alpha is equal to F(\sum_{|\beta|=1} |G|a_\beta R_G(x^\beta)u^\beta,\ldots,\sum_{|\beta|=|G|}|G|a_\beta R_G(x^\beta) u^\beta). Expanding and equating coefficients of u^\alpha, we see that |G|a_\alpha R_G(x^\alpha) is a polynomial in the R_G(x^\beta) for |\beta|\leq |G|. QED.
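To make Noether's bound concrete, here is a small sketch (assuming sympy; the permutation action of S_2 on k^2 is chosen as a toy case): averaging every monomial of degree at most |G|=2 already produces a generating set for the symmetric polynomials in two variables.

```python
# Noether's bound for S_2 permuting (x1, x2): the nonzero Reynolds images
# of the monomials of degree ≤ |G| = 2 generate the invariant ring.
from itertools import permutations
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = (x1, x2)

def reynolds(f):
    """Average f over S_2 acting by permuting the variables."""
    images = [f.subs(dict(zip(xs, p)), simultaneous=True)
              for p in permutations(xs)]
    return sp.expand(sp.Rational(1, len(images)) * sum(images))

monomials = [x1, x2, x1**2, x1*x2, x2**2]  # all monomials with |β| ≤ 2
gens = {reynolds(m) for m in monomials} - {0}
for g in sorted(gens, key=str):
    print(g)
```

Up to scalars the list is x_1+x_2, x_1^2+x_2^2, x_1x_2, a (slightly redundant) generating set for the symmetric polynomials in two variables.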

While this is nice and constructive, it is rather unwieldy. For instance, to compute the n generating invariants of the permutation representation of S_n, we would need to compute the Reynolds operator for every single monomial of degree less than or equal to n!, and no one wants to do that. There is, however, a way to see in advance how many linearly independent invariants of a given degree there are, and it is due to Molien in 1897.

First we look at the special case of linear invariants. That is, invariant polynomials of degree 1. We define Sym^n V to be the representation induced by G on the degree n homogeneous polynomials by change of variables.

Lemma: The number of linearly independent linear invariants is given by tr(Sym^1 R_G)=tr(R_G)=\frac{1}{|G|}\sum_{g\in G} tr(g).

Proof: Let f be a linear function on k^n. Then Sym^1 R_G(f) is either zero or a linear invariant for G. Now we look at the action of Sym^1 R_G on the vector space of linear functionals. As (Sym^1 R_G)^2=Sym^1 R_G, we can write the space as \ker Sym^1 R_G\oplus \mathrm{im}\, Sym^1 R_G, and Sym^1 R_G restricts to the identity on \mathrm{im}\, Sym^1 R_G. Thus, Sym^1 R_G is similar to a diagonal matrix with only ones and zeros along the diagonal, and so tr(R_G)=tr(Sym^1 R_G) is the number of linearly independent linear invariants. QED
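As a sanity check of the lemma (a numpy sketch; the choice of the 3-dimensional permutation representation of S_3 is ours), averaging the traces over the group should count the linear invariants, which here is just the one-dimensional span of x+y+z:

```python
# Average of tr(g) over the permutation matrices of S_3: the trace of a
# permutation matrix is its number of fixed points, and the average
# should equal 1, the number of linearly independent linear invariants.
from itertools import permutations
import numpy as np

n = 3
group = []
for perm in permutations(range(n)):
    A = np.zeros((n, n))
    for i, j in enumerate(perm):
        A[j, i] = 1  # permutation matrix sending e_i to e_{perm(i)}
    group.append(A)

avg_trace = sum(np.trace(A) for A in group) / len(group)
print(avg_trace)  # 1.0
```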

Before moving on, let’s look at the case of quadratic invariants, which is very similar. As our representation induces one on (k^n)^\vee=Sym^1 V, it also induces one on Sym^2 (k^n)^\vee, that is, on the polynomials of degree two on k^n. If we call this representation Sym^2 V, it is hardly a surprise to obtain that the number of quadratic invariants is \frac{1}{|G|}\sum_{g\in G} tr(Sym^2(g)), that is, the trace of the Reynolds operator for this representation. This generalizes to all degrees.

We can get a little more mileage out of this by noting that if w_1,\ldots,w_n are the eigenvalues of g, then the eigenvalues of Sym^2(g) are w_iw_j for 1\leq i\leq j\leq n, so the trace of Sym^2(g) is just the sum of these.
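The eigenvalue rule can be seen in a tiny case (a sympy sketch; the swap matrix on k^2, with eigenvalues 1 and -1, is our choice of example): the matrix of Sym^2(g) on the quadratics is computed directly and its trace compared against \sum_{i\leq j} w_iw_j.

```python
# Verify tr(Sym^2 g) = Σ_{i≤j} w_i w_j for g the swap matrix on k^2.
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Matrix([[0, 1], [1, 0]])  # swaps the coordinates; eigenvalues ±1
w = list(g.eigenvals())          # the eigenvalues w_1, w_2

# the eigenvalue rule
rule = sum(w[i] * w[j] for i in range(2) for j in range(i, 2))

# direct computation: the matrix of Sym^2(g) on the basis x^2, xy, y^2
basis = [x**2, x*y, y**2]
gv = g * sp.Matrix([x, y])           # g acting on the variables
sub = dict(zip((x, y), gv))
M = sp.zeros(3, 3)
for col, b in enumerate(basis):
    p = sp.Poly(sp.expand(b.subs(sub, simultaneous=True)), x, y)
    for row, b2 in enumerate(basis):
        M[row, col] = p.coeff_monomial(b2)

print(rule, M.trace())  # both equal 1
```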

So now we can prove

Theorem(Molien): Suppose that a_d is the number of linearly independent homogeneous invariants of G with degree d and let \Phi(\lambda)=\sum_{d=0}^\infty a_d\lambda^d be the generating function for this sequence. Then \Phi(\lambda)=\frac{1}{|G|}\sum_{g\in G}\frac{1}{\det(I-\lambda g)}.

Proof: We note that a_d=\frac{1}{|G|}\sum_{g\in G} tr(Sym^d(g)) for all d, and so to prove the theorem, we must merely compare this sum to the one asserted. To do so, we can just compare terms. Fix g\in G and set w_1,\ldots,w_n to be the eigenvalues of g.

Then tr(Sym^d(g))=\sum_{1\leq i_1\leq\ldots\leq i_d\leq n} w_{i_1}\ldots w_{i_d}.

The corresponding term is going to be \frac{1}{\det(I-\lambda g)}=\frac{1}{(1-\lambda w_1)\ldots(1-\lambda w_n)}. Expanding this gives (1+\lambda w_1+\lambda^2w_1^2+\ldots)\ldots(1+\lambda w_n+\lambda^2 w_n^2+\ldots). The result follows by computing the coefficient of \lambda^d to be \sum_{1\leq i_1\leq\ldots\leq i_d\leq n} w_{i_1}\ldots w_{i_d}. QED.

To see Molien’s Theorem in action, take the three dimensional permutation representation of S_3. Then \det(I-\lambda e)=(1-\lambda)^3, \det(I-\lambda\sigma)=(1-\lambda^3) and \det(I-\lambda \tau)=(1-\lambda)(1-\lambda^2), where e is the identity, \sigma is (123) and \tau is (12). As \det(I-\lambda g) is constant on conjugacy classes, we get \frac{1}{|S_3|}\sum_{g\in S_3} \frac{1}{\det(I-\lambda g)}=\frac{1}{6}\left(\frac{1}{(1-\lambda)^3}+\frac{3}{(1-\lambda)(1-\lambda^2)}+\frac{2}{(1-\lambda^3)}\right), which simplifies to \frac{1}{(1-\lambda)(1-\lambda^2)(1-\lambda^3)}.

Expanding, we get (1+\lambda+\lambda^2+\ldots)(1+\lambda^2+\lambda^4+\ldots)(1+\lambda^3+\lambda^6+\ldots) which is just 1+\lambda+2\lambda^2+3\lambda^3+4\lambda^4+\ldots.
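The whole S_3 computation can also be checked mechanically (a sketch assuming sympy; the permutation matrices are built directly and the average of 1/\det(I-\lambda g) is expanded as a power series):

```python
# Molien series of the 3-dim permutation representation of S_3:
# average 1/det(I - λg) over the six permutation matrices and expand.
from itertools import permutations
import sympy as sp

lam = sp.symbols('lambda')
n = 3
total = 0
for perm in permutations(range(n)):
    A = sp.zeros(n, n)
    for i, j in enumerate(perm):
        A[j, i] = 1  # permutation matrix of perm
    total += 1 / (sp.eye(n) - lam * A).det()
series = sp.simplify(total / 6)  # the Molien series as a rational function

print(series)
print(sp.series(series, lam, 0, 5))  # coefficients 1, 1, 2, 3, 4, ...
```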

We find that the only linear invariant is x+y+z=I_1. We can take I_2=x^2+y^2+z^2, and then I_1^2 and I_2 form a basis for the quadratic invariants. Similarly, we can take I_3=x^3+y^3+z^3, and a basis for the cubic invariants is I_1^3,I_1I_2,I_3. After this, everything can be expressed in terms of the earlier invariants. For instance, a basis for the quartic invariants is I_1^4,I_1^2I_2,I_2^2,I_1I_3.
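Finally, a quick sympy check that a further invariant such as xyz really is a polynomial in I_1, I_2, I_3 (the identity used is one of Newton's identities; the verification is just expansion):

```python
# Newton's identities give xyz = (I_1^3 - 3 I_1 I_2 + 2 I_3)/6, so the
# invariant xyz lies in the ring generated by the power sums I_1, I_2, I_3.
import sympy as sp

x, y, z = sp.symbols('x y z')
I1 = x + y + z
I2 = x**2 + y**2 + z**2
I3 = x**3 + y**3 + z**3

assert sp.expand((I1**3 - 3*I1*I2 + 2*I3)/6 - x*y*z) == 0
print("xyz = (I1^3 - 3 I1 I2 + 2 I3)/6")
```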
