Here are my lecture notes for a talk I gave yesterday on invariants of finite groups in the graduate student algebra seminar here. Next week I’m talking about quotients of varieties by finite groups, and I’ll post those notes here as well. As a side note, does anyone know how to get good commutative diagrams out of the subset of LaTeX that is available for use on WordPress?
We start out by taking $G$ to be a finite group and fixing a representation of $G$ on $\mathbb{C}^n$. Because we are fixing the representation, we will identify $G$ with its image in $GL_n(\mathbb{C})$ and treat the elements as matrices.
So now, we are interested in studying the polynomial invariants of this action. We define a polynomial invariant to be a polynomial $f \in \mathbb{C}[x_1, \ldots, x_n]$ such that $f(A \cdot x) = f(x)$ for all $A \in G$. A classical example is the action of $S_n$ on $\mathbb{C}^n$ by permuting the basis vectors; the polynomial invariants are then exactly the symmetric polynomials. We denote the ring of invariant polynomials by $\mathbb{C}[x_1, \ldots, x_n]^G$.
We will, in fact, only work with homogeneous polynomials, because if $f$ is an invariant, then so are the homogeneous components of $f$, since the action is linear and so preserves degree.
Definition: We define the Reynolds (or averaging) operator to be the map $R_G : \mathbb{C}[x_1, \ldots, x_n] \to \mathbb{C}[x_1, \ldots, x_n]^G$ given by $R_G(f)(x) = \frac{1}{|G|} \sum_{A \in G} f(A \cdot x)$.
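For a permutation matrix group, the Reynolds operator just averages a polynomial over permutations of the variables, so it is easy to play with on a computer. Here is a minimal sketch in Python; the representation of a polynomial as a dictionary from exponent tuples to coefficients, and the names `act` and `reynolds`, are my own choices for illustration, not anything standard.

```python
from fractions import Fraction
from itertools import permutations

def act(perm, mono):
    # The permutation sends x_i to x_{perm[i]}, so the exponent of x_i
    # moves to position perm[i].
    new = [0] * len(mono)
    for i, e in enumerate(mono):
        new[perm[i]] = e
    return tuple(new)

def reynolds(poly, group):
    # Average a polynomial (dict: exponent tuple -> coefficient) over the group.
    out = {}
    for g in group:
        for mono, c in poly.items():
            m = act(g, mono)
            out[m] = out.get(m, Fraction(0)) + Fraction(c, len(group))
    return {m: c for m, c in out.items() if c != 0}

S3 = [tuple(p) for p in permutations(range(3))]
# Averaging x_0^2 over S_3 gives (x_0^2 + x_1^2 + x_2^2)/3.
print(reynolds({(2, 0, 0): 1}, S3))
```

Exact rational arithmetic via `Fraction` avoids any floating-point noise in the averages.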
At this point we must restrict ourselves to fields of characteristic not dividing the order of the group for this average to make sense, and so for simplicity we will restrict to fields of characteristic zero.
Now, one easy way to find invariants is to take monomials $x^\beta = x_1^{\beta_1} \cdots x_n^{\beta_n}$ and apply the Reynolds operator to them. These will give us either zero or invariants of total degree $|\beta| = \beta_1 + \cdots + \beta_n$. Thanks to Noether, in 1916, we have the following
Theorem (Noether): Given a finite matrix group $G \subset GL_n(\mathbb{C})$, we have $\mathbb{C}[x_1, \ldots, x_n]^G = \mathbb{C}\left[R_G(x^\beta) : |\beta| \leq |G|\right]$. In particular, the ring of invariants is generated by finitely many homogeneous invariants of degree at most $|G|$.
Proof: Let $f \in \mathbb{C}[x_1, \ldots, x_n]^G$. Then $f = \sum_\beta c_\beta x^\beta$, and then $f = R_G(f) = \sum_\beta c_\beta R_G(x^\beta)$, and so every invariant is a linear combination of $R_G(x^\beta)$'s. So it suffices to show that for each $\beta$, $R_G(x^\beta)$ is a polynomial in the $R_G(x^\alpha)$ for $|\alpha| \leq |G|$.
Here’s where Noether showed her brilliance. Rather than look at one $R_G(x^\beta)$ at a time, she chose to look at them all at once, and then applied her knowledge of symmetric polynomials to the problem.
The first step is to look at $(x_1 + \cdots + x_n)^k = \sum_{|\beta| = k} a_\beta x^\beta$, with the $a_\beta$ all positive integers. Now we need a bit of notation. If $A$ is a matrix with entries $a_{ij}$, we denote by $A_i$ the $i$th row of $A$. So $A \cdot x = (A_1 \cdot x, \ldots, A_n \cdot x)$. So now we have $(A \cdot x)^\beta = (A_1 \cdot x)^{\beta_1} \cdots (A_n \cdot x)^{\beta_n}$. Thus, written this way, $R_G(x^\beta) = \frac{1}{|G|} \sum_{A \in G} (A \cdot x)^\beta$.
Now we introduce a pile of new variables $u_1, \ldots, u_n$, and substitute $u_i (A_i \cdot x)$ for $x_i$ into $(x_1 + \cdots + x_n)^k$. Thus, we have $\left(u_1 (A_1 \cdot x) + \cdots + u_n (A_n \cdot x)\right)^k = \sum_{|\beta| = k} a_\beta (A \cdot x)^\beta u^\beta$.
Summing over $A \in G$, we get $S_k = \sum_{A \in G} \left(u_1 (A_1 \cdot x) + \cdots + u_n (A_n \cdot x)\right)^k = \sum_{|\beta| = k} a_\beta \left(\sum_{A \in G} (A \cdot x)^\beta\right) u^\beta$, which is $\sum_{|\beta| = k} |G|\, a_\beta R_G(x^\beta) u^\beta$. This includes all of the $R_G(x^\beta)$ with $|\beta| = k$, because the $u^\beta$ prevent any cancellation.
Next we note that $S_k$ is the $k$th power sum of the $|G|$ quantities $U_A = u_1 (A_1 \cdot x) + \cdots + u_n (A_n \cdot x)$ indexed by $A \in G$. Now, every symmetric function in the $|G|$ quantities $U_A$ is a polynomial in $S_1, \ldots, S_{|G|}$, because in characteristic zero the first $|G|$ power sums generate the ring of symmetric polynomials in $|G|$ variables.
Thus, $S_k = F(S_1, \ldots, S_{|G|})$ for some polynomial $F$, and so we obtain that $\sum_{|\beta| = k} |G|\, a_\beta R_G(x^\beta) u^\beta$ is equal to $F\left(\sum_{|\alpha| = 1} |G|\, a_\alpha R_G(x^\alpha) u^\alpha, \ldots, \sum_{|\alpha| = |G|} |G|\, a_\alpha R_G(x^\alpha) u^\alpha\right)$. Expanding and equating coefficients of $u^\beta$, we see that $R_G(x^\beta)$ is a polynomial in the $R_G(x^\alpha)$ for $|\alpha| \leq |G|$. QED.
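To see Noether's theorem in a tiny case, we can apply the Reynolds operator to every monomial of degree at most $|G|$. Here is a sketch for $S_2$ acting on $\mathbb{C}^2$ by swapping the coordinates (the helper names are my own); it lists $R_G(x^\beta)$ for $|\beta| \leq 2$, which up to scalars are the generators $x + y$, $x^2 + y^2$, and $xy$.

```python
from fractions import Fraction

def act(perm, mono):
    # Move the exponent of x_i to position perm[i].
    new = [0] * len(mono)
    for i, e in enumerate(mono):
        new[perm[i]] = e
    return tuple(new)

def reynolds(mono, group):
    # R_G applied to a single monomial, returned as dict: exponents -> coeff.
    out = {}
    for g in group:
        m = act(g, mono)
        out[m] = out.get(m, Fraction(0)) + Fraction(1, len(group))
    return out

def monomials(n, d):
    # All exponent tuples of length n with total degree d.
    if n == 1:
        yield (d,)
        return
    for i in range(d + 1):
        for rest in monomials(n - 1, d - i):
            yield (i,) + rest

S2 = [(0, 1), (1, 0)]  # the identity and the swap, acting on C^2
for d in (1, 2):
    for m in monomials(2, d):
        print(m, reynolds(m, S2))
```

Of course $R_G(x)$ and $R_G(y)$ coincide, which is why the theorem only promises a generating set, not a basis.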
While this is nice and constructive, it is rather unwieldy. For instance, to compute the generating invariants of even a modest permutation representation, we would need to compute the Reynolds operator for every single monomial of degree less than or equal to $|G|$, and no one wants to do that. There is, however, a way to see in advance how many linearly independent invariants of a given degree there are, and it is due to Molien in 1897.
First we look at the special case of linear invariants. That is, invariant polynomials of degree 1. We define $\rho_d$ to be the representation induced by $G$ on the degree $d$ homogeneous polynomials by change of variables.
Lemma: The number of linearly independent linear invariants is given by $\frac{1}{|G|} \sum_{A \in G} \operatorname{tr}(A)$.
Proof: Let $f$ be a linear function on $\mathbb{C}^n$. Then $R_G(f)$ is either zero or a linear invariant for $G$. Now we look at the action of $R_G$ on the vector space of linear functionals. As $R_G \circ R_G = R_G$, we can write the space as $\ker(R_G) \oplus \operatorname{im}(R_G)$, and $R_G$ restricts to the identity on $\operatorname{im}(R_G)$. Thus, $R_G$ is equivalent to a matrix with just ones and zeros along the diagonal, so its trace is $\dim \operatorname{im}(R_G)$. Thus, $\operatorname{tr}(R_G) = \frac{1}{|G|} \sum_{A \in G} \operatorname{tr}(A)$ is the number of linearly independent linear invariants. QED
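As a quick sanity check (not part of the proof): for a permutation representation, the trace of each matrix is the number of fixed points of the permutation, so the lemma's count is easy to compute by hand or by machine.

```python
from fractions import Fraction
from itertools import permutations

# For a permutation matrix, the trace is the number of fixed points.
S3 = [tuple(p) for p in permutations(range(3))]
fixed = [sum(1 for i, j in enumerate(g) if i == j) for g in S3]
avg = Fraction(sum(fixed), len(S3))
print(avg)  # 1
```

The average is $1$, matching the fact that the permutation representation of $S_3$ has a one-dimensional space of linear invariants, spanned by the sum of the variables.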
Before moving on, let’s look at the case of quadratic invariants. This is very similar. As our representation on $\mathbb{C}^n$ induced one on the linear forms, it induces one on $\mathrm{Sym}^2((\mathbb{C}^n)^*)$, that is, on the polynomials of degree two on $\mathbb{C}^n$. If we call this representation $\rho_2$, it is hardly a surprise to obtain that the number of quadratic invariants is $\frac{1}{|G|} \sum_{A \in G} \operatorname{tr}(\rho_2(A))$. That is, the trace of the Reynolds operator for this representation. This generalizes to all $d$.
We can get a little bit more mileage out of this by noting how to find the trace of $\rho_2(A)$: if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$, then the eigenvalues of $\rho_2(A)$ are $\lambda_i \lambda_j$ for $i \leq j$. So the trace is just $\sum_{i \leq j} \lambda_i \lambda_j$.
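This eigenvalue description can be checked mechanically. For any $A$ one has $\sum_{i \leq j} \lambda_i \lambda_j = \frac{1}{2}\left(\operatorname{tr}(A)^2 + \operatorname{tr}(A^2)\right)$, and for a permutation matrix the trace of $\rho_2(A)$ is simply the number of degree-two monomials $x_i x_j$ fixed by the induced action. A small Python check over all of $S_4$ (the function names are my own):

```python
from itertools import combinations_with_replacement, permutations

def trace(perm):
    # Trace of the permutation matrix = number of fixed points.
    return sum(1 for i, j in enumerate(perm) if i == j)

def trace_pow2(perm):
    # Trace of A^2, i.e. fixed points of the permutation composed with itself.
    comp = tuple(perm[perm[i]] for i in range(len(perm)))
    return trace(comp)

def trace_sym2(perm):
    # The induced action permutes the degree-2 monomials x_i x_j;
    # its trace counts the monomials it fixes.
    n = len(perm)
    return sum(1 for i, j in combinations_with_replacement(range(n), 2)
               if tuple(sorted((perm[i], perm[j]))) == (i, j))

# Check 2 * tr(rho_2(A)) == tr(A)^2 + tr(A^2) over all of S_4.
for g in permutations(range(4)):
    assert 2 * trace_sym2(g) == trace(g) ** 2 + trace_pow2(g)
print("checked all 24 elements of S_4")
```

The identity used here is just the degree-two case of the Newton-type relations between power sums and the eigenvalues $\lambda_i \lambda_j$.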
So now we can prove
Theorem (Molien): Suppose that $a_d$ is the number of linearly independent homogeneous invariants of $G$ with degree $d$, and let $\Phi_G(t) = \sum_{d \geq 0} a_d t^d$ be the generating function for this sequence. Then $\Phi_G(t) = \frac{1}{|G|} \sum_{A \in G} \frac{1}{\det(I - tA)}$.
Proof: We note that $a_d = \frac{1}{|G|} \sum_{A \in G} \operatorname{tr}(\rho_d(A))$ for all $d$, and so to prove the theorem, we must merely compare this sum to the one asserted. To do so, we can just compare terms. Fix $A \in G$ and set $\lambda_1, \ldots, \lambda_n$ to be the eigenvalues of $A$.
The corresponding term is going to be $\frac{1}{\det(I - tA)} = \prod_{i=1}^n \frac{1}{1 - \lambda_i t}$. Expanding each factor as a geometric series gives $\prod_{i=1}^n \left(\sum_{k \geq 0} \lambda_i^k t^k\right)$. The result follows by computing the coefficient of $t^d$ to be $\sum_{i_1 \leq \cdots \leq i_d} \lambda_{i_1} \cdots \lambda_{i_d} = \operatorname{tr}(\rho_d(A))$. QED.
To see Molien’s Theorem in action, take the three-dimensional permutation representation of $S_3$. Then $|G| = 6$, and $\det(I - tA)$ is $(1-t)^3$ where $A$ is the identity, is $(1-t)(1-t^2)$ where $A$ is one of the three transpositions, and is $1 - t^3$ where $A$ is one of the two 3-cycles. As this is constant on conjugacy classes, we get $\Phi_{S_3}(t) = \frac{1}{6}\left(\frac{1}{(1-t)^3} + \frac{3}{(1-t)(1-t^2)} + \frac{2}{1-t^3}\right)$, which simplifies to $\frac{1}{(1-t)(1-t^2)(1-t^3)}$.
Expanding, we get $\Phi_{S_3}(t) = 1 + t + 2t^2 + 3t^3 + 4t^4 + 5t^5 + 7t^6 + \cdots$, so $a_d$ is just the number of partitions of $d$ into parts of size at most $3$.
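Molien's formula is pleasant to check by machine: for a permutation matrix, $\det(I - tA)$ factors as $\prod_\ell (1 - t^\ell)$ over the cycle lengths $\ell$ of the permutation, so the whole series can be computed with exact rational arithmetic. A sketch (all function names are my own):

```python
from fractions import Fraction
from itertools import permutations

N = 8  # number of series coefficients to compute

def cycle_lengths(perm):
    seen, out = set(), []
    for i in range(len(perm)):
        if i not in seen:
            n, j = 0, i
            while j not in seen:
                seen.add(j)
                j = perm[j]
                n += 1
            out.append(n)
    return out

def det_poly(perm):
    # det(I - tA) for a permutation matrix A: product of (1 - t^len) over cycles.
    poly = [Fraction(1)]
    for L in cycle_lengths(perm):
        new = [Fraction(0)] * (len(poly) + L)
        for i, c in enumerate(poly):
            new[i] += c
            new[i + L] -= c
        poly = new
    return poly

def series_inverse(p):
    # First N coefficients of the power series 1/p(t), assuming p[0] == 1.
    inv = [Fraction(1)]
    for d in range(1, N):
        s = sum(inv[d - k] * p[k] for k in range(1, min(d, len(p) - 1) + 1))
        inv.append(-s)
    return inv

def molien(group):
    total = [Fraction(0)] * N
    for g in group:
        total = [a + b for a, b in zip(total, series_inverse(det_poly(g)))]
    return [c / len(group) for c in total]

S3 = [tuple(p) for p in permutations(range(3))]
print(molien(S3))  # coefficients 1, 1, 2, 3, 4, 5, 7, 8
```

The coefficients $1, 1, 2, 3, 4, 5, 7, 8$ count partitions into parts of size at most $3$, as the product form of the series predicts.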
We find that the only linear invariant is $x + y + z$. We can take $x^2 + y^2 + z^2$, and then $(x+y+z)^2$ and $x^2+y^2+z^2$ form a basis for the quadratic invariants. Similarly, we can take $x^3 + y^3 + z^3$, and a basis for the cubic invariants is $(x+y+z)^3$, $(x+y+z)(x^2+y^2+z^2)$, $x^3+y^3+z^3$. After this, everything can be expressed with the earlier invariants. For instance, the quartic invariants are spanned by $(x+y+z)^4$, $(x+y+z)^2(x^2+y^2+z^2)$, $(x^2+y^2+z^2)^2$, and $(x+y+z)(x^3+y^3+z^3)$.
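Since $S_3$ acts by permuting monomials, the orbit sums of monomials of degree $d$ form a basis for the degree-$d$ invariants, so the dimensions $a_d$ can also be found by just counting orbits. A quick check (helper names are my own):

```python
from itertools import permutations

def monomials(n, d):
    # All exponent tuples of length n with total degree d.
    if n == 1:
        yield (d,)
        return
    for i in range(d + 1):
        for rest in monomials(n - 1, d - i):
            yield (i,) + rest

def act(perm, mono):
    # Move the exponent of x_i to position perm[i].
    new = [0] * len(mono)
    for i, e in enumerate(mono):
        new[perm[i]] = e
    return tuple(new)

S3 = [tuple(p) for p in permutations(range(3))]
counts = []
for d in range(7):
    # One canonical representative (the lexicographic minimum) per orbit.
    orbits = {min(act(g, m) for g in S3) for m in monomials(3, d)}
    counts.append(len(orbits))
print(counts)  # [1, 1, 2, 3, 4, 5, 7]
```

These match the Molien coefficients, as they must; for a permutation representation, an orbit of monomials is the same thing as a partition of $d$ into at most three parts.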