We continue our quest to understand when quotients of schemes by actions of group schemes exist. Last time we defined group schemes, group actions, and geometric quotients, and gave some examples. In this post, I'll define what it means to be a reductive group scheme. A full treatment of the theory of reductive group schemes is impossible here, so we'll pick one way to define the notion, state some equivalences, and give some examples. I apologize for the long gap between posts; I'm on vacation and have been spending more time outside than at a computer!

Fix a field k (for simplicity, assume that k is algebraically closed), and let the letter G always stand for an affine group scheme over k.  The group G is called a torus if G \cong \mathbb{G}_{m,k}^n for some n.  (See the comment below about non-split tori, which arise when working over non-algebraically closed fields.)  First, we'll define the most general notion of reductivity.

Definition:  An algebraic subgroup N of an affine group scheme G is called normal if h_N(R) is a normal subgroup of h_G(R) for all k-algebras R.  Recall that for an affine group scheme G, h_G denotes the functor of points, and h_G(R), the set of R-points, is itself a group.

Example: Obviously the kernel of any algebraic group homomorphism is normal.
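For instance (just to make the functor-of-points condition concrete): SL_n is the kernel of the determinant homomorphism \det: GL_n \rightarrow \mathbb{G}_m, and on R-points \det(ghg^{-1}) = \det(h), so h_{SL_n}(R) is a normal subgroup of h_{GL_n}(R) for every k-algebra R; hence SL_n is normal in GL_n.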

Fact: The quotient of G by a normal subgroup scheme N always exists as a group scheme.

Definition: The derived group \mathcal{D}G is the intersection of all normal subgroups N of G such that G / N is commutative.  Note that this is the smallest normal subgroup of G such that the quotient is commutative.  Write \mathcal{D}^2G = \mathcal{D}(\mathcal{D}G), and define \mathcal{D}^iG similarly.

Definition: The group G is called solvable if the derived series G \supset \mathcal{D}G \supset \mathcal{D}^2G \supset \ldots terminates with 1.   Note that this is equivalent to the existence of a chain of algebraic subgroups G = G_0 \supset G_1 \supset \ldots \supset G_r = 1, each normal in the previous one, with commutative successive quotients.

Example: Consider the group T_2 of upper triangular 2×2 matrices \begin{pmatrix} * & * \\ 0 & * \end{pmatrix}.  Then \mathcal{D}T_2 is equal to the group of matrices of the form \begin{pmatrix} 1 & * \\ 0 & 1 \end{pmatrix}, and the second derived group is trivial.  Exercise: the group T_n of upper triangular n \times n matrices is also solvable.
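Here is a quick check of the first claim (a sketch using commutators): for A = \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} and B = \begin{pmatrix} a' & b' \\ 0 & d' \end{pmatrix} in T_2, the diagonal of a product of upper triangular matrices is the product of the diagonals, so the commutator ABA^{-1}B^{-1} has both diagonal entries equal to 1; equivalently, the quotient of T_2 by the unipotent upper triangular matrices is \mathbb{G}_m \times \mathbb{G}_m, which is commutative, so \mathcal{D}T_2 is contained in the claimed subgroup.  Conversely, taking A = \begin{pmatrix} t & 0 \\ 0 & 1 \end{pmatrix} and B = \begin{pmatrix} 1 & u \\ 0 & 1 \end{pmatrix} gives ABA^{-1}B^{-1} = \begin{pmatrix} 1 & (t-1)u \\ 0 & 1 \end{pmatrix}, so every matrix of that form must die in any commutative quotient.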

Remark 1: In fact, over an algebraically closed field, the Lie-Kolchin Theorem states that if G is a connected, smooth, and solvable algebraic subgroup of GL(V), then there exists a basis of V such that G \subset T_n.  Solvability is also necessary, since closed subgroups of solvable groups are solvable (as are quotients of solvable groups).

Remark 2: One fact that we probably should have mentioned in the last post is that every affine algebraic group can be realized as a closed subgroup of GL_n for some n (though highly non-canonically).  So the above theorem isn't as restrictive as it might sound.

Definition: First a fact: if G is a subgroup of GL(V) consisting of unipotent endomorphisms, then there is a basis of V so that G is contained in U_n \subset T_n, the group of upper triangular matrices with ones on the diagonal.  If G is isomorphic to a subgroup of U_n for some n, then G is called unipotent.

Remark 3: This is equivalent to the condition that every non-zero representation V of G has a non-zero fixed vector.
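For instance, \mathbb{G}_a is unipotent: the map s \mapsto \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix} identifies it with U_2 \subset GL_2, and in this 2-dimensional representation the vector e_1 is a non-zero fixed vector (the first column of any element of U_n is e_1), illustrating Remark 3.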

Remark 4: If N and H are algebraic subgroups of G with N normal, and both H and N are solvable, then so is HN.  This implies that G contains a largest connected normal solvable (smooth) subgroup, which is called the radical of G.

Definition:  Referring to Remark 4, if the algebraic group G has trivial radical, it is called semisimple.  It is called reductive if its radical is a torus.  Being reductive is equivalent to the condition that the largest connected normal unipotent subgroup (the unipotent radical) is trivial.

Examples:  A torus is reductive.  The groups SL_n, SO_n, Sp_{2n} are all semisimple (hence reductive).  The group GL_n is reductive but not semisimple.  The group \mathbb{G}_a is neither.
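To spell out the last two claims (a quick check): the radical of GL_n is the subgroup of scalar matrices \{ \lambda \cdot I_n \mid \lambda \in \mathbb{G}_m \} \cong \mathbb{G}_m, a non-trivial torus, so GL_n is reductive but not semisimple.  And \mathbb{G}_a is itself connected, commutative (hence solvable) and unipotent, so its radical is all of \mathbb{G}_a, which is not a torus; thus \mathbb{G}_a is neither reductive nor semisimple.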

We are now ready to give other versions of reductivity:

Definition:  An algebraic group scheme G is called linearly reductive if for any finite-dimensional representation V of G and any non-zero v \in V^G, there is a G-invariant linear function f: V \rightarrow k such that f(v) \neq 0.  If one can only guarantee a G-invariant homogeneous polynomial f (of positive degree) such that f(v) \neq 0, then G is called geometrically reductive.

Lemma:  Linear reductivity of G is equivalent to each of the following conditions:
i) Given a finite-dimensional representation V of G and a surjective G-invariant linear form f: V \rightarrow k, there is an invariant vector w \in V^G such that f(w) \neq 0.
ii) For each surjection of G-representations V \rightarrow W, the induced map on G-invariants V^G \rightarrow W^G is surjective.
ii') For each surjection of finite-dimensional representations as above, the induced map on invariants is surjective.
iii) For any finite-dimensional representation V, if v \in V is G-invariant modulo a proper subrepresentation U \subset V, then the coset v + U contains a non-trivial G-invariant vector.

Proof:  Linear reductivity is equivalent to condition i) by replacing V with the dual vector space V^* and noting that the space of G-invariant forms on V^* is Hom_G(V^*, k) = V^G, where G acts trivially on k.
ii) implies ii') is trivial.  ii') implies iii) is easy by looking at the quotient map V \rightarrow V/U.  For iii) implies ii): let L: V \rightarrow W be a surjection of representations and let w \in W^G; choose v \in V with L(v) = w.  The vector v is contained in a finite-dimensional subrepresentation V_0 \subset V (Exercise).  It is invariant modulo U_0 = V_0 \cap \ker(L), so by iii) there is a G-invariant vector v_0 with v - v_0 \in U_0.  Then L(v_0) = w, so the map on invariants is surjective.
ii) implies i) is clear by taking W = k with trivial action.
i) implies ii'): If w \in W^G is non-zero, then W decomposes as a representation of G as W = k \cdot w \oplus W' (Exercise: by the first step, i) is equivalent to linear reductivity; applying it to W gives an invariant form f with f(w) \neq 0, and one can take W' = \ker f).  Then by i), applied to the surjective invariant form V \rightarrow W \rightarrow k \cdot w \cong k, there is an invariant vector of V mapping to a non-zero multiple of w; rescaling, we see that the map V^G \rightarrow W^G is surjective.

Time for some Examples:
1)  A finite group scheme G with order prime to the characteristic of the field is linearly reductive:
Proof:
Suppose that V is a finite-dimensional representation and v \in V is invariant modulo a proper subrepresentation U.  Set v' = (1/|G|) \Sigma_{g \in G} (g \cdot v).  The vector v' is G-invariant by construction, and v' - v is contained in U (exercise), so condition iii) above is verified.
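Spelling out the exercise: since v is invariant modulo U, each g \cdot v - v lies in U, and U is a subrepresentation, so v' - v = (1/|G|) \sum_{g \in G} (g \cdot v - v) \in U.  Also h \cdot v' = (1/|G|) \sum_{g \in G} (hg) \cdot v = v' for any h, since hg runs over G as g does.  Note that the averaging only makes sense because |G| is invertible in k, which is exactly where the assumption on the characteristic is used.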

2)  Algebraic tori (\mathbb{G}_m)^N are linearly reductive.
Proof:
It is enough to show that \mathbb{G}_m is linearly reductive.  Suppose as above that V is a finite-dimensional representation and there is a vector v that is invariant modulo a proper subrepresentation U.  As discussed in the last post, to give a representation of \mathbb{G}_m is to give a weight decomposition V = \bigoplus V_m; the subrepresentation U then decomposes compatibly as U = \bigoplus U_m with U_m = U \cap V_m.  Now, to say that v = \Sigma v_m is invariant modulo U means that v_m \in U_m for all m \neq 0.  Thus v_0 is torus-invariant and is an element of the coset v + U, as required.
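A tiny concrete example (with made-up weights, just to illustrate): let \mathbb{G}_m act on V = k^2 by t \cdot (a,b) = (a, t^2 b), so that V = V_0 \oplus V_2, and take U = V_2.  The vector v = (1,1) is invariant modulo U, since t \cdot v - v = (0, t^2 - 1) \in U, and its weight-zero part v_0 = (1,0) is an honest invariant vector lying in the coset v + U.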

3)  The group G = \mathbb{G}_a is not linearly reductive.
Proof:
Consider the 2-dimensional representation given by the homomorphism \mathbb{G}_a \rightarrow GL_2, s \mapsto \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}, and the induced action on the coordinate ring k[x,y].  The map k[x,y] \rightarrow k[x,y]/(y) = k[x] is a surjection of G-representations, but the map on invariants is not surjective.
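Here is the check (a sketch; the sign in the action on coordinates depends on your convention for the dual action): on coordinate functions s acts by x \mapsto x - sy and y \mapsto y, so k[x,y]^{\mathbb{G}_a} = k[y].  In the quotient the class of y is zero, so the action on k[x] is trivial and k[x]^{\mathbb{G}_a} = k[x]; but the image of k[y] in k[x] consists only of the constants.  For instance, the invariant x \in k[x] has no invariant preimage.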

4)  In characteristic 0, SL_n is linearly reductive.  In characteristic p, it is geometrically reductive. 
Outline of proof when n = 2:
We have that k[SL_2] = k[x,y,z,t]/(xt - yz - 1).  We'll leave it as an exercise to show that for the diagonal torus T = \mathbb{G}_m \subset SL_2, the invariant ring is k[SL_2]^T = k[xy, xt, zy, zt] / (xt - yz - 1).
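A hint for the exercise (assuming, for concreteness, that T acts by right translation, which matches the generators listed): \mathrm{diag}(\lambda, \lambda^{-1}) scales x and z by \lambda and scales y and t by \lambda^{-1}, so the T-invariant elements of k[SL_2] are spanned by the monomials of total weight zero, and these are exactly the polynomials in the weight-zero quadratics xy, xt, zy, zt.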
Now consider the set of rational functions R_n := \{ f(u,v)/(u - v)^n | deg_u f \leq n, deg_v f \leq n \}.  Set R = \cup R_n.  Then we'll leave it as another exercise to show that R \cong k[SL_2]^T by sending u \mapsto x/z, v \mapsto y/t, and (u - v)^{-1} \mapsto zt.
Suppose that V is a finite-dimensional representation of G = SL_2 and that v \in V is a non-zero invariant vector.  By linear reductivity of T, there is a T-invariant linear map h: V \rightarrow k such that h(v) = 1.  Define a map \alpha: V \rightarrow k[G] as the composition V \rightarrow V \otimes k[G] \rightarrow k \otimes k[G] = k[G], where the first arrow is the coaction and the second arrow is h \otimes \mathrm{id}.  Check that \alpha(v) = 1, that \alpha(V) \subset k[G]^T, and that \alpha is a homomorphism of G-representations.  Now, the image of \alpha is contained in R_n for some n.  A general element of R_n looks like f(u,v)/(u-v)^n where f(u,v) = \Sigma a_{ij}u^iv^j.  Consider the map \det: R_n \rightarrow k which takes the determinant of the square (n+1) \times (n+1) matrix given by the a_{ij}; it is homogeneous and G-invariant.  Check that \det(1) = \prod_{i=0}^{n} {n \choose i}.
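To make the last check explicit (a short computation): the element 1 \in R_n is (u-v)^n/(u-v)^n, i.e. f(u,v) = (u-v)^n = \sum_{i=0}^{n} (-1)^{n-i}{n \choose i} u^i v^{n-i}, so the matrix (a_{ij}) is anti-diagonal with entries a_{i, n-i} = (-1)^{n-i}{n \choose i}.  Its determinant is the sign of the order-reversing permutation, (-1)^{n(n+1)/2}, times \prod_{i=0}^{n} (-1)^{n-i}{n \choose i} = (-1)^{n(n+1)/2}\prod_{i=0}^{n}{n \choose i}; the signs cancel and \det(1) = \prod_{i=0}^{n}{n \choose i}.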

Let \ell: R_n \rightarrow k be the coefficient of \epsilon in the expansion of \det(1 + \epsilon r) for r \in R_n (the formal differential of \det at 1).  Then \ell is a G-invariant linear map, and in characteristic 0, \ell(1) \neq 0 (exercise), so the composition V \rightarrow R_n \rightarrow k (the first map is \alpha, the second is \ell) is the required invariant linear form which doesn't vanish at v.
In characteristic p: in the above argument, choose n = p^a - 1 for sufficiently large a.  Then check that \det(1) \neq 0 in k (in fact it is \pm 1), so that \det \circ \alpha is the homogeneous invariant polynomial required.
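A sketch of why this works: every base-p digit of n = p^a - 1 equals p-1, so by Lucas' theorem {n \choose i} \equiv \prod_j {p-1 \choose i_j} \pmod{p}, where the i_j are the base-p digits of i, and {p-1 \choose j} \equiv (-1)^j \pmod{p}.  Hence each {n \choose i} is \pm 1 in k, so \det(1) = \prod_i {n \choose i} is \pm 1, in particular non-zero, and (\det \circ \alpha)(v) = \det(1) \neq 0 as needed.  (The exponent a only needs to be large enough that the image of \alpha lands in R_{p^a - 1}; this is possible since R_m \subset R_{m'} for m \leq m'.)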

Unfortunately, at this point we can only list some non-trivial facts without going too deep into more theory.
Facts:
1)  Clearly linear reductivity implies geometric reductivity.  In characteristic 0, the converse is true.  In characteristic p, as mentioned by Greg in the comments last time, linear reductivity is very restrictive; in fact, the only connected linearly reductive groups are the tori.
2)  Reductivity implies geometric reductivity.  This is a theorem due to Haboush (1975).  I see that there's an outline of the proof on Wikipedia!

It is actually the property of geometric reductivity of G that can be used to prove that the invariant ring of an action on an affine scheme is finitely generated, and that closed orbits are separated by invariants.  We'll go more into what sorts of quotients exist under different hypotheses next time!
