Back in the second post, we defined projective space to be the collection of lines through the origin in affine space. A natural generalization is to look at $r$-planes through the origin in affine space. At first glance, we might think that these spaces are genuinely new objects generalizing projective space, and that it might be valuable to do algebraic geometry on them.

While it is useful to take these varieties, called Grassmannians, as a place to start, they are actually projective varieties themselves. First off, we'll fix notation for these objects, and we'll need quite a bit of it, because the same spaces are used in slightly different contexts.

Let $G(r,n)$ be the *Grassmannian* of $r$-dimensional subspaces of an $n$-dimensional vector space. If we want to think of a vector space $V$ with no specific isomorphism with $k^n$, we will write $G(r,V)$. If we want to think about the $r$-dimensional linear subspaces of projective space $\mathbb{P}^n$, we write $\mathbb{G}(r,n)$; since an $r$-plane in $\mathbb{P}^n$ corresponds to an $(r+1)$-dimensional subspace of $k^{n+1}$, we have $\mathbb{G}(r,n)=G(r+1,n+1)$. And finally, the same space parametrizes the surjective linear maps $V\to Q$ onto an $(n-r)$-dimensional vector space $Q$, up to automorphism of $Q$. This last description is the same by looking at the $r$-dimensional subspaces that are the kernels of the maps.

A good thing to know is that there are automatic (noncanonical) isomorphisms $G(r,n)\cong G(n-r,n)$, obtained by choosing some inner product on $V$ and sending each subspace to its orthogonal complement. We will sometimes refer to these as *dual* Grassmannians. A special case is $\mathbb{G}(n-1,n)$, the space of hyperplanes in $\mathbb{P}^n$, which is dual to $\mathbb{P}^n$ itself and is denoted by $(\mathbb{P}^n)^*$.

We should also quickly note that $\dim G(r,n)=r(n-r)$. This is because the Grassmannian can be seen as the $r\times n$ matrices of rank $r$, up to row operations: the rows give a basis of the subspace. An open subset consists of the subspaces whose matrix can be put in the form where the first $r\times r$ square is the identity matrix, and so there are $r(n-r)$ free parameters. We also need the fact that, for varieties, a nonempty open set has the same dimension as the whole variety.
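To make the dimension count concrete, here is a small sketch in pure Python (the function name is just for illustration): row-reducing over $\mathbb{Q}$ puts any basis of a subspace whose first $r\times r$ block is invertible into the unique form $[I\mid A]$, so such subspaces correspond exactly to the $r(n-r)$ free entries of $A$.

```python
from fractions import Fraction

def reduced_form(rows):
    """Row-reduce an r x n matrix (rows spanning a subspace) to [I | A],
    assuming the first r x r block is invertible."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = len(m)
    for i in range(r):
        piv = next(j for j in range(i, r) if m[j][i] != 0)
        m[i], m[piv] = m[piv], m[i]
        m[i] = [x / m[i][i] for x in m[i]]
        for j in range(r):
            if j != i:
                m[j] = [a - m[j][i] * b for a, b in zip(m[j], m[i])]
    return m

# Two different bases of the same 2-dimensional subspace of k^4:
basis1 = [[1, 0, 2, 3], [0, 1, 4, 5]]
basis2 = [[1, 1, 6, 8], [2, 1, 8, 11]]  # row combinations of basis1
print(reduced_form(basis1) == reduced_form(basis2))  # True: same point of G(2,4)

# the free parameters are the last n - r columns: r(n-r) = 2*2 = 4 of them
A = [[int(x) for x in row[2:]] for row in reduced_form(basis1)]
print(A)  # [[2, 3], [4, 5]]
```

Since the reduced form is unique, it gives coordinates on this open chart, matching the parameter count above.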

Now we’ll start on a bit of algebra needed to prove that Grassmannians are, in fact, projective varieties. Let $V$ and $W$ be vector spaces over a field $k$. We define the *tensor product* $V\otimes W$ to be the vector space generated by symbols of the form $v\otimes w$ for $v\in V$, $w\in W$, with the relations $(v+v')\otimes w=v\otimes w+v'\otimes w$, $v\otimes(w+w')=v\otimes w+v\otimes w'$, and $(av)\otimes w=v\otimes(aw)=a(v\otimes w)$ for $a\in k$. In fact, if we have two modules $M,N$ over a commutative ring $R$, this same process defines $M\otimes_R N$, though it is simpler to see the result for vector spaces. If $\{v_1,\dots,v_m\}$ is a basis for $V$ and $\{w_1,\dots,w_n\}$ is a basis for $W$, then $\{v_i\otimes w_j\}$ is a basis for $V\otimes W$. So $\dim(V\otimes W)=\dim V\cdot\dim W$: this process just multiplies dimension. Tensor products show up all over the place, and are useful for making several other objects, which we’ll now do. We’ll define things in terms of vector spaces for now, and when we need them for something else, we’ll make the necessary corrections.
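In coordinates, the tensor product of two vectors is just the list of all pairwise products of their entries (the Kronecker product). A quick illustrative sketch:

```python
def tensor(v, w):
    """Coordinates of v (x) w in the basis {v_i (x) w_j}: all pairwise products."""
    return [a * b for a in v for b in w]

v = [1, 2, 3]   # a vector in a 3-dimensional V
w = [4, 5]      # a vector in a 2-dimensional W
print(tensor(v, w))       # [4, 5, 8, 10, 12, 15] -- lives in a 6-dimensional space
print(len(tensor(v, w)))  # 6 == 3 * 2: tensor products multiply dimension

# bilinearity: (v + v') (x) w == v (x) w + v' (x) w
v2 = [1, 1, 1]
lhs = tensor([a + b for a, b in zip(v, v2)], w)
rhs = [a + b for a, b in zip(tensor(v, w), tensor(v2, w))]
print(lhs == rhs)  # True
```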

We define the *tensor algebra* of $V$ to be $T(V)=k\oplus V\oplus(V\otimes V)\oplus(V\otimes V\otimes V)\oplus\cdots$, with each term involving another tensor product. It is an algebra (that is, a vector space which is also a ring) because we can define the product of $v_1\otimes\cdots\otimes v_m$ and $w_1\otimes\cdots\otimes w_n$ to be $v_1\otimes\cdots\otimes v_m\otimes w_1\otimes\cdots\otimes w_n$. This gives us a graded ring, but one which is noncommutative. We denote it by $T(V)$, and the part of degree $n$ is $T^n(V)=V^{\otimes n}$.
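On pure tensors of basis vectors, this product is just concatenation, so degrees add. As a toy illustration (purely a modeling choice, not a standard library), pure tensors can be represented as index tuples:

```python
# A pure tensor v_{i1} (x) ... (x) v_{im} of basis vectors is modeled as a tuple
# of indices; the product in T(V) concatenates tuples, so degrees add.
def t_mul(a, b):
    return a + b

x, y = (1,), (2,)                   # degree-1 tensors v_1 and v_2
print(t_mul(x, y))                  # (1, 2): the degree-2 tensor v_1 (x) v_2
print(len(t_mul(x, y)))             # 2: degrees add, 1 + 1
print(t_mul(x, y) == t_mul(y, x))   # False: T(V) is noncommutative
```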

Now we define the *symmetric algebra* to be $T(V)$ modulo the ideal generated by $v\otimes w-w\otimes v$ for all $v,w\in V$. This gives us a commutative ring, which we denote by $\mathrm{Sym}(V)$ or $S(V)$, and the part of degree $n$ is $\mathrm{Sym}^n(V)$.

Last for now, we define the *exterior algebra* to be $\bigwedge V$, which is $T(V)$ modulo the ideal generated by $v\otimes v$ for all $v\in V$. This gives us that $v\wedge v=0$, and expanding $(v+w)\wedge(v+w)=0$ gives $v\wedge w=-w\wedge v$, so we have an anti-commutative ring. Here, we denote multiplication by $\wedge$. The part of degree $n$ is denoted by $\bigwedge^n V$.
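In degree two this is very concrete: picking a basis, the coefficient of $e_i\wedge e_j$ (for $i<j$) in $v\wedge w$ is $v_iw_j-v_jw_i$. A quick sketch checking anti-commutativity and $v\wedge v=0$:

```python
def wedge2(v, w):
    """Coordinates of v ^ w in the basis {e_i ^ e_j : i < j} of the second
    exterior power: the coefficient of e_i ^ e_j is v_i w_j - v_j w_i."""
    n = len(v)
    return [v[i] * w[j] - v[j] * w[i] for i in range(n) for j in range(i + 1, n)]

v, w = [1, 2, 3], [4, 5, 6]
print(wedge2(v, w))                                # [-3, -6, -3]
print(wedge2(w, v) == [-c for c in wedge2(v, w)])  # True: v ^ w = -(w ^ v)
print(wedge2(v, v))                                # [0, 0, 0]: v ^ v = 0
```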

Though the exterior algebra is legitimately new, the symmetric algebra is a ring that we’ve seen before. If we pick a basis $x_1,\dots,x_n$ for $V$, then $\mathrm{Sym}(V)$ is just the polynomial ring $k[x_1,\dots,x_n]$ in $n$ variables. So this is a coordinate-free way of expressing the polynomial functions on a vector space. The exterior algebra is a bit trickier, partly because we’re not as used to anti-commutativity. In the case where the field has characteristic two, anti-commutativity and commutativity coincide, so the two algebras become very close (though the exterior algebra still imposes $v\wedge v=0$).
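As a sanity check on the polynomial-ring picture, the degree-$d$ part of the symmetric algebra on an $n$-dimensional space has the degree-$d$ monomials as a basis, so $\dim\mathrm{Sym}^d(V)=\binom{n+d-1}{d}$. A quick sketch:

```python
import math
from itertools import combinations_with_replacement

# Monomials of degree d in x_1, ..., x_n form a basis of Sym^d(V), dim V = n
n, d = 3, 2
monomials = list(combinations_with_replacement(range(1, n + 1), d))
print(monomials)  # [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
print(len(monomials) == math.comb(n + d - 1, d))  # True: 6 monomials
```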

The exterior algebra is the one that is more important for our current purposes. As can be checked by choosing a basis and doing a nasty computation, an exterior product $v_1\wedge\cdots\wedge v_m$ of vectors is zero if and only if the vectors are linearly dependent. With a bit more work it can be shown that if $\dim V=n$ and you choose $n$ vectors $v_1,\dots,v_n$, then $v_1\wedge\cdots\wedge v_n$ is equal to the determinant of the matrix using these vectors as columns, times $e_1\wedge\cdots\wedge e_n$ for a fixed basis $e_1,\dots,e_n$. Also note that this implies that the graded parts $\bigwedge^m V$ for $m>n$ are all zero, because more than $n$ vectors can’t be linearly independent.
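Concretely, in the standard basis the coordinates of $v_1\wedge\cdots\wedge v_m$ are the $m\times m$ minors of the matrix with the $v_i$ as rows; the top wedge recovers the determinant, and dependent vectors wedge to zero. A small sketch (the helper names are just for illustration):

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def wedge(vectors, n):
    """Coordinates of v_1 ^ ... ^ v_m in the basis of wedges of basis vectors:
    the m x m minors of the m x n matrix whose rows are the vectors."""
    m = len(vectors)
    return [det([[v[i] for i in cols] for v in vectors])
            for cols in combinations(range(n), m)]

# three independent vectors in k^3: the top wedge is the determinant
print(wedge([[1, 2, 0], [0, 1, 1], [1, 0, 1]], 3))  # [3]
# dependent vectors wedge to zero
print(wedge([[1, 2, 3], [2, 4, 6]], 3))             # [0, 0, 0]
```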

So now we’ll use the exterior algebra to describe an embedding of the Grassmannian into projective space. Fix $V=k^n$, so that we’re looking at $G(r,n)$. Then each point $W\in G(r,n)$ can be thought of as an $r$-dimensional subspace of $V$. If we take $v_1,\dots,v_r\in W$ such that they are independent (that is, a basis for $W$), then the wedge product $v_1\wedge\cdots\wedge v_r$ is determined up to a non-zero scalar by the subspace. So this defines a function $G(r,n)\to\mathbb{P}(\bigwedge^r V)$. This map is injective, which can be seen by a nasty computation after choosing a basis, and so we regard $G(r,n)$ as a subset of $\mathbb{P}(\bigwedge^r V)$. Though they are not pleasant to write out, this subset is the zero set of a collection of quadratic polynomials, the Plücker relations, and so $G(r,n)$ is a projective variety.
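For the smallest interesting case, $G(2,4)\subset\mathbb{P}^5$, the quadratic relation can be written down explicitly: the coordinates $p_{ij}$ of $v\wedge w$ always satisfy $p_{01}p_{23}-p_{02}p_{13}+p_{03}p_{12}=0$. A quick numerical check:

```python
def plucker(v, w):
    """Pluecker coordinates p_ij = v_i w_j - v_j w_i of the plane
    spanned by v and w in k^4."""
    return {(i, j): v[i] * w[j] - v[j] * w[i]
            for i in range(4) for j in range(i + 1, 4)}

p = plucker([1, 2, 3, 4], [5, 6, 7, 8])
# the single quadratic Pluecker relation cutting out G(2,4) inside P^5:
print(p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)])  # 0
```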

We’ve spoken quite a bit now about Grassmannians, but the title also mentions flag varieties. Well, these are fairly simple to define in terms of Grassmannians. Let $0<r_1<r_2<n$. Then we can define $F(r_1,r_2;n)$ to be the subset of $G(r_1,n)\times G(r_2,n)$ of pairs $(V_1,V_2)$ where $V_1$ is contained in $V_2$. This is a two-step *flag variety*. The fact that incidence relationships like this are algebraic is extremely useful and comes up regularly. We will finish up for the day with the definition of the general flag variety. Let $0<r_1<r_2<\cdots<r_m<n$. The flag variety $F(r_1,\dots,r_m;n)$ is given by the points $(V_1,\dots,V_m)$ in $G(r_1,n)\times\cdots\times G(r_m,n)$ such that $V_1\subset V_2\subset\cdots\subset V_m$, with $\dim V_i=r_i$. We’ll come back to Grassmannians and flag varieties in the future, because they’re used not only because they are of intrinsic interest themselves, but to construct other useful things.
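The incidence condition really is given by polynomial equations: $V_1\subset V_2$ exactly when $v\wedge w_1\wedge\cdots\wedge w_r=0$ for every basis vector $v$ of $V_1$ and a basis $w_1,\dots,w_r$ of $V_2$, and the vanishing of a wedge is the vanishing of finitely many minors. A sketch (helper names are illustrative):

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def wedge_is_zero(vectors, n):
    """v_1 ^ ... ^ v_m = 0 iff every m x m minor of the matrix of rows vanishes,
    i.e. iff the vectors are linearly dependent."""
    m = len(vectors)
    return all(det([[v[i] for i in cols] for v in vectors]) == 0
               for cols in combinations(range(n), m))

def contained(basis1, basis2, n):
    """V_1 is contained in V_2 iff each basis vector of V_1 is dependent on V_2."""
    return all(wedge_is_zero([v] + basis2, n) for v in basis1)

V1 = [[1, 1, 0, 0]]                    # a line in k^4
V2 = [[1, 0, 0, 0], [0, 1, 0, 0]]      # a plane in k^4
print(contained(V1, V2, 4))            # True: (1,1,0,0) lies in the plane
print(contained([[0, 0, 1, 0]], V2, 4))  # False
```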


The injectivity of $G(r,n)\to{\mathbb P}(\bigwedge^rV)$ is not as nasty as you suggest: just note that $w\in W$ iff $w$ linearly depends on $v_1,\dots,v_r$ iff $w\wedge v_1\wedge\dots\wedge v_r=0$, so that $W$ is actually determined by $v_1\wedge\dots\wedge v_r$.