After all that technical work and the careful but rather abstract machinery developed for the construction of a bunch of moduli spaces, I’ve decided to take a break from that line of reasoning and turn to one that’s rather dear to me. It’s probably just that these methods were my first real introduction to algebraic geometry, and I use them to construct examples routinely. We’re not just dropping back to varieties here; we’re going all the way back to affine varieties over an algebraically closed field for a bit, and eventually we’ll step all the way back to characteristic zero for some stuff related to representation theory. Well, truthfully, it’s time for algebra in the coordinate ring.

We start as Cox, Little and O’Shea did in their book *Ideals, Varieties, and Algorithms* by asking a bunch of questions. The same questions, actually. Well, mostly. We assume that we’re given an ideal $I$ in $k[x_1,\ldots,x_n]$. First up, can we determine if a given polynomial $f$ is in $I$? Well, if $n=1$, then the answer is easy: the ideal is principal, so use long division. If the remainder is zero, then yes; if not, then no. Simple. But what about if $n>1$? Is there an algorithmic way to check? The answer turns out to be yes, but it requires some care in the setup.

First we need a nice ordering of all monomials in $k[x_1,\ldots,x_n]$. By this, what we mean is a relation $>$ on the monomials such that it is a total order (ie, given two distinct monomials, one is always greater than the other), and such that if $x^\alpha > x^\beta$ and $x^\gamma$ is any monomial, we have $x^\alpha x^\gamma > x^\beta x^\gamma$. That is, it is preserved by multiplication by monomials. Finally, we want it to be a well-ordering. That is, every nonempty set of monomials has a smallest element. We’ll sometimes, for simplicity, refer to the exponents (which are multi-indices, so $x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ with $\alpha\in\mathbb{Z}_{\geq 0}^n$) as the elements being ordered. Being finite lists of natural numbers, we can add and subtract them nicely.

There are many natural orderings to use. The first one to come to mind is the lexicographic ordering. It says that $\alpha > \beta$ if and only if $\alpha - \beta$ has its leftmost nonzero entry positive. That is, at the first entry in which they disagree, $\alpha$ has the bigger entry. Another good one is called graded lexicographic order, which first orders by total degree, and then lexicographically within each degree.

However, for reasons that require a grounding in computational complexity, the best one (ie, it has the best average runtime for the algorithm we’ll talk about shortly) is graded reverse lexicographic. First it orders by total degree. Then, it has $\alpha > \beta$ if and only if the **right**most nonzero entry of $\alpha - \beta$ is negative. I haven’t read the paper that proves that it’s the best, and probably won’t, but from having worked with it, I’m willing to believe it. (Partly because I’m not trying to prove anything based on it being the best.)
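Since each ordering compares exponent multi-indices, they’re easy to sketch in code. Here’s a minimal pure-Python sketch (the function names are my own) that realizes each ordering as a sort key, so Python’s built-in tuple comparison does the work:

```python
# Monomials are represented by their exponent multi-indices (tuples of
# natural numbers); each ordering becomes a key whose tuple comparison
# agrees with the ordering.

def lex_key(alpha):
    """Lexicographic: compare exponents left to right."""
    return tuple(alpha)

def grlex_key(alpha):
    """Graded lex: compare total degree first, then lexicographically."""
    return (sum(alpha),) + tuple(alpha)

def grevlex_key(alpha):
    """Graded reverse lex: total degree first; then alpha > beta iff the
    rightmost nonzero entry of alpha - beta is negative, which tuple
    comparison sees after reversing and negating the entries."""
    return (sum(alpha),) + tuple(-e for e in reversed(alpha))

# x y^5 z^2 versus x^4 y z^3: both have degree 8, but grlex and grevlex
# disagree about which is bigger.
a, b = (1, 5, 2), (4, 1, 3)
print(grlex_key(a) > grlex_key(b))      # False: x^4 y z^3 wins in grlex
print(grevlex_key(a) > grevlex_key(b))  # True: x y^5 z^2 wins in grevlex
```

Sorting a list of exponents with one of these keys (largest last) is then just `sorted(exponents, key=grevlex_key)`.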

Now, we define the multidegree of a polynomial $f$ to be the maximal $\alpha$ appearing among its terms. The lead monomial is then $\mathrm{LM}(f)=x^{\mathrm{multideg}(f)}$, the lead coefficient $\mathrm{LC}(f)$ is the coefficient of $\mathrm{LM}(f)$, and the lead term is $\mathrm{LT}(f)=\mathrm{LC}(f)\,\mathrm{LM}(f)$. Now, for $n=1$ these orderings all reduce to the normal ordering on $\mathbb{N}$, and there we use the grading to do division. It turns out that a monomial ordering is just what we need for bigger rings.

Let $I=(f_1,\ldots,f_s)$ in $k[x_1,\ldots,x_n]$ and fix a monomial ordering. Then there is a division algorithm. We start with the generators of the ideal, in some order, and the polynomial $f$ to be divided by them. Then we proceed by taking the lead term of $f$, which is the lead coefficient times the lead monomial, and we look to see if the lead monomial of $f_1$ divides it; failing that, we check all the other lead terms, in order. The goal is to write $f = a_1 f_1 + \cdots + a_s f_s + r$, so when $\mathrm{LT}(f_i)$ divides $\mathrm{LT}(f)$, we take $a_i$ (all of which start at zero) and replace it by $a_i + \mathrm{LT}(f)/\mathrm{LT}(f_i)$, and then we replace $f$ with $f - (\mathrm{LT}(f)/\mathrm{LT}(f_i))f_i$. Now, if none of the lead terms divide the lead term of $f$, we replace $r$ (which starts as zero, of course) with $r + \mathrm{LT}(f)$ and replace $f$ with $f - \mathrm{LT}(f)$. We continue this process until $f$ is gone. In fact, the remainder is a linear combination of monomials, none of which is divisible by any of the lead terms of the $f_i$.

Now, this does tell us something about whether $f$ lies in $I$: if the remainder is zero, then $f \in I$. However, there are some problems with this algorithm. For one, the choice of generating set for the ideal matters, and, even worse, the ordering of the generating set matters. Try dividing $x^2y + xy^2 + y^2$ by $xy-1$ and $y^2-1$ with the lexicographic order. The remainder will be $x+y+1$. Reversing the order of the divisors, the division gives remainder $2x+1$. In particular, a nonzero remainder doesn’t prove that $f \notin I$.
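The division algorithm, and its dependence on the order of the divisors, can be sketched in pure Python. This is my own minimal implementation, not the author’s: polynomials are dicts from exponent tuples to coefficients, and since Python compares tuples lexicographically, `max` over the exponents picks out the lead monomial in lex order.

```python
from fractions import Fraction

# A polynomial in k[x, y] is a dict from exponent tuples to coefficients.
# Lex order is Python's tuple comparison, so max(f) is the lead monomial.

def lead_term(f):
    m = max(f)
    return m, f[m]

def sub(f, g):
    """Return f - g, dropping zero coefficients."""
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) - c
        if h[m] == 0:
            del h[m]
    return h

def term_mul(f, m, c):
    """Multiply f by the term c * x^m."""
    return {tuple(a + b for a, b in zip(e, m)): c * coef
            for e, coef in f.items()}

def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def divide(f, divisors):
    """Divide f by an ordered list of divisors; return (quotients, remainder)."""
    quotients = [dict() for _ in divisors]
    remainder, p = {}, dict(f)
    while p:
        lm, lc = lead_term(p)
        for q, g in zip(quotients, divisors):
            gm, gc = lead_term(g)
            if divides(gm, lm):
                # cancel the lead term of p against g
                t = tuple(a - b for a, b in zip(lm, gm))
                coef = Fraction(lc) / gc
                q[t] = q.get(t, 0) + coef
                p = sub(p, term_mul(g, t, coef))
                break
        else:
            # no lead term divides: the lead term of p goes to the remainder
            remainder[lm] = remainder.get(lm, 0) + lc
            del p[lm]
    return quotients, remainder

# Divide x^2 y + x y^2 + y^2 by (xy - 1, y^2 - 1), then in the other order.
f = {(2, 1): 1, (1, 2): 1, (0, 2): 1}
g1 = {(1, 1): 1, (0, 0): -1}   # xy - 1
g2 = {(0, 2): 1, (0, 0): -1}   # y^2 - 1
_, r1 = divide(f, [g1, g2])
_, r2 = divide(f, [g2, g1])
print(r1)  # x + y + 1
print(r2)  # 2x + 1
```

The two remainders really do differ, which is exactly the order dependence that makes a bare generating set unusable for deciding membership.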

To solve this problem, we define another ideal. Given $I$, we define $\mathrm{LT}(I)$ to be the ideal generated by the elements of the form $\mathrm{LT}(f)$ for $f \in I$. Now, this is finitely generated (by the Hilbert basis theorem), which is helpful. We define a Groebner basis for $I$ to be a finite set $\{g_1,\ldots,g_t\}\subset I$ such that the ideal $(\mathrm{LT}(g_1),\ldots,\mathrm{LT}(g_t))$ is equal to $\mathrm{LT}(I)$. Now, every ideal other than the zero ideal has a Groebner basis, and a Groebner basis necessarily generates the ideal.

Groebner bases have the nice property that for any $f$ and any ideal $I$, if you take a Groebner basis $G$ for $I$, then there’s a unique $r$ such that no term of $r$ is divisible by the lead term of an element of the Groebner basis and there is $g \in I$ such that $f = g + r$. The point of this is that it doesn’t matter what order you perform the division in! This gives us the fact that if we divide a polynomial by a Groebner basis, then $f \in I$ iff $r = 0$, which is great: it solves the ideal membership problem completely, so long as we can find Groebner bases.

So for now we set up the following notation: if $F = (f_1,\ldots,f_s)$ is an ordered $s$-tuple of polynomials, then we denote by $\overline{f}^F$ the remainder of $f$ upon division by $F$. Now, given two monomials $x^\alpha$ and $x^\beta$, we will denote their least common multiple by $x^\gamma$, where $\gamma_i = \max(\alpha_i,\beta_i)$. In fact, given polynomials $f$ and $g$, we’ll use $x^\gamma$ to denote the least common multiple of their lead monomials. So then we define the S-polynomial to be $S(f,g) = \frac{x^\gamma}{\mathrm{LT}(f)}f - \frac{x^\gamma}{\mathrm{LT}(g)}g$. The S stands for syzygy, and we’ll talk about syzygies more at some point in the future. It just happens that all the syzygies in this situation are given by S-polynomials.
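In code, the S-polynomial is nearly a one-liner once we have lead terms and least common multiples of exponents. A sketch of mine, using the same dict representation of polynomials and the lex order:

```python
from fractions import Fraction

# Polynomials are dicts from exponent tuples to coefficients; lex order,
# so max over the exponent tuples is the lead monomial.

def term_mul(f, m, c):
    """Multiply f by the term c * x^m."""
    return {tuple(a + b for a, b in zip(e, m)): c * coef
            for e, coef in f.items()}

def sub(f, g):
    """Return f - g, dropping zero coefficients."""
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) - c
        if h[m] == 0:
            del h[m]
    return h

def s_poly(f, g):
    """S(f, g) = (x^gamma / LT(f)) f - (x^gamma / LT(g)) g, where x^gamma
    is the least common multiple of the lead monomials of f and g."""
    fm, gm = max(f), max(g)
    gamma = tuple(max(a, b) for a, b in zip(fm, gm))
    tf = tuple(a - b for a, b in zip(gamma, fm))
    tg = tuple(a - b for a, b in zip(gamma, gm))
    return sub(term_mul(f, tf, Fraction(1) / f[fm]),
               term_mul(g, tg, Fraction(1) / g[gm]))

# S(xy - 1, y^2 - 1): the lcm of the lead monomials is xy^2, and
# S = y(xy - 1) - x(y^2 - 1) = x - y; the lead terms cancel by design.
print(s_poly({(1, 1): 1, (0, 0): -1}, {(0, 2): 1, (0, 0): -1}))
```

The whole point of the construction is visible in the example: the lead terms of the two scaled polynomials are both $x^\gamma$, so they cancel, exposing whatever lower-order interaction the pair has.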

Now, Buchberger gave a criterion for when a basis is a Groebner basis, which can then be turned into an algorithm. If $G = \{g_1,\ldots,g_t\}$ is a basis, it is a Groebner basis if and only if for all $i \neq j$, we have $\overline{S(g_i,g_j)}^G = 0$.

So now Buchberger’s algorithm is a simple idea. Take any basis and toss things into it until it’s a Groebner basis. Start out with $G = \{f_1,\ldots,f_s\}$ and take a pair $g_i, g_j$ in it. Then look at $\overline{S(g_i,g_j)}^G$. If it isn’t zero, then append it to $G$. Continue until done. This process can be shown to terminate, and so shows that any basis can be extended to a Groebner basis.
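Putting the pieces together, here’s a self-contained sketch of Buchberger’s algorithm, again in the dict-polynomial, lex-order setup and with no optimizations (real implementations discard many pairs in advance):

```python
from fractions import Fraction
from itertools import combinations

# Polynomials in k[x, y] as dicts from exponent tuples to coefficients;
# lex order via Python's tuple comparison, so max(f) is the lead monomial.

def sub(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) - c
        if h[m] == 0:
            del h[m]
    return h

def term_mul(f, m, c):
    return {tuple(a + b for a, b in zip(e, m)): c * coef
            for e, coef in f.items()}

def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def remainder(f, G):
    """Remainder of f upon division by the list G."""
    r, p = {}, dict(f)
    while p:
        lm = max(p)
        for g in G:
            gm = max(g)
            if divides(gm, lm):
                t = tuple(a - b for a, b in zip(lm, gm))
                p = sub(p, term_mul(g, t, Fraction(p[lm]) / g[gm]))
                break
        else:
            r[lm] = r.get(lm, 0) + p.pop(lm)
    return r

def s_poly(f, g):
    fm, gm = max(f), max(g)
    gamma = tuple(max(a, b) for a, b in zip(fm, gm))
    return sub(term_mul(f, tuple(a - b for a, b in zip(gamma, fm)),
                        Fraction(1) / f[fm]),
               term_mul(g, tuple(a - b for a, b in zip(gamma, gm)),
                        Fraction(1) / g[gm]))

def buchberger(F):
    """Extend F to a Groebner basis: append the nonzero remainder of each
    S-polynomial until every pair reduces to zero."""
    G = [dict(f) for f in F]
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        r = remainder(s_poly(G[i], G[j]), G)
        if r:
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

# Start from the order-dependent basis (xy - 1, y^2 - 1) of the earlier
# example; the algorithm appends x - y, giving a 3-element Groebner basis.
G = buchberger([{(1, 1): 1, (0, 0): -1}, {(0, 2): 1, (0, 0): -1}])
print(len(G))

# Ideal membership is now decidable: x^2 - 1 = x(x - y) + (xy - 1) lies
# in the ideal, and indeed its remainder against G is zero.
print(remainder({(2, 0): 1, (0, 0): -1}, G))  # {}
```

Dividing by the resulting `G` now gives remainder zero exactly for elements of the ideal, regardless of the order of the divisors.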

Now, we get all sorts of extra stuff in our basis this way, stuff that isn’t needed even for it to be Groebner. To get a smaller one, we note that if $g \in G$ is such that $\mathrm{LT}(g)$ is in the ideal generated by the lead terms of the other elements of $G$, then $G \setminus \{g\}$ is also a Groebner basis.
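Since lead terms are monomials, the condition that the lead term of $g$ lies in the ideal generated by the other lead terms just says that some other lead monomial divides it, so the pruning step is a short sketch (same dict polynomials and lex order as before):

```python
def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def minimize(G):
    """Repeatedly discard any element whose lead monomial is divisible by
    the lead monomial of another element; what remains is still a
    Groebner basis of the same ideal."""
    G = list(G)
    pruned = True
    while pruned:
        pruned = False
        for i, g in enumerate(G):
            others = (max(h) for j, h in enumerate(G) if j != i)
            if any(divides(m, max(g)) for m in others):
                del G[i]
                pruned = True
                break
    return G

# The Groebner basis {xy - 1, y^2 - 1, x - y} produced by Buchberger's
# algorithm: the lead monomial xy of xy - 1 is divisible by the lead
# monomial x of x - y, so xy - 1 can be dropped.
G = minimize([{(1, 1): 1, (0, 0): -1},
              {(0, 2): 1, (0, 0): -1},
              {(1, 0): 1, (0, 1): -1}])
print(G)  # [{(0, 2): 1, (0, 0): -1}, {(1, 0): 1, (0, 1): -1}]
```

Here the basis shrinks to $\{y^2 - 1,\ x - y\}$, whose lead monomials $y^2$ and $x$ still generate the same lead term ideal.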

Now, we can use Groebner bases to do all sorts of fun things. They let us put together computational versions of the Elimination and Extension Theorems (we’re heading in that direction) and let us compute the kernels of certain ring homomorphisms, which means that we can get the ideals for varieties defined parametrically. We can even use them to perform primary decomposition. This means that Groebner bases give computational solutions to several of the problems in the first section of the dreaded book of Hartshorne. We’ll see more consequences of these ideas as we go.

But the first section isn’t the dreaded part. It’s the hugely unmotivated leap from the first to the second.

I’m aware, but still, these techniques helped when I sat down and did all the problems in chapter 1 in order to convince myself that I had enough classical geometry to understand the second part. I was way off in that belief, but still, it WAS helpful.

As for the unmotivated leap, I hope that I’ve at least helped motivate it by talking about nonreduced things, as well as schemes that are like nonreduced varieties but not of finite type (eg, the Hilbert scheme). Hartshorne even mentions the Hilbert scheme in his section on flatness, but is vague about it and doesn’t even define moduli spaces.

Plus, these techniques can be used on some of the problems later in the book, or at least generalizations of them can, at least in the special case of abstract varieties.

I remember working through chunks of Hartshorne in graduate school when I took a course in algebraic geometry; since you say you’ve done all the problems in chapter 1, maybe I could ask you whether the techniques described in this post have any bearing on problem 1.11 [which as I recall gave me a lot of trouble -- I don't think I managed to solve it :-( ]?

The techniques specifically don’t, but the ideas of looking at lead terms of polynomials do. Let me know if you want the trick. I was going to use these techniques explicitly to solve the problems involving cuspidal and twisted cubics.

I especially like how Groebner bases allow you to do the elimination theory to compute images of projective varieties. According to Hartshorne, to find the image in P^n of a projective subvariety of P^n x P^m, you should do something crazy with valuation rings. According to Harris’s book, you should compute some huge number of resultants, which is also unreasonable. But in fact, you should just compute a Groebner basis.

Yeah, a lot of problems like that are just tricks involving writing down the right ring homomorphism and computing the kernel. This is, I believe, my favorite use of Groebner bases, and sadly it doesn’t appear in Cox, Little, O’Shea, though it’s easy to prove it as a consequence of Elimination.
