## Resultants

Let’s say that you have two polynomials, and you REALLY need to know if they have a common root. Now, if they’re quadratic, you’re in luck, because we can solve them both completely and just check. In fact, if you’re patient, you can do it whenever the Galois group is solvable; in particular, you can do it for cubics and quartics. But in general? What if I gave you a degree 100 polynomial and a degree 103 polynomial? Well, you can still do it, without having to solve anything.

The object that solves this problem is called the Resultant. Let $f=a_0x^\ell+\ldots+a_\ell$ and $g=b_0x^m+\ldots +b_m$ with $a_0,b_0\neq 0$ and $\ell,m>0$. Then the resultant is the determinant of the $(\ell+m)\times(\ell+m)$ Sylvester matrix, whose columns are $m$ shifted copies of the coefficients of $f$ followed by $\ell$ shifted copies of the coefficients of $g$: $Res(f,g)=\det\left[\begin{array}{cccccccc}a_0&&&&b_0&&&\\a_1&a_0&&&b_1&b_0&&\\a_2&a_1&\ddots&&b_2&b_1&\ddots&\\\vdots&a_2&\ddots&a_0&\vdots&b_2&\ddots&b_0\\a_\ell&\vdots&\ddots&a_1&b_m&\vdots&\ddots&b_1\\&a_\ell&&a_2&&b_m&&b_2\\&&\ddots&\vdots&&&\ddots&\vdots\\&&&a_\ell&&&&b_m\\\end{array}\right]$
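To make this concrete, here's a quick computational sketch in Python with sympy (the helper name `sylvester_resultant` is just something made up for this post) that builds the Sylvester matrix above and takes its determinant:

```python
from sympy import Matrix

def sylvester_resultant(f_coeffs, g_coeffs):
    """Resultant of f (degree l) and g (degree m) as the determinant of the
    (l+m) x (l+m) Sylvester matrix.  Coefficients go from the leading term
    down: f = a_0*x**l + ... + a_l."""
    l, m = len(f_coeffs) - 1, len(g_coeffs) - 1
    M = Matrix.zeros(l + m, l + m)
    for j in range(m):                      # m shifted columns of f's coefficients
        for i, a in enumerate(f_coeffs):
            M[i + j, j] = a
    for j in range(l):                      # l shifted columns of g's coefficients
        for i, b in enumerate(g_coeffs):
            M[i + j, m + j] = b
    return M.det()

# f = (x-1)(x-2) and g = (x-2)(x-3) share the root x = 2
print(sylvester_resultant([1, -3, 2], [1, -5, 6]))   # 0
# f = (x-1)(x-2) and g = x^2 - 5x + 7 have no common root
print(sylvester_resultant([1, -3, 2], [1, -5, 7]))   # 3
```

Note how sparse the matrix is: each column carries only $\ell+1$ or $m+1$ nonzero entries, which is why computers handle even large cases comfortably.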

Now, this is actually a rather good thing to use for computations: though the determinant gets big, there are a lot of zeros floating around, and computers can handle large determinants. Resultants have three really nice properties:

1. The resultant is an integer polynomial in the coefficients of $f,g$.
2. $Res(f,g)=0$ iff $f,g$ have a common factor.
3. There are polynomials $a,b$ such that $af+bg=Res(f,g)$.
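Here's a sketch of the first two properties in action, using sympy's built-in `resultant` (assuming you have sympy available): the resultant of two generic quadratics comes out as an honest integer polynomial in the coefficients, and it vanishes exactly when you specialize to polynomials with a common factor.

```python
from sympy import symbols, resultant, expand

x, a0, a1, a2, b0, b1, b2 = symbols('x a0 a1 a2 b0 b1 b2')

f = a0*x**2 + a1*x + a2
g = b0*x**2 + b1*x + b2

# property 1: an integer polynomial in the coefficients of f and g
R = expand(resultant(f, g, x))
print(R)

# property 2: it vanishes exactly when f and g share a factor
# f = (x-1)(x-2) and g = (x-2)(x-3) share the root x = 2
common = {a0: 1, a1: -3, a2: 2, b0: 1, b1: -5, b2: 6}
print(R.subs(common))   # 0
```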

The last property is called the Elimination property, and it ties in with the Elimination Theorem from last time. If $f,g\in k[x_1,\ldots,x_n]$, then we choose a variable and write $f,g$ as polynomials in that variable whose coefficients are polynomials in the others. Then $Res(f,g,x_i)$ (written to point out the dependence on the chosen variable) is in the ideal generated by $f,g$, but also doesn’t depend on the variable $x_i$, and so lies in the elimination ideal.
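As a small worked example of eliminating a variable this way (again in sympy, with an example chosen just for this post): intersect the unit circle with the line $x=y$, and the resultant in $x$ spits out a polynomial in $y$ alone whose roots are the $y$-coordinates of the intersection points.

```python
from sympy import symbols, resultant, solve

x, y = symbols('x y')

f = x**2 + y**2 - 1          # the unit circle
g = x - y                    # the line x = y
h = resultant(f, g, x)       # eliminate x
print(h)                     # 2*y**2 - 1
print(solve(h, y))           # y = +/- sqrt(2)/2, the intersection heights
```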

Now, we’ve done resultants for one variable. But algebra in one variable is the same as the algebra of homogeneous polynomials in two variables, so we extend the resultant to these in the most obvious way possible. Then the condition that the resultant is zero turns out to be equivalent to having a nontrivial solution, that is, one other than $(0,0)$. But if they have one, then there’s a whole line’s worth of them, and so the resultant determines whether the two polynomials share any zeros on $\mathbb{P}^1$.

Now, we generalize to get some nice stuff. To start, we denote the above resultants, as integer polynomials, by $Res_{\ell,m}$. We can handle pairs of polynomials fairly well with these. But what about a whole pile of polynomials of various degrees in more variables? Say, take $F_0,\ldots,F_n$ to be homogeneous polynomials in the variables $x_0,\ldots,x_n$.

Well, we can fix degrees $d_0,\ldots,d_n$, the degrees of the $F_0,\ldots,F_n$. Now, there is a unique integer polynomial in their coefficients satisfying the properties:

1. $F_0=\ldots=F_n=0$ has a nontrivial solution (over $\mathbb{C}$) if and only if $Res_{d_0,\ldots,d_n}(F_0,\ldots,F_n)=0$.
2. $Res_{d_0,\ldots,d_n}(x_0^{d_0},\ldots,x_n^{d_n})=1$ (This is just to normalize things, generally we’ll only care if a resultant is zero or not).
3. $Res_{d_0,\ldots,d_n}$ is an irreducible polynomial over $\mathbb{C}$.

Now, proving existence is hard and long, but we can still look at these polynomials and see what we can say about them.

For instance, in the case that all the $F_i$ are linear, with $F_i=\sum_{j=0}^n c_{ij}x_j$, then $Res_{1,\ldots,1}(F_0,\ldots,F_n)=\det(c_{ij})$. Also, the resultant of two polynomials in one variable is a special case here, with $n=1$.

Now, these general resultants, when computable, are rather nice for working out whether a hypersurface is smooth. Let $f$ be a homogeneous polynomial of degree $d+1$ on $\mathbb{P}^n$. Then a singular point is a common nontrivial solution of the $n+1$ partial derivatives, which are homogeneous of degree $d$ in the $n+1$ variables. So we look at $Res_{d,\ldots,d}(f_{x_0},\ldots,f_{x_n})$, and it is zero if and only if $V(f)$ is singular.
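For quadrics ($d+1=2$) this is easy to see in code, because the partials are linear forms and the linear case above says the resultant is just a determinant. A sketch in sympy (the function name `quadric_resultant` is invented here, and this shortcut only works in the quadric case):

```python
from sympy import Matrix, symbols, diff

x0, x1, x2 = symbols('x0 x1 x2')

def quadric_resultant(f):
    """For a quadric in P^2 the three partials are linear forms, so by the
    linear case Res_{1,1,1} is the determinant of their coefficient matrix."""
    partials = [diff(f, v) for v in (x0, x1, x2)]
    M = Matrix([[p.coeff(v) for v in (x0, x1, x2)] for p in partials])
    return M.det()

print(quadric_resultant(x0**2 + x1**2 + x2**2))   # 8: nonzero, a smooth conic
print(quadric_resultant(x0**2 + x1**2))           # 0: singular at [0:0:1]
```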

As an aside and an application, the hypersurfaces of degree $d$ on $\mathbb{P}^n$ naturally form a projective space, and the zero locus of the resultant as above is precisely the set of singular hypersurfaces. As this locus is cut out by a single polynomial, the collection of singular degree $d$ hypersurfaces is a divisor on the space of all degree $d$ hypersurfaces.

Now, before stopping, we’ll define one more thing in terms of resultants, and we’re even going to go back to the case of just two polynomials in one variable. Let $f$ be a polynomial in $x$; then we define the discriminant $\Delta$ to be $Res\left( f,f'\right)$. This turns out to be wonderful, as it tells us whether $f$ has a multiple root, because that’s the same thing as $f$ and $f'$ having a common root. Well, strictly, we define the discriminant to be $\frac{(-1)^{\ell(\ell-1)/2}}{a_0}Res(f,f')$, to make the signs all work out. So now, if you plug in a quadratic, you get the expected $b^2-4ac$.
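You can check this last claim directly: sympy has a built-in `discriminant`, and computing it straight from the definition with the sign factor gives the same answer for a generic quadratic.

```python
from sympy import symbols, discriminant, resultant, cancel

x, a, b, c = symbols('x a b c')
f = a*x**2 + b*x + c

# sympy's built-in discriminant
print(discriminant(f, x))   # b**2 - 4*a*c

# the same thing straight from the definition: (-1)^(l(l-1)/2)/a_0 * Res(f, f')
l = 2
disc_from_def = cancel((-1)**(l*(l-1)//2) * resultant(f, f.diff(x), x) / a)
print(disc_from_def)        # b**2 - 4*a*c
```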


### 8 Responses to Resultants

1. Maurizio says:

I’d like to say that if you just want to know if two fixed polynomials (e.g. in Q[x]) have a common root, you can just compute the GCD, but the resultant additionally gives you a polynomial in the coefficients that is zero if and only if the two polynomials have a common root, and as you said this is very useful if the polynomials actually are in, say, Q[x,y,z]. Another very instructive thing is working out the Gaussian elimination of the Sylvester matrix, subtracting multiples of the columns with the smaller polynomial from the columns with the bigger one, and so on; this amounts exactly to calculating the GCD of the two polynomials by repeated subtractions (i.e. if f_0 has degree n_0 and f_1 degree n_1, with n_0 >= n_1, and their leading coefficients are a_0 and a_1, we get f_2 = f_0 - x^(n_0-n_1)*f_1*a_0/a_1, and so on).

2. Charles says:

Yeah, I think that working out the resultant in $\mathbb{Q}[x]$ or the like is approximately the same thing as working out the GCD, but the real value of resultants is that they give you an expression which you can just plug polynomials into and compute, and all you need to check is whether or not it’s zero. Also, things like the discriminant mentioned above are rather useful in classical invariant theory, because they are easily computed from your polynomial and give you information about the geometry of the roots (i.e., is there a multiple root?).

3. Pingback: Dual Curves « Rigorous Trivialities

4. George says:

Hi, nice article.

Do you know whether the multivariate resultant Res is in the ideal generated by the F_i, just as it is in the univariate case (property 3 above)?

Thanks, George

5. shaffaf says:

How can you decide if two quartic polynomials have a common root using the concept of the Galois group? We know that all the roots of these two polynomials are complex.

• Sorry this took so long. To show that two polynomials have a common root you just compute the resultant, as above. Galois theory has nothing to do with it; the Galois group doesn’t say what the roots are, only what the symmetries of the roots are.