This is, sadly, the last part of my posts about p-adic arithmetic. It’s not for lack of information, but rather for lack of competence on my part. The area of math that I’m worst at is analysis, and an awful lot of the interesting things that you can do with p-adic numbers are analysis.
For an explanation of what p-adic numbers are, and how arithmetic on them works, you can refer to my first post on the p-adics, and for an explanation of p-adic metrics and distance, you can look at this post.
Today, we’re going to look at Hensel’s lemma. This is really cool. Hensel’s lemma shows you a simple way of finding polynomial roots in p-adic arithmetic. It’s sometimes called Newton’s method for p-adics, which is both a good description of how it’s used and a complete misnomer, because the lemma is a description of a property, while Newton’s method is a description of a procedure.
Hensel’s lemma says that if you’ve got a polynomial, and for a prime number p, you can find a root using modulo arithmetic in base p, then you can use the base-p root to find the corresponding root using modulo base p^2, and given the root using base p^2, you can find it for modulo p^3, and so on!
So far, this is just a statement about modulo arithmetic – not about p-adic arithmetic. But what ends up happening is that the result modulo p gives you a first approximation of the root in the p-adics. Using the process defined by Hensel’s lemma, you can take that root and extend it to a better approximation. It ends up being the case that each time you lift the result to a higher exponent of p, you’re performing the exact analog of one step in Newton’s method of finding roots! But it’s actually better than that. If you look at Hensel’s lemma in the p-adic numbers, then each step extends the approximation by exactly one digit!
Let’s look at the lemma in formal terms:
Suppose that you have a polynomial f(x).

If there is an integer i and a prime number p such that f(i) = 0 in mod p, then there is a j such that f(j) = 0 in mod p^2. In general, if there’s an exponent k where f(i) = 0 in mod p^k, then there’s a j where f(j) = 0 in mod p^(k+1).

In fact, we can do even better than that! If we know the root i for an exponent k, then we can easily compute j using a process called lifting:

j = i - f(i)/f'(i) (in mod p^(k+1))

(In this, f'(i) is the first derivative of f at i.)
You can see, looking at that equation, that the derivative of f at i can’t be zero mod p, or else the denominator of the fraction would be zero, and everything would go kablooie in your face!
In p-adic arithmetic, this becomes absolutely amazing. If i is the root mod p^k, then the root j mod p^(k+1) is:

j = i - f(i)/f'(i)
It can’t be simpler. And you can clearly see the relation to Newton’s method.
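To make the lifting step concrete, here’s a minimal Python sketch of my own (not from the original lemma – the name hensel_lift and its arguments are just my illustration). It uses Python’s three-argument pow to compute the modular inverse of f'(i):

```python
def hensel_lift(f, df, root, p, k):
    """Lift a root of f mod p**k to a root mod p**(k+1).

    f and df are the polynomial and its derivative, given as
    Python functions. Requires df(root) nonzero mod p.
    """
    # Inverse of f'(root) mod p -- it exists because f'(root) != 0 mod p.
    inv = pow(df(root) % p, -1, p)
    # The Newton-style update j = i - f(i)/f'(i), reduced mod p**(k+1).
    return (root - f(root) * inv) % p ** (k + 1)


# For example, with f(x) = x**2 - 2 and the mod-7 root 3:
f = lambda x: x * x - 2
df = lambda x: 2 * x
print(hensel_lift(f, df, 3, 7, 1))  # 10, the root mod 49
```

Because f(root) is already divisible by p^k, the inverse of f'(root) only needs to be computed mod p for the lifted value to be a root mod p^(k+1).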
For fun, let’s try looking at an example: let’s take the square root of 2 in the 7-adics. In base 10, the square root of 2 is roughly 1.4142135624. In 7-adic, it’s going to be something completely different. We’ll start by phrasing it as a polynomial: f(x) = x^2 - 2. So, what’s the solution to that mod 7? 3: in mod 7, 3*3 = 9 = 2.
So, our first approximation of a square root of 2 is 3.
We can extend to the next digit using Hensel’s lemma, and we get j = 3 - (3^2 - 2)/(2*3) = 3 - 7/6. We’re looking at mod 7 arithmetic here, so 6 = -1, and dividing by 6 is the same as multiplying by -1. So j = 3 + 7 = 10 mod 49, which is 13 in base 7.
Repeating the process, we’d get 213, then 6213, and so on (those are base-7 digits).
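As a sanity check on that example, here’s a short Python loop (my own sketch, not part of the original derivation) that repeats the lift for f(x) = x^2 - 2, starting from the mod-7 root 3:

```python
# Repeatedly lift the mod-7 root of x^2 - 2 to higher powers of 7.
p = 7
root = 3                                 # 3^2 = 9 = 2 (mod 7)
for k in range(1, 4):
    inv = pow((2 * root) % p, -1, p)     # inverse of f'(root) = 2*root, mod 7
    root = (root - (root * root - 2) * inv) % p ** (k + 1)
    print(root, root * root % p ** (k + 1))

# Prints 10, 108, 2166 -- each squares to 2 mod 49, 343, 2401.
# In base 7 those are 13, 213, 6213: exactly one new digit per lift.
```

Note that each successive approximation agrees with the previous one in its low-order base-7 digits, which is why the lift adds one digit per step.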
And in fact, your original mod p solution will thus lift to an honest solution in Z_p.
If you want to avoid the analysis but still learn and write about the p-adics, you can explore their interesting topology.
Really enjoyed reading your articles on p-adic numbers thanks. But I don’t quite understand the last few lines. Why are we able to take the fractional part of j in mod7, giving “6=-1”? I thought we were not working in mod7, otherwise j=10(mod7)=3?
Thanks in advance
would the formula j=i+[f'(i)(modp)]f(i) be better?
I’m sure I’m being remarkably dim here, but could you walk us through your penultimate calculation of j? I can’t see how it reduces to (9 – 2)/6