# Faster computation of p-adic log

As I see it, $$p$$-adic integers work very similarly to formal power series over $$x$$ (e.g. with regard to Hensel lifting).

When it comes to computing $$\log P(x)$$, one may use the formula

$$(\log P)' = \frac{P'}{P}$$

to compute the expansion of the logarithm of $$P(x)$$ with $$P(0)=1$$ as

$$\log P \equiv \int \frac{P'}{P} dx \pmod{x^n}.$$

This is the main trick to compute the first $$n$$ coefficients of $$\log P(x)$$ in $$O(M(n))$$, where $$M(n)$$ is the time needed to compute the product of two polynomials of degree at most $$n$$.
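For reference, here is a minimal sketch of that trick over $$\mathbb{Q}[x]$$, using naive $$O(n^2)$$ multiplication rather than a fast $$M(n)$$ (function name is mine):

```python
from fractions import Fraction

def series_log(p, n):
    """First n coefficients of log P(x) for P with P(0) = 1,
    via (log P)' = P'/P: invert P, multiply by P', integrate.
    Naive O(n^2) arithmetic, for illustration only."""
    p = [Fraction(c) for c in p] + [Fraction(0)] * max(0, n - len(p))
    q = [Fraction(0)] * n                 # q = P^{-1} mod x^n
    q[0] = Fraction(1)                    # since P(0) = 1
    for k in range(1, n):
        # coefficient of x^k in P*Q must vanish
        q[k] = -sum(p[j] * q[k - j] for j in range(1, k + 1))
    dp = [(k + 1) * p[k + 1] for k in range(n - 1)]           # P'
    ratio = [sum(dp[j] * q[k - j] for j in range(k + 1))      # P'/P mod x^{n-1}
             for k in range(n - 1)]
    return [Fraction(0)] + [ratio[k] / (k + 1) for k in range(n - 1)]
```

For example, `series_log([1, 1], 4)` returns the familiar $$x - \frac{x^2}{2} + \frac{x^3}{3}$$.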

Is there any similar way to compute modulo $$p^n$$ the expansion of the $$p$$-adic logarithm of the $$p$$-adic integer $$r$$ such that $$r \equiv 1 \pmod p$$ in $$O(M(n))$$?

As I see it, the regular notion of polynomial derivatives can't be applied directly to $$p$$-adic integers, as $$(uv)' = u'v+uv'$$ wouldn't hold. So, maybe there is a way to define some other reasonably invertible function $$d(r)$$ which is simple enough to compute and such that

$$d(uv) = u d(v) + v d(u)$$

for any $$p$$-adic numbers $$u$$ and $$v$$?

• You'll want $r \equiv 1 \bmod 4$ if $p = 2$. May 21 at 18:29
• Thanks for pointing that out! But doesn't the logarithm converge for $r \equiv 3 \pmod 4$ as well? It's just that $\exp \log x = -x$ in this case. May 21 at 18:58
• Ah, whoops! I was confusing the discs of convergence of the $2$-adic exponential and logarithm. But if you want to accelerate convergence, you might as well focus on $1 + 4\mathbf Z_2$ since, as you note, $\log(x) = \log(-x)$ for $x \in \mathbf Z_2^\times$. Moreover, for $r \in 1 + p\mathbf Z_p$, $\log(r) = (1/p^n)\log(r^{p^n})$ and $r^{p^n} \equiv 1 \bmod p^n\mathbf Z_p$, so if you can rapidly compute the $p$-adic logarithm on numbers very close to $1$, then that formula shows how you can reduce logarithm calculations elsewhere in $1+p\mathbf Z_p$ to the case of numbers very close to $1$. May 21 at 19:41
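To make the reduction in the last comment concrete, here is a naive evaluation of the $p$-adic logarithm by its series (no attempt at the asymptotics asked about; function name is mine), against which the identity $\log(r^p) = p \log r$ can be checked numerically:

```python
def padic_log(r, p, N):
    """log r as an integer mod p^N, for r ≡ 1 (mod p) (and mod 4 if p = 2),
    by naive evaluation of log(1+y) = sum_{k>=1} (-1)^(k+1) y^k / k.
    Terms with p | k are handled exactly: write k = p^v * u and divide
    y^k by p^v over the integers before inverting u mod p^N."""
    pN = p ** N
    y = (r - 1) % pN
    total = 0
    for k in range(1, 2 * N + 4):   # safe cutoff: all later terms vanish mod p^N
        v, u = 0, k
        while u % p == 0:           # k = p^v * u with p not dividing u
            u //= p
            v += 1
        if k - v >= N:              # this term ≡ 0 (mod p^N), skip it
            continue
        term = (y ** k // p ** v) * pow(u, -1, pN) % pN
        total = (total + term if k % 2 else total - term) % pN
    return total
```

With this, `padic_log(6**5, 5, 8) == 5 * padic_log(6, 5, 8) % 5**8` holds, which is the comment's reduction for $p = 5$, $n = 1$.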

There is a method with bit complexity $$O(n \log^3 n)$$, which is an adaptation of the "bit-burst algorithm" for real and complex functions. The idea here is to integrate the solution of a differential equation using binary splitting evaluation of power series, using integration steps that converge exponentially to the evaluation point (in steps of $$2^{-2^n}$$ in the real setting, and $$p$$-adically in steps of $$p^{2^n}$$).

This generalizes to many functions satisfying simple differential equations. In that sense, it is related to the integration method for formal power series, though not exactly the same. The technique is discussed in significant detail in the recent (2021) preprint "Fast evaluation of some p-adic transcendental functions" by Xavier Caruso, Marc Mezzarobba, Nobuki Takayama, and Tristan Vaccon (https://hal.archives-ouvertes.fr/hal-03263044v1).

I don't know who came up with this method in the $$p$$-adic setting originally, and the authors of the above paper describe it as "folklore". David Harvey certainly knew about it in 2007 (remark after Proposition 9 in https://arxiv.org/abs/0708.3404), and I learned about it talking to David in 2012. Sebastian Pancratz and I then implemented asymptotically fast $$p$$-adic exponentials and logarithms in FLINT, confirming that the method works very well in practice.

• Thanks for the answer and for telling about the background of the algorithm, it was an interesting read. The algorithm in the preprint seems very general and abstract, and it was difficult for me to grasp. I tried to recap it in a separate answer; I would highly appreciate it if you could comment on whether I managed to grasp the core ideas! May 21 at 22:00

Fredrik Johansson gave a very good answer, but I feel like the preprint's notation is a bit too dense and abstract (with finite extensions, uniformizers, ramification and such), so I'll try to recap its content here in plainer terms. It turned out to be somewhat lengthy, not fit for a comment, so I will write it as another answer.

For $$x \equiv 0 \pmod p$$ we define a sequence $$y_1, y_2, \dots$$ by reading each $$y_k$$ off the running product: $$y_1 \equiv x \pmod{p^2}$$ and, for $$k > 1$$,

$$1 - y_{k} \equiv (1-x)(1+y_1)(1+y_2)\cdots(1+y_{k-1}) \pmod{p^{2^k}},$$

with $$y_k$$ reduced modulo $$p^{2^k}$$. (Defining $$y_k$$ from the whole running product, rather than from $$y_{k-1}$$ alone, is what keeps the truncation errors under control.) Writing $$P_k = (1-x)(1+y_1)\cdots(1+y_k)$$, it can be proven by induction that $$y_k$$ is divisible by $$p^{2^{k-1}}$$ and

$$P_k \equiv 1 \pmod{p^{2^k}},$$

because $$P_k = P_{k-1}(1+y_k)$$ with $$P_{k-1} \equiv 1-y_k \pmod{p^{2^k}}$$, and $$(1-y_k)(1+y_k) = 1-y_k^2 \equiv 1 \pmod{p^{2^k}}$$.

Now take $$m$$ with $$2^{m-1} \geq n$$ (that is, $$m \geq \log_2 n + 1$$). Then $$y_m$$ is divisible by $$p^{n}$$, hence

$$P_{m-1} \equiv 1 - y_m \equiv 1 \pmod{p^n}.$$

Written out in full, this is

$$1 \equiv (1-x)(1+y_1)(1+y_2)\dots(1+y_{m-1}) \pmod{p^n}.$$

Taking logarithms on both sides, we arrive at

$$\log(1-x) \equiv -\sum\limits_{i=1}^{m-1} \log(1+y_i) \pmod{p^n}.$$
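Sticking to plain integers, the construction of the $$y_k$$ and the product identity above can be sketched like this (function name is mine; each $$y_k$$ is read off the running product, which is kept to full precision $$p^n$$):

```python
def build_factors(x, p, n):
    """For x ≡ 0 (mod p), build y_1, ..., y_{m-1} with y_k divisible by
    p^(2^(k-1)), such that (1-x)(1+y_1)...(1+y_{m-1}) ≡ 1 (mod p^n).
    Each y_k is read off the running product, tracked mod p^n."""
    pn = p ** n
    prod = (1 - x) % pn               # running product, mod p^n
    ys, k = [], 1
    while 2 ** (k - 1) < n:           # once 2^(k-1) >= n, y_k ≡ 0 (mod p^n)
        y = (1 - prod) % p ** min(2 ** k, n)   # 1 - y_k ≡ product so far
        ys.append(y)
        prod = prod * (1 + y) % pn
        k += 1
    return ys, prod                   # prod ≡ 1 (mod p^n) on exit
```

For instance, `build_factors(10, 2, 8)` yields $$y_1 = 2$$, $$y_2 = 12$$, $$y_3 = 96$$, and the running product indeed ends at $$1 \pmod{2^8}$$.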

Now, $$y_i$$ is divisible by $$p^{2^{i-1}}$$ and is reduced modulo $$p^{2^i}$$, so it has at most $$2^{i-1}$$ significant base-$$p$$ digits. The logarithm of $$1+y_i$$ in such a case can be computed from the direct expansion

$$\log(1 + y_i) = \sum\limits_{k=1}^{\infty} \frac{(-1)^{k+1} y_i^k}{k}.$$

In this expansion, only roughly the first $$\frac{n}{2^{i-1}}$$ terms are of interest, because $$y_i^k$$ is divisible by $$p^{k \cdot 2^{i-1}}$$; they can be computed in $$O(n \log^2 n)$$ bit operations overall with a divide-and-conquer (binary splitting) algorithm, using the fact that $$y_i$$ has only $$2^{i-1}$$ significant digits.

To be a bit more specific, to compute $$\sum\limits_{k=1}^N \frac{y^k}{k_0+k}$$, the following two sums are computed recursively:

$$S_1 = \sum\limits_{k=1}^{\lfloor N/2 \rfloor} \frac{y^k}{k_0+k}$$

and

$$S_2 = \sum\limits_{k=1}^{\lceil N/2 \rceil} \frac{y^k}{(k_0+\lfloor N/2\rfloor)+k}.$$

Then, the second sum is multiplied by $$y^{\lfloor N/2 \rfloor}$$ and added to the first one, which gives the whole sum.
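A sketch of that recursion in code (function name is mine; for simplicity it assumes every denominator $$k_0+k$$ is invertible modulo the working modulus — in the full algorithm, terms with $$p \mid k$$ need the powers of $$p$$ in the denominator handled separately):

```python
def sum_series(y, k0, N, mod):
    """Binary splitting for sum_{k=1}^{N} y^k / (k0 + k)  (mod `mod`),
    assuming each k0 + k is invertible mod `mod`.
    Returns (sum, y^N mod `mod`) so that each level reuses
    the powers of y computed by its children."""
    if N == 0:
        return 0, 1
    if N == 1:
        return y * pow(k0 + 1, -1, mod) % mod, y % mod
    h = N // 2
    s1, yh = sum_series(y, k0, h, mod)          # terms k = 1 .. floor(N/2)
    s2, yr = sum_series(y, k0 + h, N - h, mod)  # terms k = floor(N/2)+1 .. N, shifted
    return (s1 + yh * s2) % mod, yh * yr % mod
```

Returning $$y^N$$ alongside the sum is what keeps the splitting cheap: the power $$y^{\lfloor N/2 \rfloor}$$ used to glue the halves together comes for free from the left recursive call.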

Since $$m$$ is roughly $$\log_2 n$$, summing this over all $$i$$ gives $$O(n \log^3 n)$$ in total.

I still wonder if it is possible to do it faster or simpler...