Try to optimize integral basis computation for global function fields over prime finite fields#41242
Conversation
It looks good to me! Do you have any idea about when or why one algorithm is faster than the other? In other words, is there any criterion for choosing an algorithm in a particular case? By the way, why did the doc build fail?
No clue about why (I actually expected the new version to always be faster, since both call Singular, but the new version calls a Singular function intended for this exact computation). As for when, I have a guess, but I'm not certain: Singular has two algorithms to compute the integral basis from the defining polynomial of the function field, and I am using the slower one ("global") here, which works on a larger variety of function fields. The faster one is "hensel", but some tests fail when I try to use it (I don't think "hensel" works in all cases where "global" does). There is definitely more optimization that could be done in future PRs (hence the "Try to" in the title).
The docbuild has been broken for a few weeks (see #40929). I don't think it was broken by any code changes; rather, we are hitting CI usage limits, so we need to be smarter now about how we use the CI. #41156 should supposedly help with that once it is ready.
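Coming back to the algorithm choice: one natural direction for a future PR would be to try the faster "hensel" algorithm first and fall back to "global" when it fails. The sketch below shows that pattern in plain Python; the function names and the failure condition are hypothetical stand-ins, not Singular's or Sage's actual interface.

```python
def integral_basis(field, algorithm="global"):
    # Hypothetical stand-in for the Singular integral-basis call.
    # In this sketch, "hensel" is assumed to fail on some inputs,
    # mirroring the behaviour described in the discussion above.
    if algorithm == "hensel" and not field.get("hensel_ok", False):
        raise NotImplementedError('"hensel" is not applicable here')
    return (field["name"], algorithm)

def integral_basis_with_fallback(field):
    """Try the faster "hensel" algorithm first, then fall back to "global"."""
    try:
        return integral_basis(field, algorithm="hensel")
    except NotImplementedError:
        return integral_basis(field, algorithm="global")

print(integral_basis_with_fallback({"name": "F"}))                      # → ('F', 'global')
print(integral_basis_with_fallback({"name": "F", "hensel_ok": True}))   # → ('F', 'hensel')
```

Whether this is safe in practice depends on "hensel" failing loudly (rather than returning a wrong basis) on the cases it cannot handle, which would need to be checked against Singular's behaviour.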
Your tests show similar results on my machine. I think this is a major improvement. Thanks!
#41248 fixes the doc build. It needs review.
Clean merge, setting back to positive review. (I did the merge to trigger CI to test the behaviour of the CI fix label and #41248.)
sagemathgh-41242: Try to optimize integral basis computation for global function fields over prime finite fields

Some work on sagemath#40147 for a case that Singular handles well.

This PR modifies the `_maximal_order_basis` method for function fields to call Singular's `integralBasis` function directly when possible. This is slightly slower for some function fields that were previously fast (but is still fast enough), and is significantly faster for some function fields that were previously very slow.

A few timing examples follow. The performance of `F_inv._maximal_order_basis()` is important because it is needed to do computations with infinite places of `F`.

# Example 1

Using the same example from sagemath#40147:

```python
K = GF(7)
Kx.<x> = FunctionField(K)
t = polygen(Kx)
F.<y> = Kx.extension(t^5 + (2*x + 5)*t^4 + (5*x^2 + 4)*t^3 + (4*x^3 + 2*x^2 + 2)*t^2 + (3*x^4 + 2*x^3 + 3*x^2 + 4*x + 6)*t + 6*x^5 + 6*x^4 + 5*x^3 + 3*x^2 + 6)
F_inv = F._inversion_isomorphism()[0]
print(timeit('F._maximal_order_basis()', number=5, repeat=5, preparse=False))
print(timeit('F_inv._maximal_order_basis()', number=5, repeat=5, preparse=False))
```

Before:

```
5 loops, best of 5: 3.3 ms per loop
5 loops, best of 5: 6.93 s per loop
```

After:

```
5 loops, best of 5: 5.29 ms per loop
5 loops, best of 5: 477 ms per loop
```

The new version computes `F_inv._maximal_order_basis()` in about 1/14 of the time of the old version.

# Example 2

The new code performs noticeably worse here, but is still fast enough, considering this computation is performed once per function field.
```python
K = GF(3)
Kx.<x> = FunctionField(K)
y = polygen(Kx)
phi = y^3 + (x^3 + x^2)*y^2 + x^3*y + 1
F.<y> = Kx.extension(phi)
F_inv = F._inversion_isomorphism()[0]
print(timeit('F._maximal_order_basis()', number=5, repeat=3, preparse=False))
print(timeit('F_inv._maximal_order_basis()', number=5, repeat=3, preparse=False))
```

Before:

```
5 loops, best of 3: 3.55 ms per loop
5 loops, best of 3: 4.55 ms per loop
```

After:

```
5 loops, best of 3: 26.5 ms per loop
5 loops, best of 3: 49.5 ms per loop
```

The new version is about 10 times slower here.

# Example 3

This example is genus 25.

```python
K = GF(37)
Kx.<x> = FunctionField(K)
y = polygen(Kx)
F.<y> = Kx.extension(y^3 + (18*x^9 + 4*x^8 + 17*x^7 + 36*x^6 + 10*x^5 + 32*x^4 + 16*x^3 + 7*x^2 + 17*x + 18)*y^2 + (14*x^17 + 17*x^16 + 9*x^14 + 36*x^13 + 22*x^12 + 6*x^11 + 17*x^10 + 17*x^9 + 23*x^8 + 23*x^7 + 11*x^6 + 14*x^5 + 28*x^4 + 24*x^3 + x^2 + 4*x + 30)*y + 6*x^26 + 8*x^25 + 2*x^24 + 18*x^23 + 19*x^22 + 26*x^21 + 10*x^20 + 32*x^19 + 5*x^18 + 6*x^17 + 35*x^16 + 32*x^15 + 10*x^14 + 32*x^13 + 33*x^12 + 30*x^11 + 9*x^10 + 20*x^9 + 26*x^8 + 9*x^7 + 7*x^6 + 10*x^5 + 36*x^4 + 15*x^3 + 28*x^2 + 22*x + 28)
F_inv = F._inversion_isomorphism()[0]
print(timeit('F._maximal_order_basis()', number=5, repeat=3, preparse=False))
print(timeit('F_inv._maximal_order_basis()', number=1, repeat=1, preparse=False))
```

Before:

```
5 loops, best of 3: 6.53 ms per loop
[Did not finish within a few minutes before I gave up and killed the process]
```

After:

```
5 loops, best of 3: 7.2 ms per loop
1 loop, best of 1: 485 ms per loop
```

So this speeds up the worst-case performance by a lot, but slows down performance in other cases. This function only needs to be computed once per function field, so maybe this is a worthwhile trade-off.

### 📝 Checklist

- [x] The title is concise and informative.
- [x] The description explains in detail what this PR is about.
- [x] I have linked a relevant issue or discussion.
- [x] I have created tests covering the changes.
- [ ] I have updated the documentation and checked the documentation preview.

URL: sagemath#41242
Reported by: Vincent Macri
Reviewer(s): Kwankyu Lee
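As a sanity check on the speedup and slowdown claims, the ratios for `F_inv._maximal_order_basis()` can be recomputed from the best reported times in the three examples above (all values converted to milliseconds, copied from the Before/After blocks):

```python
# Best reported times for F_inv._maximal_order_basis(), in milliseconds.
timings_ms = {
    "Example 1": (6930.0, 477.0),  # 6.93 s -> 477 ms
    "Example 2": (4.55, 49.5),     # 4.55 ms -> 49.5 ms
    "Example 3": (None, 485.0),    # "Before" never finished
}

for name, (before, after) in timings_ms.items():
    if before is None:
        print(f"{name}: 'before' did not finish; 'after' took {after} ms")
    elif before >= after:
        print(f"{name}: {before / after:.1f}x faster")  # 6930/477 ≈ 14.5
    else:
        print(f"{name}: {after / before:.1f}x slower")  # 49.5/4.55 ≈ 10.9
```

This matches the prose: roughly a 14x speedup in Example 1 and roughly a 10x slowdown in Example 2, with Example 3 going from effectively unusable to under half a second.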