Pull requests: google-ai-edge/ai-edge-quantizer
Pull requests list
Add Kokoro configurations for automated ai-edge-quantizer releases. (#437, opened Mar 5, 2026 by copybara-service bot)
Enable A16W8 quantization options and update scripts. (#436, opened Mar 5, 2026 by copybara-service bot)
Add script for inference comparison and make prompt formatting optional. (#435, opened Mar 5, 2026 by copybara-service bot)
Add script and tool to merge calibration results. (#434, opened Mar 5, 2026 by copybara-service bot)
Add calibration scripts and XManager launcher for NanoV4 models. (#433, opened Mar 5, 2026 by copybara-service bot)
Implement bash script for AEQ stable release with uv tool. (#431, opened Mar 5, 2026 by copybara-service bot)
Pre-compute the producer/consumer Ops for each Tensor in a dict instead of computing them on the fly. (#425, opened Mar 4, 2026 by copybara-service bot)
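The precomputation in #425 can be sketched as a single pass over the graph that records, for every tensor name, which op produces it and which ops consume it. The `ops` structure below (objects with `inputs`/`outputs` name lists) is a hypothetical stand-in; the real AEQ graph types differ.

```python
from collections import defaultdict


def build_tensor_op_maps(ops):
  """Precompute producer/consumer op indices per tensor name.

  One O(#ops) pass replaces repeated on-the-fly graph scans.
  `ops` is an illustrative list of objects with `.inputs` and
  `.outputs` holding tensor names (an assumption for this sketch).
  """
  producers = {}                  # tensor name -> index of producing op
  consumers = defaultdict(list)   # tensor name -> indices of consuming ops
  for idx, op in enumerate(ops):
    for name in op.outputs:
      producers[name] = idx
    for name in op.inputs:
      consumers[name].append(idx)
  return producers, consumers
```

After the pass, every producer/consumer query is a dict lookup instead of a scan over all ops.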
Use a set[str] instead of a list[str] for tensor_names for faster look-ups with the in operator. (#424, opened Mar 4, 2026 by copybara-service bot)
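The change in #424 exploits a standard Python property: `in` on a list is a linear scan, while `in` on a set is an average O(1) hash lookup. A minimal sketch (the tensor names are made up for illustration):

```python
# `in` on a list is O(n) per query; on a set it is O(1) on average,
# so a one-time set() conversion pays off when many lookups follow.
tensor_names = ["serving_default_input:0", "Conv1/weights", "logits"]
tensor_name_set = set(tensor_names)  # one-time O(n) conversion


def is_model_tensor(name: str) -> bool:
  # Hash lookup instead of a linear scan over all names.
  return name in tensor_name_set
```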
Clean up tfl_flatbuffer_utils.py and flatbuffer_utils.py a bit. (#423, opened Mar 4, 2026 by copybara-service bot)
Don't rely on the Interpreter when fixing SignatureDefs in _update_signature_defs. (#422, opened Mar 4, 2026 by copybara-service bot)
Reduce the number of data copies in numpy by: (#421, opened Mar 4, 2026 by copybara-service bot)
Change how ModelModifier builds the result, i.e. (#420, opened Mar 4, 2026 by copybara-service bot)
Try to load the TFLite file with mmap, and fall back to gfile.Open if this fails. (#419, opened Mar 3, 2026 by copybara-service bot)
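The mmap-with-fallback pattern from #419 can be sketched as below. `gfile.Open` in the PR refers to TensorFlow's `tf.io.gfile`; a plain `open()` stands in here so the sketch has no TF dependency, and the actual AEQ code path may differ.

```python
import mmap


def load_tflite_bytes(path: str) -> bytes:
  """Try to memory-map the model file; fall back to a plain read.

  mmap avoids an extra buffer copy for large local files, but can
  fail (e.g. empty files, or filesystems that don't support it),
  in which case we fall back to an ordinary read.
  """
  try:
    with open(path, "rb") as f:
      with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        return bytes(mm)  # real code might keep the mmap alive instead
  except (OSError, ValueError):  # ValueError: cannot mmap an empty file
    with open(path, "rb") as f:
      return f.read()
```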
Add weightonly_wi2_afp32 and static_wi2_ai8 quantization schemes for BatchMatMul and FC layers. Also add static_wi4_ai8 scheme for BatchMatMul. (#417, opened Mar 3, 2026 by copybara-service bot)
Add support for QUANTIZE from INT8/UINT8 to INT16. (#407, opened Feb 26, 2026 by copybara-service bot)
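Requantization as in #407 re-encodes values from one affine quantization (scale, zero point) into another: recover the real value from the int8 encoding, then round into the int16 encoding. The sketch below shows only the textbook affine formula; the actual AEQ kernel may use fixed-point arithmetic instead.

```python
import numpy as np


def requantize_i8_to_i16(q, in_scale, in_zero_point,
                         out_scale, out_zero_point=0):
  """Re-encode int8 values as int16 via affine quantization params.

  real = in_scale * (q - in_zero_point)
  q16  = round(real / out_scale) + out_zero_point, clamped to int16.
  Illustrative only; not the AEQ op implementation.
  """
  real = in_scale * (np.asarray(q, dtype=np.float64) - in_zero_point)
  out = np.round(real / out_scale) + out_zero_point
  return np.clip(out, -32768, 32767).astype(np.int16)
```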
Update Gemma3n 2b quantization colab to utilize RE-QUANTIZATION. (#377, opened Feb 13, 2026 by copybara-service bot)
Add max_hadamard_size parameter for Hadamard rotations. (#374, opened Feb 11, 2026 by copybara-service bot)
Centralize the creation of Hadamard matrices, allow for a bit more than just powers of 2. (#373, opened Feb 10, 2026 by copybara-service bot)
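For background on #373: the classic Sylvester construction only yields Hadamard matrices whose size is a power of 2, which is presumably the limitation the PR relaxes. A minimal sketch of the power-of-2 case (how AEQ generalizes beyond it is not shown here and is an assumption left open):

```python
import numpy as np


def sylvester_hadamard(n: int) -> np.ndarray:
  """Build an n x n Hadamard matrix for n a power of 2.

  Repeated Kronecker products of the 2x2 base matrix [[1,1],[1,-1]]
  give H with H @ H.T == n * I. Sizes that are not powers of 2 need
  other constructions, which this sketch does not cover.
  """
  if n < 1 or n & (n - 1):
    raise ValueError("n must be a power of 2")
  h = np.array([[1]])
  base = np.array([[1, 1], [1, -1]])
  while h.shape[0] < n:
    h = np.kron(h, base)
  return h
```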
Add progress bars and progress report to quantizer library. (#370, opened Feb 3, 2026 by copybara-service bot)
Keep tensorflow installed in Colab and nightly workflows. (#358, opened Dec 22, 2025 by copybara-service bot)
Throw error if a single tensor is quantized multiple ways during static quantization. (#340, opened Oct 28, 2025 by copybara-service bot)
Update AI Edge Torch to use BLOCKWISE_XX interface in AEQ to achieve blockwise quantization. (#339, opened Oct 28, 2025 by copybara-service bot)