Pull requests: huggingface/transformers

fix: KV cache sharing (#45328, opened Apr 8, 2026 by RyanMullins; 5 of 6 tasks)
[docs] modular transformers (#45327, opened Apr 8, 2026 by stevhliu; draft, 3 tasks)
feat[vLLM × v5]: Add vLLM compatibility for audio models (#45326, opened Apr 8, 2026 by harshaljanjani; 3 tasks done)
Gemma4 resizing per layer inputs (#45324, opened Apr 8, 2026 by zucchini-nlp)
[CB] Fix capture of max_seqlen (#45323, opened Apr 8, 2026 by remi-or; draft)
fix: dont download artifacts from the test hub (#45319, opened Apr 8, 2026 by tarekziade)
Fix softmaxing router logits (#45315, opened Apr 8, 2026 by Rocketknight1)
resize_token_embeddings does not effect to output_embeddings (#45311, opened Apr 8, 2026 by KoichiYasuoka; 2 of 6 tasks)
docs maintenance for transformers repository 979e8 (#45301, opened Apr 7, 2026 by sahildando; 6 tasks done)
Fix Nemotron-H: add mlp layer type support (#45300, opened Apr 7, 2026 by w4nderlust; 2 of 6 tasks)
Fix mutable default arguments in quantization config classes (#45297, opened Apr 7, 2026 by EhteshamSid; 2 tasks done)
Add GGUF support to Gemma4 (31B & 26B-A4B) text (#45296, opened Apr 7, 2026 by UsamaKenway; 4 of 6 tasks)
feat: add Gemma4ForSequenceClassification (#45294, opened Apr 7, 2026 by jesperschlegel)
Less unnecessary RoPE warnings (#45289, opened Apr 7, 2026 by zucchini-nlp)