Fix AttributeError in AssistantToTargetTranslator.unmap_input_ids with cross-vocab models #45320
Merged
zucchini-nlp merged 4 commits into huggingface:main on Apr 10, 2026
Conversation
`map_input_embeddings` is only initialized when `_suppress_input_ids` is non-empty (lines 723-740), but `unmap_input_ids()` only checked `assistant_prune_lm_head`. This caused an `AttributeError` when using assisted generation with models that have different vocab sizes but share the same tokenizer family (e.g., Qwen2.5-7B + Qwen2.5-0.5B). Added a `len(self._suppress_input_ids) > 0` check to match the initialization guard.
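For context, here is a minimal, runnable sketch of the guard the fix describes. The class below is an illustrative stand-in for `AssistantToTargetTranslator`, not the actual source: the attribute names follow the description above, and treating `map_input_embeddings` as a tensor lookup table is an assumption.

```python
import torch

class TranslatorSketch:
    """Illustrative stand-in for AssistantToTargetTranslator (not the real class)."""

    def __init__(self, suppress_input_ids, prune_lm_head):
        self._suppress_input_ids = suppress_input_ids
        self.assistant_prune_lm_head = prune_lm_head
        if len(self._suppress_input_ids) > 0:
            # Only built when some target ids are suppressed, mirroring the
            # guarded initialization the PR points to (lines 723-740).
            self.map_input_embeddings = torch.arange(32_000)

    def unmap_input_ids(self, input_ids: torch.Tensor) -> torch.Tensor:
        # The fix: also require a non-empty _suppress_input_ids, matching the
        # initialization guard, before touching map_input_embeddings.
        if self.assistant_prune_lm_head and len(self._suppress_input_ids) > 0:
            return self.map_input_embeddings[input_ids]
        return input_ids

# With no suppressed ids, the old code path dereferenced a missing attribute;
# the guarded version simply returns the ids unchanged.
t = TranslatorSketch(suppress_input_ids=[], prune_lm_head=True)
print(t.unmap_input_ids(torch.tensor([1, 2, 3])))
```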
zucchini-nlp approved these changes on Apr 9, 2026
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Contributor (Author)
Thanks for the review. Learned a lot doing this.
sirzechs66 pushed a commit to sirzechs66/transformers that referenced this pull request on Apr 18, 2026
Fix AttributeError in AssistantToTargetTranslator.unmap_input_ids with cross-vocab models (huggingface#45320)

* Fix AssistantToTargetTranslator crash with cross-vocab models

  `map_input_embeddings` is only initialized when `_suppress_input_ids` is non-empty (lines 723-740), but `unmap_input_ids()` only checked `assistant_prune_lm_head`. This caused an `AttributeError` when using assisted generation with models that have different vocab sizes but share the same tokenizer family (e.g., Qwen2.5-7B + Qwen2.5-0.5B). Added a `len(self._suppress_input_ids) > 0` check to match the initialization guard.

* Add comment explaining cross-vocab guard in unmap_input_ids

* Add Comment Explaining Cross-Vocab guard in unmap_input_ids

---------

Co-authored-by: Raushan Turganbay <raushan@huggingface.co>
What does this PR do?
Fixes a crash in assisted generation when using model pairs with different vocabulary sizes but the same tokenizer family (e.g., Qwen2.5-7B + Qwen2.5-0.5B).
`map_input_embeddings` is only initialized when `len(self._suppress_input_ids) > 0` (line 723), but `unmap_input_ids()` only checked `self.assistant_prune_lm_head`. This caused an `AttributeError` when the assistant vocab is a subset of the target vocab (no suppressed IDs) but `assistant_prune_lm_head` is enabled.

The fix adds the same `len(self._suppress_input_ids) > 0` guard to `unmap_input_ids()`, matching the initialization condition.

Reproduction
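The original reproduction snippet is not preserved here; the following is a minimal sketch of the failing setup, assuming the public assisted-generation entry point (`generate` with `assistant_model`, plus `tokenizer` and `assistant_tokenizer` to route through `AssistantToTargetTranslator`). The prompt and `max_new_tokens` are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Target and assistant share the Qwen2.5 tokenizer family but have
# different vocab sizes, which exercised the broken unmap_input_ids() path.
target_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
assistant_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
target = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")
assistant = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

inputs = target_tok("The quick brown fox", return_tensors="pt")
# Passing both tokenizers enables the cross-vocab assisted-generation path.
out = target.generate(
    **inputs,
    assistant_model=assistant,
    tokenizer=target_tok,
    assistant_tokenizer=assistant_tok,
    max_new_tokens=20,
)
print(target_tok.decode(out[0], skip_special_tokens=True))
```

Per the description above, this call raised an `AttributeError` inside `unmap_input_ids()` before the fix; with the added guard it falls through to returning the ids unchanged.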