Replies: 2 comments
- If your issue is not resolved in the latest version, I suggest raising a new issue in the issue tracker.
- The symptom pattern suggests the model is generating output correctly, since the vllm logs show content, but something is failing between generation and UI rendering. That usually points to a response-shape mismatch, a streaming parse issue, or the frontend expecting fields that this model integration does not populate the same way. It would help to inspect the actual API response in the browser network tab and compare it against a known working model. If the payload looks right there, the next place to check is the frontend console for render-time errors or schema assumptions tied to the older qwen entries.
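If you capture the raw response from the network tab, a minimal sanity check on its shape might look like the sketch below. It assumes an OpenAI-style chat completion payload (which vLLM's OpenAI-compatible server emits); the helper name and the exact checks are illustrative, not part of ragflow:

```python
def check_openai_chat_payload(payload: dict) -> list[str]:
    """Return a list of shape problems found in an OpenAI-style
    chat completion payload; an empty list means it looks renderable."""
    problems = []
    choices = payload.get("choices")
    if not choices:
        problems.append("missing or empty 'choices'")
        return problems
    msg = choices[0].get("message", {})
    if "content" not in msg:
        problems.append("choices[0].message has no 'content' field")
    elif not msg["content"]:
        problems.append("choices[0].message.content is empty")
    return problems

# Example: generation succeeded but 'content' came back empty,
# which would render as a blank answer in the UI.
sample = {"choices": [{"message": {"role": "assistant", "content": ""}}]}
print(check_openai_chat_payload(sample))
```

If the payload passes a check like this against a known working model's response, the mismatch is more likely in the frontend's model-specific handling than in the API layer.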
- I added the qwen3.5-35b-a3b model to ragflow/conf/llm_factories.json, using the qwen3-vl entry as a format reference. After deploying the model with vllm and adding it, I changed the knowledge base model to qwen3.5-35b-a3b. However, no content is displayed in the Q&A interface. The vllm inference logs for embedding, rerank, and qwen3.5-35b-a3b all show updates, and the qwen3.5-35b-a3b logs clearly show output content (output logging is enabled). How can I resolve this display issue?