[Bug]: LM Studio / Mac / GPTOSS20B gives "Error in outline streaming" #483

@jhagenk

Description

Bug type

Generation bug (slides/content incorrect)

Summary

Hello,

Issue description:
Presenton errors out during generation with an "Error in outline streaming" message when using LM Studio locally as the model backend.

Steps to reproduce

I am using Presenton version 0.6.3-beta on an M1 MacBook Pro with 64 GB of RAM. The specific machine is likely unimportant; the main point is that there is plenty of RAM to run the model.

I am running LM Studio locally with openai/gpt-oss-20b loaded. This model supports tool usage. The server is running, and context length is set to the maximum.

[screenshots]

The local server is running with the following options enabled:
[screenshot]

Presenton is able to connect and find the model backend:
[screenshots]

I upload a single test PDF (~800 KB) for presentation generation:
[screenshots]

It uploads fine, but when moving to the next screen, I'm given an error prompt:

[screenshot]

The logs in LM studio do not indicate any kind of error:

2026-03-30 19:12:44 [DEBUG]
Received request: GET to /v1/models
2026-03-30 19:12:44 [INFO]
Returning {
  "data": [
    {
      "id": "openai/gpt-oss-20b",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "mistralai/magistral-small-2509",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "qwen/qwen3.5-35b-a3b",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "google/gemma-3-4b",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "text-embedding-nomic-embed-text-v1.5",
      "object": "model",
      "owned_by": "organization_owner"
    }
  ],
  "object": "list"
}
2026-03-30 19:15:27 [DEBUG]
Received request: POST to /v1/chat/completions with body {
  "messages": [
    {
      "role": "system",
      "content": "\n You are an expert presentation creator. G... ...b to get latest information about the topic**\n "
    },
    {
      "role": "user",
      "content": "\n Input:\n - User provided conten... ...g onto a friend’s keys when they are drunk\n54\n "
    }
  ],
  "model": "openai/gpt-oss-20b",
  "max_completion_tokens": null,
  "response_format": null,
  "stream": true,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "ResponseSchema",
        "description": "Provide response to the user",
        "strict": true,
        "parameters": {
          "$defs": {
            "SlideOutlineModelWithNSlides": {
              "properties": {
                "content": {
                  "description": "Markdown content for each slide",
                  "maxLength": 300,
                  "minLength": 100,
                  "title": "Content",
                  "type": "string"
                }
              },
              "required": [
                "content"
              ],
              "title": "SlideOutlineModelWithNSlides",
              "type": "object",
              "additionalProperties": false
            }
          },
          "properties": {
            "slides": {
              "description": "List of slide outlines",
              "items": {
                "$ref": "#/$defs/SlideOutlineModelWithNSlides"
              },
              "maxItems": 8,
              "minItems": 8,
              "title": "Slides",
              "type": "array"
            }
          },
          "required": [
            "slides"
          ],
          "title": "PresentationOutlineModelWithNSlides",
          "type": "object",
          "additionalProperties": false
        }
      }
    }
  ],
  "enable_thinking": false
}
2026-03-30 19:15:32 [DEBUG]
[batched_model_kit][INFO]: Loading model from /Users/jhagen/.lmstudio/models/mlx-community/gpt-oss-20b-MXFP4-Q8...
2026-03-30 19:15:39 [DEBUG]
[batched_model_kit][INFO]: BatchedModelKit loaded successfully

During the process described above, Presenton made exactly two calls to the LLM backend; both are shown in the logs above.
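To replay the second request against LM Studio without Presenton, the logged body can be reconstructed as below. The schema names and constraints come straight from the logged payload; the endpoint URL is LM Studio's usual default port and is an assumption here, and the truncated message contents are replaced with placeholders.

```python
import json

# Assumed LM Studio default endpoint (not confirmed in the issue).
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

# Per-slide schema, copied from the logged request.
slide_schema = {
    "properties": {
        "content": {
            "description": "Markdown content for each slide",
            "minLength": 100,
            "maxLength": 300,
            "title": "Content",
            "type": "string",
        }
    },
    "required": ["content"],
    "title": "SlideOutlineModelWithNSlides",
    "type": "object",
    "additionalProperties": False,
}

payload = {
    "messages": [
        {"role": "system", "content": "<system prompt>"},    # truncated in the log
        {"role": "user", "content": "<uploaded PDF text>"},  # truncated in the log
    ],
    "model": "openai/gpt-oss-20b",
    "stream": True,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "ResponseSchema",
                "description": "Provide response to the user",
                "strict": True,
                "parameters": {
                    "$defs": {"SlideOutlineModelWithNSlides": slide_schema},
                    "properties": {
                        "slides": {
                            "description": "List of slide outlines",
                            "items": {"$ref": "#/$defs/SlideOutlineModelWithNSlides"},
                            "minItems": 8,
                            "maxItems": 8,
                            "title": "Slides",
                            "type": "array",
                        }
                    },
                    "required": ["slides"],
                    "title": "PresentationOutlineModelWithNSlides",
                    "type": "object",
                    "additionalProperties": False,
                },
            },
        }
    ],
}

# Dump the body so it can be fed to curl -d @- against LM_STUDIO_URL.
print(json.dumps(payload, indent=2))
```

Replaying this directly (e.g. with curl) would show whether LM Studio streams any tool-call deltas for this strict-schema request, independent of Presenton's client code.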

I tried the same procedure with a different model (qwen/qwen3.5-35b-a3b) and got the same result.
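For context on what "outline streaming" has to consume: with `"stream": true` and a tool definition, an OpenAI-compatible backend emits SSE chunks whose `tool_calls[...].function.arguments` fragments must be concatenated into the final JSON. Presenton's actual parser is not shown in this issue; the sketch below is a generic reconstruction, and one plausible failure mode is a backend (or reasoning-style model such as gpt-oss) that streams only content/reasoning deltas and no tool-call deltas, leaving the assembled arguments empty.

```python
import json

def collect_tool_arguments(sse_lines):
    """Concatenate tool-call argument deltas from OpenAI-style SSE lines."""
    parts = []
    for line in sse_lines:
        # Skip non-data lines and the stream terminator.
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        chunk = json.loads(line[len("data: "):])
        delta = chunk["choices"][0].get("delta", {})
        for call in delta.get("tool_calls", []):
            parts.append(call.get("function", {}).get("arguments", ""))
    return "".join(parts)

# Simulated stream: two argument fragments followed by the terminator.
stream = [
    'data: {"choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\\"slides\\":"}}]}}]}',
    'data: {"choices":[{"delta":{"tool_calls":[{"index":0,"function":{"arguments":"[]}"}}]}}]}',
    "data: [DONE]",
]
print(collect_tool_arguments(stream))  # → {"slides":[]}
```

Capturing LM Studio's raw SSE output for the replayed request and checking whether any `tool_calls` deltas appear at all would narrow down whether the bug is in the backend's streaming or in Presenton's parsing.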

Expected behavior

The outline should be generated without an error.

Actual behavior

It throws an error and gives up on any content generation.

Presenton version

0.6.3-beta (0.6.3-beta)

Operating system

26.4 (25E246)

How are you running Presenton?

Electron Desktop App

LLM provider

Custom OpenAI-compatible API

Logs / screenshots

Included above.

Impact

Severity: Application is not usable with this backend model.

Additional information

No response

Metadata

Labels

bug (Something isn't working)