
Recommend clients expose resource read to models#2527

Open
pja-ant wants to merge 4 commits into main from pja/resources-model-read-should

Conversation

@pja-ant
Contributor

@pja-ant pja-ant commented Apr 6, 2026

Adds a SHOULD to the draft Resources spec recommending that clients give the language model a way to read a resource by URI (with an optional MAY for listing).

Rationale

Resources are currently described as application-driven, but the spec offers no guidance on whether the model itself should be able to pull resource content. In practice this leaves many clients without a model-facing read path, limiting how useful resources are once a URI shows up in context. This change provides clearer guidance on how resources should be implemented in clients to make them more broadly useful.

The recommendation is intentionally mechanism-agnostic ("such as a tool") rather than referencing MCP tools specifically, since the mechanism is typically a host-level LLM tool rather than an MCP server tool.
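To make the "such as a tool" framing concrete, here is a minimal sketch of what a host-level read tool could look like. Everything in it (the `read_resource` tool name, the schema, the `session.read_resource` helper) is illustrative, not part of the proposed spec text, which deliberately does not mandate a mechanism:

```python
# Hypothetical sketch of a host-level "read resource" tool a client might
# expose to the model. All names here are illustrative; the spec change
# intentionally leaves the mechanism up to the client.
READ_RESOURCE_TOOL = {
    "name": "read_resource",
    "description": "Read the contents of an MCP resource by its URI.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "uri": {"type": "string", "description": "URI of the resource to read"},
        },
        "required": ["uri"],
    },
}

def handle_read_resource(session, uri: str):
    """Forward a model-initiated read to the server's resources/read request.

    `session` stands in for whatever client-side object wraps the MCP
    connection; only the forwarding shape matters for this sketch.
    """
    return session.read_resource(uri)
```

The point is that the tool lives in the host, not in any MCP server: the host advertises it to the model alongside server tools and forwards calls to the relevant server's `resources/read`.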

pja-ant added 2 commits April 6, 2026 17:44
Adds SHOULD guidance that clients provide a mechanism (e.g., a tool)
for the language model to read resources by URI, with optional listing,
so models can pull resource context directly rather than relying solely
on application-driven inclusion.

@pja-ant pja-ant marked this pull request as ready for review April 6, 2026 16:56
@pja-ant pja-ant requested a review from a team as a code owner April 6, 2026 16:56
Contributor

@SamMorrowDrums SamMorrowDrums left a comment


So excited, I really hope this gets approved.

One note though: semantically this would probably need to be treated the same as the open-world hint if there are no resource annotations.

I thought model loading would be the case from day 1, and to be honest I also thought that models would easily be able to complete resource templates. For GitHub that could have meant access to any file at any revision of any repo without the need for a get-file tool.

The annotations/open-world risk should definitely be considered though. Allowing arbitrary model-initiated reads without any mechanism for annotations, or guidance for how clients should handle the response, could definitely be a prompt injection risk.

Comment thread docs/specification/draft/server/resources.mdx Outdated
@localden
Contributor

localden commented Apr 9, 2026

Looking closer at this proposal from a security perspective (and somewhat echoing @SamMorrowDrums), I am more worried about this giving unbounded resources/read capabilities on arbitrary URIs.

What if the returned URI is file:///~/.ssh/id_rsa or https://attacker-controlled.example.com/foo?

We might need to call out that model-initiated resource reads carry the same trust and safety considerations as tool invocations; that is, we should treat model-facing resource reads as open-world by default unless there is some kind of scoping guarantee.
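One way a client could avoid treating every model-initiated read as open-world is a scoping check before forwarding the request. The sketch below assumes two made-up policy choices (block `file://` outright, allow only URIs the server has actually advertised via `resources/list`); it is an illustration of the concern above, not proposed normative text:

```python
from urllib.parse import urlparse

def is_model_read_allowed(uri: str, listed_uris: set[str]) -> bool:
    """Illustrative policy gate for model-initiated resource reads.

    Assumed policy: block local filesystem URIs outright, and only allow
    other URIs the server has previously advertised via resources/list.
    A real client would also want user confirmation and auditing, per the
    tools security guidance this PR mirrors.
    """
    scheme = urlparse(uri).scheme
    if scheme == "file":
        return False  # e.g. blocks file:///~/.ssh/id_rsa
    return uri in listed_uris
```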

Mirrors the client SHOULD guidance from the tools security section so
that model-initiated resource reads carry the same confirmation,
visibility, validation, and audit expectations as tool invocations.
@pja-ant
Contributor Author

pja-ant commented Apr 9, 2026

Added some additional security notes.

Comment thread docs/specification/draft/server/resources.mdx
@dend dend added the enhancement (New feature or request) and spec labels Apr 14, 2026
@olaservo
Member

olaservo commented Apr 20, 2026

Out of curiosity, I investigated several open-source MCP hosts to see which already expose a model-facing resources/read path, as a data point on how well-supported this SHOULD is by implementations in the wild. Here are a few notable ones:

| Client | Tool name(s) | Notes |
| --- | --- | --- |
| codex | `read_mcp_resource`, `list_mcp_resources`, `list_mcp_resource_templates` | Handler calls `session.read_resource()`. Not mentioned in user-facing docs; an internal steer tells the model to prefer `tool_search`. |
| gemini-cli | `read_mcp_resource`, `list_mcp_resources` | Gated on ≥1 connected server exposing a resource. Documented for end users (`docs/tools/mcp-resources.md`) and covered by integration tests. |
| goose | `read_resource`, `list_resources` | Registered only when an enabled extension reports `ServerCapabilities::resources`. Documented. |
| fast-agent | `get_resource` (and `list_resources`) | `get_resource` multiplexes bundled (`internal://`) and MCP URIs behind one tool. Relevant to the "mechanism is typically a host-level LLM tool" framing in the PR rationale. |

Examples of popular open-source clients that don't yet follow this pattern: in VS Code and opencode, a readResource RPC exists, but it's gated to user-initiated attachment.

And for others who aren't already aware, Claude Code exposes a model-facing ReadMcpResourceTool that takes server and uri parameters: https://code.claude.com/docs/en/tools-reference.

Comment thread docs/specification/draft/server/resources.mdx
@connor4312
Contributor

connor4312 commented Apr 20, 2026

We actually support resource reading in VS Code as well, through the standard readFile/listDirectory tools (we have a filesystem provider for MCP resource schemes, so although it all looks like it goes through 'fs', it can go to an MCP server under the hood). Similar to fast-agent, we namespace URIs we get from MCP servers, so that a URI is self-identifying as to which MCP server it came from.
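For readers unfamiliar with the idea, such namespacing could look something like the round-trip below. The `mcp-resource://` scheme and the percent-encoding choice are invented for illustration; they are not what VS Code or fast-agent actually do:

```python
from urllib.parse import quote, unquote

def namespace_uri(server_name: str, uri: str) -> str:
    """Wrap a server-provided URI so it identifies its originating server.

    The mcp-resource:// scheme here is hypothetical; any client-chosen
    scheme that survives a round-trip would work.
    """
    return f"mcp-resource://{server_name}/{quote(uri, safe='')}"

def resolve_uri(namespaced: str) -> tuple[str, str]:
    """Recover (server_name, original_uri) from a namespaced URI."""
    rest = namespaced.removeprefix("mcp-resource://")
    server_name, _, encoded = rest.partition("/")
    return server_name, unquote(encoded)
```

Percent-encoding the inner URI keeps its own scheme and slashes from being confused with the wrapper's path structure, so the mapping is unambiguous in both directions.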


6 participants