This example demonstrates how to integrate VoltAgent with Hugging Face's Model Context Protocol (MCP) server, allowing your agent to access and interact with various AI models and services hosted on Hugging Face Spaces.
```bash
npm create voltagent-app@latest -- --example with-hugging-face-mcp
```

Features:

- Hugging Face MCP Integration: Configure VoltAgent to connect with Hugging Face's MCP server
- Access to AI Models: Connect to various models hosted on Hugging Face Spaces
- File Handling: Upload and download files for vision, audio, and other multimodal tasks
- Simple Authentication: Use your Hugging Face token for secure access
Prerequisites:

- Node.js (v20 or later recommended)
- pnpm (or npm/yarn)
- An OpenAI API key (or setup for another supported LLM provider)
- A Hugging Face account and API token (sign up at https://huggingface.co/)
- Create Environment File: create a `.env` file in the project directory:

  ```bash
  # .env
  OPENAI_API_KEY=your_openai_api_key_here
  HUGGING_FACE_TOKEN=your_huggingface_token_here
  ```

  Replace the placeholder values with your actual API keys.
- Get Your Hugging Face Token:
  - Visit https://huggingface.co/settings/tokens
  - Create a new token with read access
  - Copy the token and add it to your `.env` file
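Since a missing or empty key only surfaces later as an authentication error, it can help to validate the environment at startup. A minimal sketch (the `requireEnv` helper is hypothetical, not part of VoltAgent; the variable names match the `.env` file above):

```typescript
// Fail fast on missing configuration before starting the agent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (throws at startup if either key is absent):
// const openAiKey = requireEnv("OPENAI_API_KEY");
// const hfToken = requireEnv("HUGGING_FACE_TOKEN");
```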
Start the agent in development mode:

```bash
pnpm dev
# or npm run dev / yarn dev
```

You should see logs indicating the MCP connection and tool fetching, followed by the standard VoltAgent startup message.
- Open the VoltAgent VoltOps Platform: https://console.voltagent.dev
- Find the agent named `Hugging Face MCP Agent`
- Click on the agent name, then click the chat icon
- Try sending messages that require interaction with Hugging Face models
The agent will use the Hugging Face MCP tools to perform these actions.
In the `src/index.ts` file, you'll see how the MCP configuration is set up:

```typescript
const mcpConfig = new MCPConfiguration({
  servers: {
    "hf-mcp-server": {
      url: "https://huggingface.co/mcp",
      requestInit: {
        headers: { Authorization: `Bearer ${process.env.HUGGING_FACE_TOKEN}` },
      },
      type: "http",
    },
  },
});
```

This configuration connects to the Hugging Face MCP server and authenticates using your Hugging Face token.
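The configuration is then wired into an agent. The following is a sketch based on the layout VoltAgent's MCP examples typically use (the model choice and instruction text are illustrative, and the exact imports may differ in your generated project):

```typescript
import { VoltAgent, Agent, MCPConfiguration } from "@voltagent/core";
import { VercelAIProvider } from "@voltagent/vercel-ai";
import { openai } from "@ai-sdk/openai";

// The MCP configuration from the snippet above.
const mcpConfig = new MCPConfiguration({
  servers: {
    "hf-mcp-server": {
      url: "https://huggingface.co/mcp",
      requestInit: {
        headers: { Authorization: `Bearer ${process.env.HUGGING_FACE_TOKEN}` },
      },
      type: "http",
    },
  },
});

// Fetch the tool definitions the Hugging Face MCP server exposes.
const tools = await mcpConfig.getTools();

const agent = new Agent({
  name: "Hugging Face MCP Agent",
  instructions: "An assistant that can use Hugging Face models via MCP",
  llm: new VercelAIProvider(),
  model: openai("gpt-4o-mini"),
  tools,
});

// Register the agent and start the VoltAgent server.
new VoltAgent({ agents: { agent } });
```

The tools are fetched once at startup, so the agent sees whatever the server exposes at that moment; restart the dev server to pick up changes.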
The Hugging Face MCP server allows you to interact with various AI models hosted on Hugging Face Spaces. Here are some examples of what you can do:
You can generate images using models like FLUX.1-schnell:

```
Can you generate an image of a mountain landscape?
```

Convert text to speech using models like StyleTTS2:

```
Can you convert this text to speech: "Hello, this is a test of the text-to-speech system."
```

Analyze images using vision models:

```
What can you tell me about this image? [Upload an image]
```
For more advanced use cases, you can explore the `mcp-hfspace` project, which provides additional configuration options for working with Hugging Face Spaces through MCP.
Some advanced features include:
- Setting up a working directory for file uploads/downloads
- Connecting to specific Hugging Face Spaces
- Using private Spaces with authentication
- Configuring multiple server instances
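Since `mcp-hfspace` runs as a stdio MCP server, the advanced options above can be sketched as a second server entry in the same `MCPConfiguration`. This is an assumption-heavy sketch: the package name (`@llmindset/mcp-hfspace`), the `--work-dir` flag, and the stdio server shape should all be verified against the `mcp-hfspace` README and the VoltAgent MCP docs before use:

```typescript
import { MCPConfiguration } from "@voltagent/core";

const mcpConfig = new MCPConfiguration({
  servers: {
    "hf-spaces": {
      type: "stdio",
      command: "npx",
      args: [
        "-y",
        "@llmindset/mcp-hfspace",          // assumed package name; check the project README
        "--work-dir=/tmp/mcp-workdir",      // working directory for file uploads/downloads
        "black-forest-labs/FLUX.1-schnell", // specific Spaces to expose as tools
        "styletts2/styletts2",
      ],
    },
  },
});
```

Private Spaces would additionally need your Hugging Face token passed through the server's environment.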
Here are some recommended Hugging Face Spaces that work well with MCP:
- black-forest-labs/FLUX.1-schnell
- shuttleai/shuttle-jaguar
- styletts2/styletts2
- Qwen/QVQ-72B-preview
For more information on using these models and others, visit the Hugging Face Spaces directory.