VoltAgent is an open source TypeScript framework for building and orchestrating AI agents.
Escape the limitations of no-code builders and the complexity of starting from scratch.



VoltAgent: Working Memory Example

This example shows how to enable and use Working Memory in a VoltAgent agent. Working Memory lets an agent persist important facts at either conversation or user scope, and it ships built‑in tools the model can call to read and update this context during a chat.
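To make the two scopes concrete, here is a minimal, self-contained sketch in plain TypeScript (not VoltAgent APIs; the `memoryKey` helper and key formats are hypothetical, invented only to illustrate how the scopes partition stored facts):

```typescript
// Illustrative only: a toy in-memory store showing how the two scopes
// partition context. In VoltAgent the storage adapter (LibSQL in this
// example) handles persistence; the key shapes below are assumptions.
type Scope = "conversation" | "user";

const store = new Map<string, Record<string, unknown>>();

// Conversation scope keys by conversationId (facts stay local to one chat);
// user scope keys by userId (facts follow the user across chats).
function memoryKey(scope: Scope, userId: string, conversationId: string): string {
  return scope === "conversation" ? `conv:${conversationId}` : `user:${userId}`;
}

// Store a fact during conversation c1 under user scope.
store.set(memoryKey("user", "u1", "c1"), { preferredTone: "casual" });

// A brand-new conversation (c2) still recalls user-scoped facts for u1,
// while conversation-scoped lookups for c2 come back empty.
const recalled = store.get(memoryKey("user", "u1", "c2"));
```
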

Try Example

npm create voltagent-app@latest -- --example with-working-memory

Highlights

  • Structured context via a Zod schema (JSON) or a Markdown template
  • Auto‑injected system instructions guiding the model to use context
  • Built‑in tools: get_working_memory, update_working_memory, clear_working_memory
  • LibSQL storage with optional semantic search via embeddings
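When structured JSON is not needed, the Zod schema can be swapped for a Markdown template whose sections the model fills in as it learns facts. A sketch of that alternative configuration (the `template` option name follows the VoltAgent working-memory docs; verify it against your installed version):

```typescript
import { Memory } from "@voltagent/core";
import { LibSQLMemoryAdapter } from "@voltagent/libsql";

// Markdown-template variant: the model fills in the sections below
// instead of writing JSON that matches a Zod schema.
const memory = new Memory({
  storage: new LibSQLMemoryAdapter(),
  workingMemory: {
    enabled: true,
    scope: "conversation",
    template: `## User Profile
- Preferred tone:
## Preferences
- Likes:`,
  },
});
```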

Snippet

import { Agent, Memory, VoltAgent } from "@voltagent/core";
import { LibSQLMemoryAdapter, LibSQLVectorAdapter } from "@voltagent/libsql";
import { honoServer } from "@voltagent/server-hono";
import { z } from "zod";

// Shape of the structured working memory the model reads and updates.
const workingMemorySchema = z.object({
  userProfile: z.object({ preferredTone: z.enum(["casual", "formal", "technical"]).optional() }),
  preferences: z.object({ likes: z.array(z.string()).optional() }),
});

const memory = new Memory({
  storage: new LibSQLMemoryAdapter(), // conversation history persisted in LibSQL
  embedding: "openai/text-embedding-3-small", // embeddings for semantic search
  vector: new LibSQLVectorAdapter(), // vector store backing that search
  workingMemory: { enabled: true, scope: "conversation", schema: workingMemorySchema },
});

const agent = new Agent({ name: "Working Memory Agent", model: "openai/gpt-4o-mini", memory });

new VoltAgent({ agents: { agent }, server: honoServer({ port: 3141 }) });

Run Locally

  1. Copy .env.example to .env and set OPENAI_API_KEY.
  2. Install deps and start:
pnpm i
pnpm dev

Then POST messages to http://localhost:3141/api/agent/generateText with userId and conversationId to see the model use the working‑memory tools.
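A request body might look like the following; the exact field names depend on your VoltAgent version (`input` and `options` here are assumptions — check the API docs your running server exposes):

```json
{
  "input": "I prefer a casual tone and I like TypeScript.",
  "options": {
    "userId": "user-1",
    "conversationId": "conv-1"
  }
}
```

On follow-up requests with the same userId and conversationId, the agent can call get_working_memory to recall the stored preferences.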