# TypeScript Getting Started
This guide walks you through a minimal TypeScript agent using Vercel AI SDK, then adds incremental memory-service integration. The goal is to keep code changes small while unlocking memory features one step at a time.
Make sure you have completed TypeScript Dev Setup first.
## Step 1: Start with a Minimal Agent
Starting checkpoint: `typescript/examples/vecelai/doc-checkpoints/01-basic-chat`
Create a minimal Vercel AI SDK agent and expose it over HTTP with Express (no memory-service imports yet):

```typescript
import express from "express";
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const app = express();
app.use(express.text({ type: "*/*" }));

function openAIBaseUrl(): string | undefined {
  const raw = process.env.OPENAI_BASE_URL;
  if (!raw) {
    return undefined;
  }
  const trimmed = raw.replace(/\/$/, "");
  return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}

app.get("/ready", (_req, res) => {
  res.json({ status: "ok" });
});

app.post("/chat", async (req, res) => {
  const userMessage = String(req.body ?? "").trim();
  if (!userMessage) {
    res.status(400).send("message is required");
    return;
  }
  const provider = createOpenAI({
    baseURL: openAIBaseUrl(),
    apiKey: process.env.OPENAI_API_KEY ?? "not-needed-for-tests",
  });
  const model = provider.chat(process.env.OPENAI_MODEL ?? "mock-gpt-markdown");
  const result = await generateText({
    model,
    messages: [
      {
        role: "system",
        content: "You are a TypeScript memory-service demo agent.",
      },
      { role: "user", content: userMessage },
    ],
  });
  const text = result.text;
  res.type("text/plain").send(text);
});

const port = Number(process.env.PORT ?? 9090);
app.listen(port, "0.0.0.0", () => {
  console.log(`listening on ${port}`);
});
```

**What this shows:** A baseline `POST /chat` handler that calls the Vercel AI SDK directly and returns plain text.

**Why needed:** This is the control point for later checkpoints, so you can clearly see each memory-service delta.
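The URL handling in `openAIBaseUrl` is easy to miss: it strips a trailing slash and appends `/v1` only when it is not already present. A quick sketch of the same logic as a pure function (hypothetical name `normalizeBaseUrl`; the app reads the value from `OPENAI_BASE_URL` instead of a parameter):

```typescript
// Sketch: the same trailing-slash / "/v1" normalization as openAIBaseUrl,
// extracted as a pure function so it can be exercised without env vars.
function normalizeBaseUrl(raw: string | undefined): string | undefined {
  if (!raw) {
    return undefined;
  }
  const trimmed = raw.replace(/\/$/, "");
  // Append "/v1" only when it is not already the final path segment.
  return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}

console.log(normalizeBaseUrl("http://localhost:11434"));     // http://localhost:11434/v1
console.log(normalizeBaseUrl("http://localhost:11434/v1/")); // http://localhost:11434/v1
console.log(normalizeBaseUrl(undefined));                    // undefined
```

This lets you point the agent at any OpenAI-compatible endpoint without worrying about whether the configured URL already ends in `/v1`.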
Add Vercel AI SDK dependencies:

```json
{
  "name": "01-basic-chat",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "start": "exec node dist/app.js",
    "dev": "tsx src/app.ts",
    "prettier": "prettier -w \"src/**/*.{ts,tsx,js}\""
  },
  "dependencies": {
    "@ai-sdk/openai": "^2.0.0",
    "ai": "^5.0.0",
    "express": "^4.21.2"
  },
  "devDependencies": {
    "@types/express": "^5.0.1",
    "@types/node": "^22.13.0",
    "prettier": "^3.8.1",
    "tsx": "^4.19.3",
    "typescript": "^5.8.2"
  }
}
```

**What this shows:** The minimum dependencies for the baseline app.

**Why needed:** Later checkpoints add memory-service integration on top of this dependency set.
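The `build` and `start` scripts above assume a `tsconfig.json` that compiles `src/` into `dist/`. A minimal sketch of such a config (the checkpoint's actual compiler options may differ):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}
```

`module: "NodeNext"` matches the `"type": "module"` setting in `package.json`, so the compiled output runs as native ES modules under Node.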
Run the app:

```shell
cd typescript/examples/vecelai/doc-checkpoints/01-basic-chat
npm install
npm run dev
```

Test it with curl:

```shell
curl -NsSfX POST http://localhost:9090/chat \
  -H "Content-Type: text/plain" \
  -d "Hi, who are you?"
```

Example output:

```
I am a TypeScript memory-service demo agent.
```

## Step 2: Enable Memory-Backed Conversations
Starting checkpoint: `typescript/examples/vecelai/doc-checkpoints/02-with-memory`
Checkpoint 02 is built from checkpoint 01 with three additions.
### Import the context memory helper

```typescript
import {
  memoryServiceConfigFromEnv,
  withMemoryService,
} from "@chirino/memory-service-vercelai";

const app = express();
app.use(express.text({ type: "*/*" }));
```

**What changed:** The checkpoint imports `memoryServiceConfigFromEnv(...)` and `withMemoryService(...)` from `@chirino/memory-service-vercelai`, then creates a shared `memoryServiceConfig`.

**Why needed:** `memoryServiceConfigFromEnv(...)` keeps env lookups in one explicit place, while `withMemoryService(...)` manages memory/history persistence around your model call so app code stays focused on chat behavior.

The endpoint also forwards `Authorization: Bearer ...` to Memory Service calls so access control remains user-scoped.
### Change the endpoint to accept a conversationId

```typescript
app.post("/chat/:conversationId", async (req, res) => {
  const conversationId = req.params.conversationId;
  const userMessage = String(req.body ?? "").trim();
  if (!userMessage) {
    res.status(400).send("message is required");
    return;
  }
```

**What changed:** The route changes from `POST /chat` to `POST /chat/{conversationId}`.

**Why:** The conversation ID in the URL becomes the storage key for context entries. Calls with the same ID reuse that context; a new ID starts a fresh context.
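The keying behavior can be pictured with a plain in-memory map (an illustration only; the real storage lives in Memory Service, not in the app process):

```typescript
// Illustration: context entries keyed by conversation ID. Calls with the
// same ID see the accumulated messages; a new ID starts with an empty list.
type Message = { role: "user" | "assistant"; content: string };

const store = new Map<string, Message[]>();

function contextFor(conversationId: string): Message[] {
  let messages = store.get(conversationId);
  if (!messages) {
    messages = [];
    store.set(conversationId, messages);
  }
  return messages;
}

contextFor("conv-a").push({ role: "user", content: "Hi, I'm Hiram" });
contextFor("conv-a").push({ role: "assistant", content: "Hi Hiram!" });

console.log(contextFor("conv-a").length); // 2 — same ID reuses context
console.log(contextFor("conv-b").length); // 0 — new ID starts fresh
```

Because the key comes from the URL, clients control conversation scope: reuse an ID to continue a conversation, or mint a fresh one (e.g. a UUID) to start over.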
### Use context memory in the chat flow

```typescript
const authorization = req.header("authorization") ?? null;
const result = await withMemoryService(
  {
    ...memoryServiceConfig,
    conversationId,
    authorization,
    memoryContentType: "vercelai",
  },
  async (contextMemory) => {
    contextMemory.append({ role: "user", content: userMessage });
    const generated = await generateText({
      model,
      messages: [
        {
          role: "system",
          content: "You are a TypeScript memory-service demo agent.",
        },
        ...contextMemory.get(),
      ],
    });
    contextMemory.append({ role: "assistant", content: generated.text });
    return generated;
  },
);
```

**What changed:** The handler wraps generation in `withMemoryService(...)`, reads prior messages via `contextMemory.get()`, and appends user/assistant messages to context memory.

**Why needed:** `withMemoryService(...)` keeps model context in sync with Memory Service and handles the persistence strategy for you.
You can also replace memory explicitly:

```typescript
contextMemory.clear();
contextMemory.set([
  { role: "user", content: userMessage },
  { role: "assistant", content: assistantText },
]);
```
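As a mental model, the context-memory operations behave like a simple message buffer. The sketch below is a local stand-in, not the library's implementation (the real helper also persists the buffer to Memory Service):

```typescript
// Local sketch of the append/get/set/clear surface used above.
type ChatMessage = { role: string; content: string };

class SketchContextMemory {
  private messages: ChatMessage[] = [];

  // get() returns the accumulated messages to feed the model call.
  get(): ChatMessage[] {
    return [...this.messages];
  }

  // append() adds one turn to the end of the context.
  append(message: ChatMessage): void {
    this.messages.push(message);
  }

  // set() replaces the whole context in one call.
  set(messages: ChatMessage[]): void {
    this.messages = [...messages];
  }

  // clear() empties the context.
  clear(): void {
    this.messages = [];
  }
}

const memory = new SketchContextMemory();
memory.append({ role: "user", content: "Hi" });
memory.append({ role: "assistant", content: "Hello!" });
console.log(memory.get().length); // 2
memory.clear();
memory.set([{ role: "user", content: "Hi again" }]);
console.log(memory.get().length); // 1
```

`append` is the common path for normal turns; `clear` plus `set` is useful when you want to rewrite context wholesale, e.g. after summarizing a long conversation.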
Make sure Memory Service and Keycloak are running, then define a helper to get a user token:

```shell
function get-token() {
  curl -sSfX POST http://localhost:8081/realms/memory-service/protocol/openid-connect/token \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "client_id=memory-service-client" \
    -d "client_secret=change-me" \
    -d "grant_type=password" \
    -d "username=bob" \
    -d "password=bob" \
    | jq -r '.access_token'
}
```

Test it with curl:

```shell
curl -NsSfX POST http://localhost:9090/chat/7f6c3b21-9d6f-4f2b-bf19-1f3f2f0a8e11 \
  -H "Authorization: Bearer $(get-token)" \
  -H "Content-Type: text/plain" \
  -d "Hi, I'm Hiram, who are you?"
```

Example output:

```
Hi Hiram! I am a TypeScript memory-service demo agent.
```

Ask a follow-up question in the same conversation:

```shell
curl -NsSfX POST http://localhost:9090/chat/7f6c3b21-9d6f-4f2b-bf19-1f3f2f0a8e11 \
  -H "Authorization: Bearer $(get-token)" \
  -H "Content-Type: text/plain" \
  -d "Who am I?"
```

Example output:

```
You are Hiram.
```

## Next Steps
Continue to Conversation History to record user/AI turns in the history channel.