LangGraph Indexing and Search
This guide continues from Conversation History and shows how to add search indexing to history entries and expose a search API for frontend applications.
New to indexing and search concepts? Read Indexing & Search first to understand
indexedContent, redaction, and search types.
Prerequisites
Starting checkpoint: This guide starts from python/examples/langgraph/doc-checkpoints/03-with-history
Make sure you’ve completed the LangGraph Conversation History guide first.
Also complete Step 2 in LangGraph Dev Setup (build local memory-service-langchain wheel + UV_FIND_LINKS); this is temporary until the package is released.
How Search Indexing Works
Conversation history content is encrypted at rest, so text search needs a separate cleartext index field.
Checkpoint 07 configures MemoryServiceHistoryMiddleware with an indexed_content_provider. The provider transforms each history message into indexedContent before the entry is written.
This gives you a redaction point. You can remove sensitive data from indexedContent while still storing the full encrypted message content.
Add an Indexed Content Provider
The simplest provider is pass-through:
```python
checkpointer = MemoryServiceCheckpointSaver.from_env()

def pass_through_indexed_content(text: str, role: str) -> str:
    del role
    return text

history_middleware = MemoryServiceHistoryMiddleware.from_env(
    indexed_content_provider=pass_through_indexed_content,
)
```

What changed: A pass_through_indexed_content(text, role) function is defined and passed to MemoryServiceHistoryMiddleware.from_env(indexed_content_provider=pass_through_indexed_content).

Why: Passing indexed_content_provider to MemoryServiceHistoryMiddleware activates indexing: the function is called for every message before the history entry is written, and its return value becomes indexedContent. A pass-through provider indexes everything as-is. You can swap it for a redaction function to strip sensitive data from the search index while keeping the full encrypted message in storage.
Security warning
indexedContent is not encrypted. Redact or minimize sensitive values before returning indexed text.
For production, replace pass-through with redaction logic. Example:
```python
import re

def redacting_indexed_content(text: str, role: str) -> str:
    del role
    # Mask 16-digit card-like numbers in the search index.
    return re.sub(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b", "[REDACTED]", text)
```
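You can sanity-check the redaction before wiring it into the middleware by calling the provider directly. This standalone sketch repeats the function above so it runs on its own:

```python
import re

def redacting_indexed_content(text: str, role: str) -> str:
    # Same logic as above: mask card-like 16-digit numbers
    # before the text reaches the cleartext search index.
    del role
    return re.sub(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b", "[REDACTED]", text)

indexed = redacting_indexed_content(
    "My card is 1234-5678-9012-3456, please charge it.", "user"
)
print(indexed)  # -> My card is [REDACTED], please charge it.
```

The full message, card number included, is still stored encrypted; only the indexed copy is masked.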
Expose the Search API
Expose POST /v1/conversations/search so frontend apps can query across conversations:
```python
@app.post("/v1/conversations/search")
async def search_conversations(request: Request):
    payload = await request.json()
    # Forward the user's bearer token so Memory Service only returns
    # conversations the user can read. The proxy method name below is a
    # sketch; check checkpoint 07 for the exact call.
    response = await proxy.search_conversations(
        payload,
        authorization=request.headers.get("Authorization"),
    )
    return response
```

Why: Frontend apps need a single endpoint to run full-text and semantic search across all conversations the user has access to. searchType can be auto, a concrete type (semantic or fulltext), or an array of concrete types such as ["semantic","fulltext"]. Proxying through the agent app means the user’s bearer token is forwarded to Memory Service so only conversations the user can read are returned in results.
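Because searchType accepts either auto, a single concrete type, or a list of concrete types, it can be worth validating the request body before proxying it. The helper below is an illustrative sketch; its name and validation rules are assumptions, not part of the Memory Service API:

```python
from typing import List, Union

# Concrete search types accepted by /v1/conversations/search.
ALLOWED_CONCRETE = {"semantic", "fulltext"}

def build_search_body(query: str,
                      search_type: Union[str, List[str]] = "auto") -> dict:
    """Build a search payload, rejecting unknown searchType values."""
    if isinstance(search_type, str):
        if search_type != "auto" and search_type not in ALLOWED_CONCRETE:
            raise ValueError(f"unknown searchType: {search_type}")
    else:
        # An array may only contain concrete types, never "auto".
        invalid = set(search_type) - ALLOWED_CONCRETE
        if invalid:
            raise ValueError(f"unknown searchType(s): {sorted(invalid)}")
    return {"query": query, "searchType": search_type}

print(build_search_body("random number", ["semantic", "fulltext"]))
# -> {'query': 'random number', 'searchType': ['semantic', 'fulltext']}
```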
Define a shell helper that fetches a bearer token for the bob user:
```shell
function get-token() {
  curl -sSfX POST http://localhost:8081/realms/memory-service/protocol/openid-connect/token \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "client_id=memory-service-client" \
    -d "client_secret=change-me" \
    -d "grant_type=password" \
    -d "username=bob" \
    -d "password=bob" \
    | jq -r '.access_token'
}
```

First, create searchable conversation content:
```shell
curl -NsSfX POST http://localhost:9090/chat/b344ba48-6958-41c9-a8e3-c641ea633dab \
  -H "Authorization: Bearer $(get-token)" \
  -H "Content-Type: text/plain" \
  -d "Give me a random number between 1 and 100."
```

Example output:

```
Sure, I can help with that.
```

Search conversations:
```shell
curl -sSfX POST http://localhost:9090/v1/conversations/search \
  -H "Authorization: Bearer $(get-token)" \
  -H "Content-Type: application/json" \
  -d '{"query": "random number", "searchType": "auto"}' | jq
```

Example output:
```json
{
  "data": [
    {
      "conversationId": "b344ba48-6958-41c9-a8e3-c641ea633dab"
    }
  ],
  "afterCursor": null
}
```

Completed Checkpoint
Completed code: View the full implementation at python/examples/langgraph/doc-checkpoints/07-with-search
Next Steps
Continue to:
- Conversation Forking - Branch conversations to explore alternative paths
- Response Recording and Resumption - Streaming responses with resume and cancel support
- Episodic Memories - Persistent per-user memories using the LangGraph BaseStore