LangGraph Conversation Forking

This guide covers conversation forking, letting users branch from any point in a conversation to explore alternative paths.

New to forking concepts? Read Forking first to understand how conversation forking works. This guide focuses on the Python LangGraph implementation.

Prerequisites

Starting checkpoint: This guide starts from python/examples/langgraph/doc-checkpoints/03-with-history

Make sure you have completed the previous guides.

Also complete Step 2 in LangGraph Dev Setup (building the local memory-service-langchain wheel and setting UV_FIND_LINKS); this is temporary until the package is released.

Conversation Forking

How Forking Works

The chat endpoint keeps the same shape (text/plain input). A fork is created by appending the first entry of a new conversation with fork metadata that points back to the branch point in the source conversation.

app.py
proxy = MemoryServiceProxy.from_env()


@app.post("/chat/{conversation_id}")
async def chat(conversation_id: str, request: Request) -> PlainTextResponse:
    user_message = (await request.body()).decode("utf-8").strip()
    if not user_message:
        raise HTTPException(400, "message is required")

    forked_at_conversation_id = request.query_params.get("forkedAtConversationId")
    forked_at_entry_id = request.query_params.get("forkedAtEntryId")

    with memory_service_scope(
        conversation_id,
        forked_at_conversation_id,
        forked_at_entry_id,
    ):
        result = await graph.ainvoke(
            {"messages": [{"role": "user", "content": user_message}]},
            config={"configurable": {"thread_id": conversation_id}},
        )

    message = result["messages"][-1]
    content = getattr(message, "content", "")
    return PlainTextResponse(content if isinstance(content, str) else str(content))

What changed: The chat endpoint now reads optional forkedAtConversationId and forkedAtEntryId query parameters, and passes them as additional arguments to memory_service_scope(conversation_id, forked_at_conversation_id, forked_at_entry_id).

Why: When three arguments are passed to memory_service_scope, the first history entry written in that scope is tagged with fork metadata (forkedAtConversationId, forkedAtEntryId). Memory Service uses that metadata to link the new conversation back to the branch point in the source conversation.
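From the client's point of view, forking is therefore just the first chat request for a new conversation id with the two query parameters attached. A minimal sketch using only the standard library; `fork_chat_url` is a hypothetical helper (not part of this project), and the base URL and ids are the placeholders used throughout this guide:

```python
from urllib.parse import urlencode


def fork_chat_url(
    base_url: str,
    new_conversation_id: str,
    source_conversation_id: str,
    fork_entry_id: str,
) -> str:
    """Build the chat URL whose first message creates the fork.

    The query parameter names match what the endpoint above reads.
    """
    query = urlencode({
        "forkedAtConversationId": source_conversation_id,
        "forkedAtEntryId": fork_entry_id,
    })
    return f"{base_url}/chat/{new_conversation_id}?{query}"


url = fork_chat_url(
    "http://localhost:9090",
    "bdb74451-1164-47f5-823e-b71cff4b7855",
    "43b16f9d-a995-40e2-8538-c010ea2276ac",
    "014764af-b5d0-49da-932f-76e985f5ecbc",
)
print(url)
```

POSTing a text/plain body to that URL (with a bearer token, as in the curl examples below) creates the forked conversation on its first turn.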

Listing Forks

Expose a forks endpoint so frontend clients can discover branches from a conversation:

app.py
@app.get("/v1/conversations/{conversation_id}/forks")
async def list_conversation_forks(conversation_id: str, request: Request):
    response = await proxy.list_conversation_forks(
        conversation_id,
        after_cursor=request.query_params.get("afterCursor"),
        limit=int(limit) if (limit := request.query_params.get("limit")) is not None else None,
    )
    return to_fastapi_response(response)

Why: Frontend apps need to know what branches exist off a given conversation so they can display a fork tree or let users navigate to a branched conversation. The endpoint forwards optional afterCursor and limit parameters to support paginated listing of forks.

Try It With Curl

Define a helper to get a bearer token for bob:

function get-token() {
  curl -sSfX POST http://localhost:8081/realms/memory-service/protocol/openid-connect/token \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "client_id=memory-service-client" \
    -d "client_secret=change-me" \
    -d "grant_type=password" \
    -d "username=bob" \
    -d "password=bob" \
    | jq -r '.access_token'
}

Create a turn on the source conversation:

curl -NsSfX POST http://localhost:9090/chat/43b16f9d-a995-40e2-8538-c010ea2276ac \
  -H "Content-Type: text/plain" \
  -H "Authorization: Bearer $(get-token)" \
  -d "Hello from the root conversation."

Example output:

I am a Python memory-service demo agent.

Fetch the entry id to fork from:

curl -sSfX GET http://localhost:9090/v1/conversations/43b16f9d-a995-40e2-8538-c010ea2276ac/entries \
  -H "Authorization: Bearer $(get-token)" | jq

Example output:

{
  "data": [
    {
      "id": "014764af-b5d0-49da-932f-76e985f5ecbc",
      "conversationId": "43b16f9d-a995-40e2-8538-c010ea2276ac",
      "userId": "bob",
      "clientId": "checkpoint-agent",
      "channel": "history",
      "contentType": "history",
      "createdAt": "2026-03-06T14:58:32.669566Z",
      "content": [
        {
          "role": "USER",
          "text": "Hello from the root conversation."
        }
      ]
    },
    {
      "id": "3eb5e5cd-3ce0-4e50-80b8-7bfe754ce0e2",
      "conversationId": "43b16f9d-a995-40e2-8538-c010ea2276ac",
      "userId": "bob",
      "clientId": "checkpoint-agent",
      "channel": "history",
      "contentType": "history",
      "createdAt": "2026-03-06T14:58:32.712486Z",
      "content": [
        {
          "role": "AI",
          "text": "I am a Python memory-service demo agent."
        }
      ]
    }
  ],
  "afterCursor": null
}

Or extract the entry id directly into a shell variable:

FORK_ENTRY_ID="$(curl -sSfX GET http://localhost:9090/v1/conversations/43b16f9d-a995-40e2-8538-c010ea2276ac/entries \
  -H "Authorization: Bearer $(get-token)" | jq -r '.data[0].id')"
echo "$FORK_ENTRY_ID"
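If jq is not available, the same extraction can be done in Python. A sketch over the example response above, abbreviated to the ids; `entries_response` stands in for the parsed HTTP response body:

```python
import json

# The entries listing from the curl above, abbreviated to the ids.
entries_response = json.loads("""
{
  "data": [
    {"id": "014764af-b5d0-49da-932f-76e985f5ecbc"},
    {"id": "3eb5e5cd-3ce0-4e50-80b8-7bfe754ce0e2"}
  ],
  "afterCursor": null
}
""")

# Same selection as jq's '.data[0].id'.
fork_entry_id = entries_response["data"][0]["id"]
print(fork_entry_id)
```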

Create the forked conversation by calling chat with fork metadata:

curl -NsSfX POST "http://localhost:9090/chat/bdb74451-1164-47f5-823e-b71cff4b7855?forkedAtConversationId=43b16f9d-a995-40e2-8538-c010ea2276ac&forkedAtEntryId=${FORK_ENTRY_ID}" \
  -H "Content-Type: text/plain" \
  -H "Authorization: Bearer $(get-token)" \
  -d "Continue from this fork."

The chat endpoint responds with plain text, just as for the root conversation; the fork metadata only changes how the first history entry of the new conversation is recorded.

List forks for the source conversation through the Python proxy endpoint:

curl -sSfX GET http://localhost:9090/v1/conversations/43b16f9d-a995-40e2-8538-c010ea2276ac/forks \
  -H "Authorization: Bearer $(get-token)" | jq

Example output:

{
  "data": [
    {
      "conversationId": "43b16f9d-a995-40e2-8538-c010ea2276ac"
    },
    {
      "conversationId": "bdb74451-1164-47f5-823e-b71cff4b7855"
    }
  ],
  "afterCursor": null
}
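Note that, per the example output above, the listing includes the source conversation's own id alongside the branch. A client rendering a fork picker might filter it out; a sketch over that example response:

```python
import json

# The forks listing from the curl above.
forks_response = json.loads("""
{
  "data": [
    {"conversationId": "43b16f9d-a995-40e2-8538-c010ea2276ac"},
    {"conversationId": "bdb74451-1164-47f5-823e-b71cff4b7855"}
  ],
  "afterCursor": null
}
""")

source_id = "43b16f9d-a995-40e2-8538-c010ea2276ac"
branches = [
    fork["conversationId"]
    for fork in forks_response["data"]
    if fork["conversationId"] != source_id
]
print(branches)
```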

Next Steps