Runner provides the interface to run Agents and is responsible for session management and event-stream processing. Its core responsibilities are to obtain or create the session, generate an Invocation ID, call Agent.Run, process the returned event stream, and append non-partial response events to the session.
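A minimal sketch of that flow (a compact version of the full examples below; the `openai.New` model constructor and the placeholder IDs are assumptions of this sketch):

```go
// Build an Agent and hand it to Runner; Runner owns session handling and events.
llm := openai.New("gpt-4o-mini") // Assumes the model/openai constructor.
assistant := llmagent.New("assistant",
    llmagent.WithModel(llm),
    llmagent.WithInstruction("You are a helpful AI assistant."))
r := runner.NewRunner("my-app", assistant)
defer r.Close()

// One conversation turn: Runner resolves the session, creates the Invocation,
// calls Agent.Run, and streams the resulting events back.
eventChan, err := r.Run(ctx, "user-001", "session-001",
    model.NewUserMessage("Hello"))
if err != nil {
    return err
}
for event := range eventChan {
    if event.Error != nil {
        continue // Error events are covered in the error-handling section.
    }
    if event.IsRunnerCompletion() { // Terminal runner-completion event.
        break
    }
}
```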
🎯 Key Features
- 💾 Session Management: Obtain/create sessions via sessionService, using inmemory.NewSessionService() by default.
- 🔄 Event Handling: Receive Agent event streams and append non-partial response events to the session.
- 🆔 ID Generation: Automatically generate Invocation IDs and event IDs.
- 📊 Observability Integration: Integrates telemetry/trace to automatically record spans.
- ✅ Completion Event: Generates a runner-completion event after the Agent event stream ends.
```bash
# Enter the example directory.
cd examples/runner

# Set API key.
export OPENAI_API_KEY="your-api-key"

# Basic run.
go run main.go

# Use Redis session.
docker run -d -p 6379:6379 redis:alpine
go run main.go -session redis

# Custom model.
go run main.go -model "gpt-4o-mini"
```
💬 Interactive Features
After running the example, the following special commands are supported:
- `/history` - Ask AI to show conversation history.
- `/new` - Start a new session (reset conversation context).
- `/exit` - End the conversation.
When the AI uses tools, the detailed invocation process is displayed.
```go
// Execute a single conversation.
eventChan, err := r.Run(ctx, userID, sessionID, message, options...)
```
Resume Interrupted Runs (tools-first resume)
In long-running conversations, users may interrupt the agent while it is still
in a tool-calling phase (for example, the last message in the session is an
assistant message with tool_calls, but no tool result has been written yet).
When you later reuse the same sessionID, you can ask the Runner to resume
from that point instead of asking the model to repeat the tool calls:
```go
eventChan, err := r.Run(ctx, userID, sessionID,
    model.Message{},        // No new user message.
    agent.WithResume(true), // Enable resume mode.
)
```
When WithResume(true) is set:
- Runner inspects the latest persisted session event.
- If the last event is an assistant response that contains tool_calls and there is no later tool result, Runner will execute those pending tools first (using the same tool set and callbacks as a normal step) and persist the tool results into the session.
- After tools finish, the normal LLM cycle continues using the updated session history, so the model sees both the original tool calls and their results.
- If the last event is a user or tool message (or a plain assistant reply without tool_calls), WithResume(true) is a no-op and the flow behaves like today's Run call.
Provide Conversation History (auto-seed + session reuse)
If your upstream service maintains the conversation and you want the agent to
see that context, you can pass a full history ([]model.Message) directly. The
runner will seed an empty session with that history automatically and then
merge in new session events.
Option A: Use the convenience helper runner.RunWithMessages
```go
msgs := []model.Message{
    model.NewSystemMessage("You are a helpful assistant."),
    model.NewUserMessage("First user input"),
    model.NewAssistantMessage("Previous assistant reply"),
    model.NewUserMessage("What's the next step?"),
}
ch, err := runner.RunWithMessages(ctx, r, userID, sessionID, msgs,
    agent.WithRequestID("request-ID"))
```
Example: examples/runwithmessages (uses RunWithMessages; runner auto-seeds and
continues reusing the session)
Option B: Pass via RunOption explicitly (same philosophy as ADK Python)
```go
msgs := []model.Message{ /* as above */ }
ch, err := r.Run(ctx, userID, sessionID,
    model.Message{},
    agent.WithMessages(msgs))
```
When []model.Message is provided, the runner persists that history into the
session on first use (if empty). The content processor does not read this
option; it only derives messages from session events (or falls back to the
single invocation.Message if the session has no events). RunWithMessages
still sets invocation.Message to the latest user turn so graph/flow agents
that inspect it continue to work.
✅ Detecting End-of-Run and Reading Final Output (Graph-friendly)
When driving a GraphAgent workflow, the LLM's "final response" is not the end of the workflow: nodes such as output may still be pending. Instead of checking Response.IsFinalResponse(), always stop on the Runner's terminal completion event. For convenience, Runner now propagates the graph's final snapshot into this last event, so you can extract the final textual output via graph.StateKeyLastResponse:
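A sketch of the consumption loop, assuming the snapshot is exposed through the completion event's `StateDelta` map keyed by `graph.StateKeyLastResponse` (check the event fields in your version):

```go
for event := range eventChan {
    if event.Error != nil {
        log.Printf("event error: %s", event.Error.Message)
        continue
    }
    // Do not stop on Response.IsFinalResponse(); wait for the Runner's
    // terminal completion event.
    if !event.IsRunnerCompletion() {
        continue
    }
    // Assumption: the final graph snapshot is delivered in StateDelta,
    // keyed by graph.StateKeyLastResponse (value may be JSON-encoded).
    if raw, ok := event.StateDelta[graph.StateKeyLastResponse]; ok {
        fmt.Println("final output:", string(raw))
    }
    break
}
```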
```go
// Configuration options supported by Redis.
sessionService, err := redis.NewService(
    redis.WithRedisClientURL("redis://localhost:6379"),
    redis.WithSessionEventLimit(1000), // Limit number of session events.
    // redis.WithRedisInstance("redis-instance"), // Or use an instance name.
)
```
🤖 Agent Configuration
Runner's core responsibility is to manage the Agent execution flow. A created Agent needs to be executed via Runner.
```go
// Create a basic Agent (see agent.md for detailed configuration).
agent := llmagent.New("assistant",
    llmagent.WithModel(model),
    llmagent.WithInstruction("You are a helpful AI assistant."))

// Execute Agent with Runner.
r := runner.NewRunner("my-app", agent)
```
Generation Configuration
Runner passes generation configuration to the Agent:
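A sketch, assuming the `llmagent.WithGenerationConfig` option and these `model.GenerationConfig` field names (verify against your version):

```go
// Generation parameters are configured on the Agent; Runner forwards them as-is.
temperature := 0.7
maxTokens := 1024
genCfg := model.GenerationConfig{
    Temperature: &temperature, // Sampling temperature (assumed pointer field).
    MaxTokens:   &maxTokens,   // Upper bound on generated tokens (assumed pointer field).
    Stream:      true,         // Stream partial responses as events.
}
agent := llmagent.New("assistant",
    llmagent.WithModel(llm), // llm: a model instance, e.g. from model/openai.
    llmagent.WithGenerationConfig(genCfg))
r := runner.NewRunner("my-app", agent)
```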
```go
// Create tools (see tool.md for detailed configuration).
tools := []tool.Tool{
    function.NewFunctionTool(myFunction, function.WithName("my_tool")),
    // More tools...
}

// Add tools to the Agent.
agent := llmagent.New("assistant",
    llmagent.WithModel(model),
    llmagent.WithTools(tools))

// Runner runs the Agent configured with tools.
r := runner.NewRunner("my-app", agent)
```
Tool invocation flow: Runner itself does not directly handle tool invocation. The flow is as follows (see the sketch after this list):

1. Pass tools: Runner passes context to the Agent via the Invocation.
2. Agent processing: Agent.Run handles the tool invocation logic.
3. Event forwarding: Runner receives the event stream returned by the Agent and forwards it.
4. Session recording: non-partial response events are appended to the session.
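A sketch of watching tool activity in the forwarded event stream; it assumes events embed the OpenAI-style `model.Response` (so `Choices`, `Message.ToolCalls`, and `Function.Name`/`Arguments` are available). Adjust to your event schema if it differs:

```go
for event := range eventChan {
    if event.Error != nil {
        continue
    }
    // Assumption: the event embeds model.Response, so Choices is accessible directly.
    for _, choice := range event.Choices {
        // Assistant messages may carry the tool calls requested by the model.
        for _, call := range choice.Message.ToolCalls {
            fmt.Printf("tool call: %s(%s)\n",
                call.Function.Name, string(call.Function.Arguments))
        }
    }
    if event.IsRunnerCompletion() {
        break
    }
}
```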
Multi-Agent Support
Runner can execute complex multi-Agent structures (see multiagent.md for details):
import"trpc.group/trpc-go/trpc-agent-go/agent/chainagent"// Create a multi-Agent pipeline.multiAgent:=chainagent.New("pipeline",chainagent.WithSubAgents([]agent.Agent{agent1,agent2}))// Execute with the same Runner.r:=runner.NewRunner("multi-app",multiAgent)
```go
// The Invocation created by Runner contains the following fields.
invocation := agent.NewInvocation(
    agent.WithInvocationAgent(r.agent),                               // Agent instance.
    agent.WithInvocationSession(&session.Session{ID: "session-001"}), // Session object.
    agent.WithInvocationEndInvocation(false),                         // End flag.
    agent.WithInvocationMessage(model.NewUserMessage("User input")),  // User message.
    agent.WithInvocationRunOptions(ro),                               // Run options.
)

// Note: Invocation also includes other fields such as AgentName, Branch, Model,
// TransferInfo, AgentCallbacks, ModelCallbacks, ToolCallbacks, etc., but these
// fields are used and managed internally by the Agent.
```
```go
// Handle errors from Runner.Run.
eventChan, err := r.Run(ctx, userID, sessionID, message,
    agent.WithRequestID("request-ID"))
if err != nil {
    log.Printf("Runner execution failed: %v", err)
    return err
}

// Handle errors in the event stream.
for event := range eventChan {
    if event.Error != nil {
        log.Printf("Event error: %s", event.Error.Message)
        continue
    }
    // Handle normal events.
}
```
Resource Management
🔒 Closing Runner (Important)
You MUST call Close() when the Runner is no longer needed to prevent goroutine leaks (trpc-agent-go >= v0.5.0).
Runner Only Closes Resources It Created
When a Runner is created without providing a Session Service, it automatically creates a default inmemory Session Service. This service starts background goroutines internally (for asynchronous summary processing, TTL-based session cleanup, etc.). Runner only manages the lifecycle of this self-created inmemory Session Service. If you provide your own Session Service via WithSessionService(), you are responsible for managing its lifecycle; Runner won't close it.
If you don't call Close() on a Runner that owns an inmemory Session Service, the background goroutines will run forever, causing resource leaks.
```go
// ✅ Recommended: Use defer to ensure cleanup.
r := runner.NewRunner("my-app", agent)
defer r.Close() // Ensure cleanup on function exit (trpc-agent-go >= v0.5.0).

// Use the runner.
eventChan, err := r.Run(ctx, userID, sessionID, message)
if err != nil {
    return err
}
for event := range eventChan {
    // Process events.
    if event.IsRunnerCompletion() {
        break
    }
}
```
```go
// You create and manage the session service lifecycle.
sessionService, err := redis.NewService(
    redis.WithRedisClientURL("redis://localhost:6379"))
if err != nil {
    return err
}
defer sessionService.Close() // YOU are responsible for closing it.

// Runner uses but doesn't own this session service.
r := runner.NewRunner("my-app", agent,
    runner.WithSessionService(sessionService))
defer r.Close() // This will NOT close sessionService (you provided it) (trpc-agent-go >= v0.5.0).

// ... use the runner.
```
```go
type Service struct {
    runner         runner.Runner
    sessionService session.Service // If you manage it yourself.
}

func NewService() *Service {
    r := runner.NewRunner("my-app", agent)
    return &Service{runner: r}
}

func (s *Service) Start() error {
    // Service startup logic.
    return nil
}

// Call Close when shutting down the service.
func (s *Service) Stop() error {
    // Close runner (which closes its owned inmemory session service).
    // trpc-agent-go >= v0.5.0.
    if err := s.runner.Close(); err != nil {
        return err
    }
    // If you provided your own session service, close it here.
    if s.sessionService != nil {
        return s.sessionService.Close()
    }
    return nil
}
```
Important Notes:
- ✅ Close() is idempotent; calling it multiple times is safe.
- ✅ Runner only closes the inmemory Session Service it creates by default.
- ✅ If you provide your own Session Service via WithSessionService(), Runner won't close it (you manage it yourself).
- ❌ Not calling Close() when Runner owns an inmemory Session Service will cause goroutine leaks.
```go
// Use context to control the lifecycle of a single run.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// Ensure all events are consumed.
eventChan, err := r.Run(ctx, userID, sessionID, message)
if err != nil {
    return err
}
for event := range eventChan {
    // Process events.
    if event.Done {
        break
    }
}
```
import("context""fmt""trpc.group/trpc-go/trpc-agent-go/model""trpc.group/trpc-go/trpc-agent-go/runner")// Check whether Runner works properly.funccheckRunner(rrunner.Runner,ctxcontext.Context)error{testMessage:=model.NewUserMessage("test")eventChan,err:=r.Run(ctx,"test-user","test-session",testMessage)iferr!=nil{returnfmt.Errorf("Runner.Run failed: %v",err)}// Check the event stream.forevent:=rangeeventChan{ifevent.Error!=nil{returnfmt.Errorf("Received error event: %s",event.Error.Message)}ifevent.Done{break}}returnnil}
📝 Summary
The Runner component is a core part of the tRPC-Agent-Go framework, providing complete conversation management and Agent orchestration capabilities. By properly using session management, tool integration, and event handling, you can build powerful intelligent conversational applications.