Agent is the core execution unit of the tRPC-Agent-Go framework, responsible for processing user input and generating corresponding responses. Each Agent implements a unified interface, supporting streaming output and callback mechanisms.
The framework provides multiple types of Agents, including LLMAgent, ChainAgent, ParallelAgent, CycleAgent, and GraphAgent. This document focuses on LLMAgent. For detailed information about other Agent types and multi-Agent systems, please refer to Multi-Agent.
Quick Start
Recommended Usage: Runner
We strongly recommend executing Agents through Runner rather than calling the Agent interface directly. Runner provides a friendlier interface and integrates services such as Session and Memory, which greatly simplifies usage.
📖 Learn More: For detailed usage methods, please refer to Runner
This example uses OpenAI's GPT-4o-mini model. Before starting, please ensure you have prepared the corresponding OPENAI_API_KEY and exported it through environment variables:
```go
import "trpc.group/trpc-go/trpc-agent-go/model/openai"

modelName := flag.String("model", "gpt-4o-mini", "Name of the model to use")
flag.Parse()

// Create OpenAI model instance.
modelInstance := openai.New(*modelName, openai.Options{})
```
Configuring Generation Parameters
Set the model's generation parameters, including maximum tokens, temperature, and whether to use streaming output:
```go
import "trpc.group/trpc-go/trpc-agent-go/model"

maxTokens := 1000
temperature := 0.7
genConfig := model.GenerationConfig{
	MaxTokens:   &maxTokens,   // Maximum number of tokens to generate.
	Temperature: &temperature, // Temperature parameter, controls output randomness.
	Stream:      true,         // Enable streaming output.
}
```
Creating LLMAgent
Use the model instance and configuration to create an LLMAgent, while setting the Agent's Description and Instruction.
Description is used to describe the basic functionality and characteristics of the Agent, while Instruction defines the specific instructions and behavioral guidelines that the Agent should follow when executing tasks.
```go
import "trpc.group/trpc-go/trpc-agent-go/agent/llmagent"

llmAgent := llmagent.New(
	"demo-agent", // Agent name.
	llmagent.WithModel(modelInstance),                                                  // Set model.
	llmagent.WithDescription("A helpful AI assistant for demonstrations"),              // Set description.
	llmagent.WithInstruction("Be helpful, concise, and informative in your responses"), // Set instruction.
	llmagent.WithGenerationConfig(genConfig),                                           // Set generation parameters.
	// Set the filter mode for messages passed to the model. The final messages
	// passed to the model must satisfy both WithMessageTimelineFilterMode and
	// WithMessageBranchFilterMode conditions.
	//
	// Timeline dimension filter conditions.
	// Default: llmagent.TimelineFilterAll
	// Optional values:
	// - llmagent.TimelineFilterAll: includes historical messages as well as messages generated in the current request.
	// - llmagent.TimelineFilterCurrentRequest: only includes messages generated in the current request.
	// - llmagent.TimelineFilterCurrentInvocation: only includes messages generated in the current invocation context.
	llmagent.WithMessageTimelineFilterMode(llmagent.TimelineFilterAll),
	// Branch dimension filter conditions.
	// Default: llmagent.BranchFilterModePrefix
	// Optional values:
	// - llmagent.BranchFilterModeAll: includes messages from all agents. Use this when the current agent needs to send the model all valid content messages generated by all agents.
	// - llmagent.BranchFilterModePrefix: filters messages by prefix-matching Event.FilterKey against Invocation.eventFilterKey. Use this to pass messages generated by the current agent and related upstream/downstream agents to the model.
	// - llmagent.BranchFilterModeExact: filters messages where Event.FilterKey == Invocation.eventFilterKey. Use this when the model should only see messages generated by the current agent.
	llmagent.WithMessageBranchFilterMode(llmagent.BranchFilterModePrefix),
)
```
Placeholder Variables (Session State Injection)
LLMAgent automatically injects session state into Instruction and the optional SystemPrompt via placeholder variables. Supported patterns:
{key}: Replace with the string value of session.State["key"]
{key?}: Optional; if missing, replaced with an empty string
{user:subkey} / {app:subkey} / {temp:subkey}: Use user/app/temp scoped keys (session services merge app/user state into session with these prefixes)
Notes:
If a non-optional key is not found, the original {key} is preserved (helps the LLM notice missing context)
Values are read from invocation.Session.State (Runner + SessionService set/merge this automatically)
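To make the substitution rules above concrete, here is a self-contained sketch of the resolution behavior. The `resolvePlaceholders` function is a hypothetical stand-in for illustration, not the framework's actual instruction processor:

```go
package main

import (
	"fmt"
	"regexp"
)

// resolvePlaceholders illustrates the substitution rules described above:
// {key} is replaced when present and preserved verbatim when missing;
// {key?} is optional and becomes an empty string when missing.
// Illustrative stand-in only, not the framework's implementation.
func resolvePlaceholders(tmpl string, state map[string][]byte) string {
	re := regexp.MustCompile(`\{([^{}]+?)(\?)?\}`)
	return re.ReplaceAllStringFunc(tmpl, func(m string) string {
		groups := re.FindStringSubmatch(m)
		key, optional := groups[1], groups[2] == "?"
		if v, ok := state[key]; ok {
			return string(v)
		}
		if optional {
			return "" // Optional key missing: empty string.
		}
		return m // Non-optional key missing: keep {key} so the LLM notices.
	})
}

func main() {
	state := map[string][]byte{"research_topics": []byte("AI, ML")}
	fmt.Println(resolvePlaceholders(
		"Focus: {research_topics}. Interests: {user:topics?}. Missing: {ctx}.",
		state,
	))
}
```

Running this prints `Focus: AI, ML. Interests: . Missing: {ctx}.`: the present key is substituted, the missing optional key collapses to an empty string, and the missing required key is left intact.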
```go
llm := llmagent.New(
	"research-agent",
	llmagent.WithModel(modelInstance),
	llmagent.WithInstruction(
		"You are a research assistant. Focus: {research_topics}. "+
			"User interests: {user:topics?}. App banner: {app:banner?}.",
	),
)

// Initialize session state (Runner + SessionService).
_ = sessionService.UpdateUserState(ctx, session.UserKey{AppName: app, UserID: user}, session.StateMap{
	"topics": []byte("quantum computing, cryptography"),
})
_ = sessionService.UpdateAppState(ctx, app, session.StateMap{
	"banner": []byte("Research Mode"),
})
// Unprefixed keys live directly in session.State.
_, _ = sessionService.CreateSession(ctx, session.Key{AppName: app, UserID: user, SessionID: sid}, session.StateMap{
	"research_topics": []byte("AI, ML, DL"),
})
```
```go
import "trpc.group/trpc-go/trpc-agent-go/runner"

// Create Runner.
runner := runner.NewRunner("demo-app", llmAgent)

// Send the message directly without creating a complex Invocation.
message := model.NewUserMessage("Hello! Can you tell me about yourself?")
eventChan, err := runner.Run(ctx, "user-001", "session-001", message)
if err != nil {
	log.Fatalf("Failed to execute Agent: %v", err)
}
```
Delegation Visibility Options
When building multi-Agent systems (task delegation between Agents), LLMAgent provides a unified fallback option for delegation events. Transfer events always include announcement text and are tagged `transfer`, so UIs can filter them if desired.
llmagent.WithDefaultTransferMessage(string)
Configure the default message used when a model calls a SubAgent without a message.
Pass an empty string to disable injecting a default message; pass a non‑empty string to enable and override it.
```go
coordinator := llmagent.New(
	"coordinator",
	llmagent.WithModel(modelInstance),
	llmagent.WithSubAgents([]agent.Agent{mathAgent, weatherAgent}),
	// Transfer announcement events are always emitted (tagged `transfer`). Filter in the UI if needed.
	// Customize the default message used when the model omits it (empty string disables).
	llmagent.WithDefaultTransferMessage("Handing off to the specialist"),
)
```
Notes:
These options do not change the actual handoff logic; they only affect user‑visible texts or whether a fallback message is injected.
Transfer announcements are emitted as Events with Response.Object == "agent.transfer". If your UI should not display system‑level notices, filter this object type at the renderer/service layer.
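A renderer-layer filter for these notices might look like the following sketch. The `Event` and `Response` types here are simplified stand-ins for the framework's types, kept minimal so the predicate logic is clear:

```go
package main

import "fmt"

// Simplified stand-ins for the framework's event types (illustration only).
type Response struct{ Object string }
type Event struct {
	Response *Response
	Content  string
}

// isTransferNotice reports whether an event is a system-level transfer
// announcement (Response.Object == "agent.transfer") that a UI may hide.
func isTransferNotice(e *Event) bool {
	return e.Response != nil && e.Response.Object == "agent.transfer"
}

func main() {
	events := []*Event{
		{Response: &Response{Object: "chat.completion.chunk"}, Content: "Hello"},
		{Response: &Response{Object: "agent.transfer"}, Content: "Handing off"},
	}
	for _, e := range events {
		if isTransferNotice(e) {
			continue // Skip transfer announcements at the renderer layer.
		}
		fmt.Println(e.Content)
	}
}
```

Keeping the filter at the renderer/service layer (rather than suppressing the events themselves) preserves the full event stream for logging and debugging while hiding system-level notices from end users.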
Handling Event Stream
The eventChan returned by runner.Run() is an event channel. The Agent continuously sends Event objects to this channel during execution.
Each Event contains execution state information at a specific moment: LLM-generated content, tool call requests and results, error messages, etc. By iterating through the event channel, you can get real-time execution progress (see Event section below for details).
Receive execution results through the event channel:
```go
// 1. Get the event channel (returns immediately, starts async execution).
eventChan, err := runner.Run(ctx, userID, sessionID, message)
if err != nil {
	log.Fatalf("Failed to start: %v", err)
}

// 2. Handle the event stream (receive execution results in real time).
for event := range eventChan {
	// Check for errors.
	if event.Error != nil {
		log.Printf("Execution error: %s", event.Error.Message)
		continue
	}

	// Handle response content.
	if len(event.Response.Choices) > 0 {
		choice := event.Response.Choices[0]

		// Streaming content (real-time display).
		if choice.Delta.Content != "" {
			fmt.Print(choice.Delta.Content)
		}

		// Tool call information.
		for _, toolCall := range choice.Message.ToolCalls {
			fmt.Printf("Calling tool: %s\n", toolCall.Function.Name)
		}
	}

	// Check if completed (note: should not break on tool call completion).
	if event.IsFinalResponse() {
		fmt.Println()
		break
	}
}
```
The complete code for this example can be found at examples/runner
Why is Runner recommended?
Simpler Interface: No need to create complex Invocation objects
Integrated Services: Automatically integrates Session, Memory and other services
Better Management: Unified management of Agent execution flow
Production Ready: Suitable for production environment use
💡 Tip: Want to learn more about Runner's detailed usage and advanced features? Please check Runner
Advanced Usage: Direct Agent Usage
If you need more fine-grained control, you can also use the Agent interface directly, but this requires creating Invocation objects:
Core Concepts
Invocation (Advanced Usage)
Invocation is the context object for the Agent execution flow, carrying all information needed for a single call. Note: this is advanced usage; we recommend using Runner to simplify operations.
```go
import "trpc.group/trpc-go/trpc-agent-go/agent"

// Create an Invocation object (advanced usage).
invocation := agent.NewInvocation(
	agent.WithAgentName("demo-agent"), // Agent name.
	agent.WithInvocationMessage(model.NewUserMessage("Hello! Can you tell me about yourself?")), // User message.
	agent.WithInvocationSession(&session.Session{ID: "session-001"}), // Session object.
	agent.WithInvocationEndInvocation(false),                         // Whether to end the invocation.
	agent.WithInvocationModel(modelInstance),                         // Model to use.
)

// Call the Agent directly (advanced usage).
ctx := context.Background()
eventChan, err := llmAgent.Run(ctx, invocation)
if err != nil {
	log.Fatalf("Failed to execute Agent: %v", err)
}
```
```go
// Invocation is the context object for the Agent execution flow, containing
// all information needed for a single call.
type Invocation struct {
	// Agent specifies the Agent instance to call.
	Agent Agent
	// AgentName identifies the name of the Agent instance to call.
	AgentName string
	// InvocationID provides a unique identifier for each call.
	InvocationID string
	// Branch is a branch identifier for hierarchical event filtering.
	Branch string
	// EndInvocation indicates whether to end the invocation.
	EndInvocation bool
	// Session maintains the context state of the conversation.
	Session *session.Session
	// Model specifies the model instance to use.
	Model model.Model
	// Message is the specific content sent by the user to the Agent.
	Message model.Message
	// RunOptions are option configurations for the Run call.
	RunOptions RunOptions
	// TransferInfo supports control transfer between Agents.
	TransferInfo *TransferInfo
	// Structured output configuration (optional).
	StructuredOutput     *model.StructuredOutput
	StructuredOutputType reflect.Type
	// Services injected for this invocation.
	MemoryService   memory.Service
	ArtifactService artifact.Service
	// Internal signaling: notify when events are appended.
	noticeChanMap map[string]chan any
	noticeMu      *sync.Mutex
	// Internal: event filter key and parent linkage for nested flows.
	eventFilterKey string
	parent         *Invocation
	// Invocation-scoped state (lazy-init, thread-safe via stateMu).
	state   map[string]any
	stateMu sync.RWMutex
}
```
Invocation State
Invocation provides a general-purpose state storage mechanism for sharing data within the lifecycle of a single invocation. This is useful for callbacks, middleware, or any scenario that requires storing temporary data at the invocation level.
```go
// Set a state value.
inv.SetState(key string, value any)
// Get a state value.
value, ok := inv.GetState(key string)
// Delete a state value.
inv.DeleteState(key string)
```
Features:
Invocation-scoped: State is automatically scoped to a single invocation
Thread-safe: Built-in RWMutex protection for concurrent access
Lazy initialization: Memory allocated only on first use
General-purpose: Can be used for callbacks, middleware, custom logic, and more
Usage Example:
Version Requirement
The structured callback API (recommended) requires trpc-agent-go >= 0.6.0.
```go
// Store data in BeforeAgentCallback.
// Note: The structured callback API requires trpc-agent-go >= 0.6.0.
callbacks := agent.NewCallbacks()
callbacks.RegisterBeforeAgent(func(ctx context.Context, args *agent.BeforeAgentArgs) (*agent.BeforeAgentResult, error) {
	args.Invocation.SetState("agent:start_time", time.Now())
	args.Invocation.SetState("custom:request_id", "req-123")
	return nil, nil
})

// Read data in AfterAgentCallback.
callbacks.RegisterAfterAgent(func(ctx context.Context, args *agent.AfterAgentArgs) (*agent.AfterAgentResult, error) {
	if startTime, ok := args.Invocation.GetState("agent:start_time"); ok {
		duration := time.Since(startTime.(time.Time))
		log.Printf("Execution took: %v", duration)
		args.Invocation.DeleteState("agent:start_time")
	}
	return nil, nil
})
```
Recommended Key Naming Convention:
Agent callbacks: "agent:xxx"
Model callbacks: "model:xxx"
Tool callbacks: "tool:toolName:xxx"
Middleware: "middleware:xxx"
Custom logic: "custom:xxx"
For detailed usage and more examples, please refer to Callbacks.
Event
Event is the real-time feedback generated during Agent execution, reporting execution progress in real-time through Event streams.
```go
// Event is the real-time feedback generated during Agent execution,
// reporting execution progress through Event streams.
type Event struct {
	// Response contains model response content, tool call results, and statistics.
	*model.Response
	// InvocationID associates the event with a specific invocation.
	InvocationID string `json:"invocationId"`
	// Author is the source of the event, such as an Agent or tool.
	Author string `json:"author"`
	// ID is the unique identifier of the event.
	ID string `json:"id"`
	// Timestamp records the time when the event occurred.
	Timestamp time.Time `json:"timestamp"`
	// Branch is a branch identifier for hierarchical event filtering.
	Branch string `json:"branch,omitempty"`
	// RequiresCompletion indicates whether this event requires a completion signal.
	RequiresCompletion bool `json:"requiresCompletion,omitempty"`
	// LongRunningToolIDs is a set of IDs for long-running function calls.
	// Agent clients can use this field to identify which function calls are
	// long-running. Only valid for function call events.
	LongRunningToolIDs map[string]struct{} `json:"longRunningToolIDs,omitempty"`
}
```
The streaming nature of Events lets you watch the Agent's working process in real time, much like a natural conversation. To fully handle the Agent's execution results, simply iterate through the Event stream and check the content and status of each Event.
Agent Interface
The Agent interface defines the core behaviors that all Agents must implement. This interface allows you to uniformly use different types of Agents while supporting tool calls and sub-Agent management.
```go
type Agent interface {
	// Run receives the execution context and invocation information and returns
	// an event channel through which you can receive the Agent's execution
	// progress and results in real time.
	Run(ctx context.Context, invocation *Invocation) (<-chan *event.Event, error)

	// Tools returns the list of tools that this Agent can access and execute.
	Tools() []tool.Tool

	// Info provides basic information about the Agent, including name and
	// description, for easy identification and management.
	Info() Info

	// SubAgents returns the list of sub-Agents available to this Agent.
	// SubAgents and FindSubAgent support collaboration between Agents: an Agent
	// can delegate tasks to other Agents, building complex multi-Agent systems.
	SubAgents() []Agent

	// FindSubAgent finds a sub-Agent by name.
	FindSubAgent(name string) Agent
}
```
The framework provides multiple types of Agent implementations, including LLMAgent, ChainAgent, ParallelAgent, CycleAgent, and GraphAgent. For detailed information about different types of Agents and multi-Agent systems, please refer to Multi-Agent.
Callbacks
Callbacks provide a rich callback mechanism that allows you to inject custom logic at key points during Agent execution.
Version Requirement
The structured callback API (recommended) requires trpc-agent-go >= 0.6.0.
Callback Types
The framework provides three types of callbacks:
Agent Callbacks: Triggered before and after Agent execution
```go
// Create Agent callbacks (using the structured API).
// Note: The structured callback API requires trpc-agent-go >= 0.6.0.
callbacks := agent.NewCallbacks()
callbacks.RegisterBeforeAgent(func(ctx context.Context, args *agent.BeforeAgentArgs) (*agent.BeforeAgentResult, error) {
	log.Printf("Agent %s started execution", args.Invocation.AgentName)
	return nil, nil
})
callbacks.RegisterAfterAgent(func(ctx context.Context, args *agent.AfterAgentArgs) (*agent.AfterAgentResult, error) {
	if args.Error != nil {
		log.Printf("Agent %s execution error: %v", args.Invocation.AgentName, args.Error)
	} else {
		log.Printf("Agent %s execution completed", args.Invocation.AgentName)
	}
	return nil, nil
})

// Use the callbacks in an LLMAgent.
llmAgent := llmagent.New("llmagent", llmagent.WithAgentCallbacks(callbacks))
```
The callback mechanism allows you to precisely control the Agent's execution process and implement more complex business logic.
Advanced Usage
The framework provides advanced features like Runner, Session, and Memory for building more complex Agent systems.
Runner is the recommended usage, responsible for managing Agent execution flow, connecting Session/Memory Service capabilities, and providing a more user-friendly interface.
Session Service is used to manage session state, supporting conversation history and context maintenance.
Memory Service is used to record user preference information, supporting personalized experiences.
```go
import (
	"context"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/model/openai"
	"trpc.group/trpc-go/trpc-agent-go/runner"
)

// 1) Build the model and agent once at startup.
mdl := openai.New("gpt-4o-mini", openai.Options{})
llm := llmagent.New(
	"support-bot",
	llmagent.WithModel(mdl),
	llmagent.WithInstruction("Be helpful and concise."),
)
run := runner.NewRunner("my-app", llm)

// 2) Later, change behavior at runtime (e.g., user updates prompt in UI).
llm.SetInstruction("Translate all user inputs to French.")
llm.SetGlobalInstruction("System: Safety first. No PII leakage.")

// 3) Subsequent runs use the new instructions.
msg := model.NewUserMessage("Where is the nearest museum?")
ch, err := run.Run(context.Background(), "u1", "s1", msg)
_ = ch
_ = err
```
Notes
Thread‑safe: the setters are concurrency‑safe and can be called while the service is handling requests.
Mid‑turn behavior: if an Agent’s current turn triggers more than one model request (e.g., due to tool calls), updates may apply to subsequent requests in the same turn. If you need per‑run stability, set or freeze the text at the start of the run.
Per‑session personalization: for per‑user or per‑session data, prefer placeholders in the instruction and session state injection (see the “Placeholder Variables” section above).
Alternative: Placeholder‑Driven Dynamic System Prompts
If you’d rather not call setters, you can make the instruction itself a template and feed values via session state. The instruction processor replaces placeholders using session/app/user state on each turn.
Patterns
Persistent per‑user value: store under user:* and reference {user:key}.
Persistent per‑app value: store under app:* and reference {app:key}.
Per‑turn ephemeral value: write into the session’s temp:* namespace and reference {temp:key} (not persisted).
```go
import (
	"context"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/runner"
	"trpc.group/trpc-go/trpc-agent-go/session"
	"trpc.group/trpc-go/trpc-agent-go/session/inmemory"
)

svc := inmemory.NewSessionService()
app, user, sid := "my-app", "u1", "s1"

// 1) The instruction template references a user-scoped key.
llm := llmagent.New(
	"dyn-agent",
	llmagent.WithInstruction("{user:system_prompt}"),
)
run := runner.NewRunner(app, llm, runner.WithSessionService(svc))

// 2) Update the user-scoped state when the user changes settings.
_ = svc.UpdateUserState(context.Background(), session.UserKey{AppName: app, UserID: user}, session.StateMap{
	"system_prompt": []byte("You are a helpful assistant. Always answer in English."),
})

// 3) Runs now read the latest prompt via placeholder injection.
_, _ = run.Run(context.Background(), user, sid, model.NewUserMessage("Hi!"))
```
Example: per‑turn temp value via a before‑agent callback
Version Requirement
The structured callback API (recommended) requires trpc-agent-go >= 0.6.0.
```go
// Note: The structured callback API requires trpc-agent-go >= 0.6.0.
callbacks := agent.NewCallbacks()
callbacks.RegisterBeforeAgent(func(ctx context.Context, args *agent.BeforeAgentArgs) (*agent.BeforeAgentResult, error) {
	if args.Invocation != nil && args.Invocation.Session != nil {
		if args.Invocation.Session.State == nil {
			args.Invocation.Session.State = make(map[string][]byte)
		}
		// Write a one-off instruction for this turn only.
		args.Invocation.Session.State["temp:sys"] = []byte("Translate to French.")
	}
	return nil, nil
})

llm := llmagent.New(
	"temp-agent",
	llmagent.WithInstruction("{temp:sys}"),
	llmagent.WithAgentCallbacks(callbacks), // Requires trpc-agent-go >= 0.6.0.
)
```
Caveats
In-memory UpdateUserState intentionally forbids temp:* updates; write temp:* directly to invocation.Session.State (e.g., via a callback) when you need ephemeral, per‑turn values.
Placeholders are resolved at request time; changing the stored value updates behavior on the next model request without recreating the agent.