Version Requirement
The structured callback API (recommended) requires trpc-agent-go >= 0.6.0.
This page describes the callback system used across the project to intercept,
observe, and customize model inference, tool invocation, and agent execution.
The callback system comes in three categories:
ModelCallbacks
ToolCallbacks
AgentCallbacks
Each category provides a Before and an After callback. A Before callback can
short-circuit the default execution by returning a non-nil custom response.
ModelCallbacks
Structured Model Callbacks (Recommended)
BeforeModelCallbackStructured: Runs before a model inference with structured arguments.
AfterModelCallbackStructured: Runs after the model finishes with structured arguments.
```go
type BeforeModelArgs struct {
    Request *model.Request // The request about to be sent (can be modified)
}

type BeforeModelResult struct {
    Context        context.Context // Optional context for subsequent operations
    CustomResponse *model.Response // If non-nil, skips model call and returns this response
}

type AfterModelArgs struct {
    Request  *model.Request  // The original request sent to the model
    Response *model.Response // The response from the model (may be nil)
    Error    error           // Any error that occurred during model call
}

type AfterModelResult struct {
    Context        context.Context // Optional context for subsequent operations
    CustomResponse *model.Response // If non-nil, replaces the original response
}
```
```go
// Continue executing remaining callbacks even if an error occurs.
modelCallbacks := model.NewCallbacks(
    model.WithContinueOnError(true),
)

// Continue executing remaining callbacks even if a CustomResponse is returned.
modelCallbacks := model.NewCallbacks(
    model.WithContinueOnResponse(true),
)

// Enable both options: continue on both error and CustomResponse.
modelCallbacks := model.NewCallbacks(
    model.WithContinueOnError(true),
    model.WithContinueOnResponse(true),
)
```
Execution Modes:
Default (both false): Stop on first error or CustomResponse
Continue on Error: Continue executing remaining callbacks even if one returns an error
Continue on Response: Continue executing remaining callbacks even if one returns a CustomResponse
Continue on Both: Continue executing all callbacks regardless of errors or CustomResponse
Priority Rules:
If both an error and a CustomResponse occur, the error takes priority and will be returned (unless continueOnError is true)
When continueOnError is true and an error occurs, execution continues but the first error is preserved and returned at the end
When continueOnResponse is true and a CustomResponse is returned, execution continues but the last CustomResponse is used (see the sketch below)
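For example, the continue-on-response rule plays out as in the following minimal sketch. It assumes the NewCallbacks options and the chained registration shown in this section can be combined in one expression; the canned reply texts are illustrative only, and imports are omitted as in the other snippets on this page.

```go
// Both Before callbacks run because of WithContinueOnResponse(true); the
// CustomResponse from the second (last) callback is the one that is used.
modelCallbacks := model.NewCallbacks(
    model.WithContinueOnResponse(true),
).
    RegisterBeforeModel(func(ctx context.Context, args *model.BeforeModelArgs) (*model.BeforeModelResult, error) {
        return &model.BeforeModelResult{
            CustomResponse: &model.Response{
                Choices: []model.Choice{{Message: model.Message{Role: model.RoleAssistant, Content: "first canned reply"}}},
            },
        }, nil
    }).
    RegisterBeforeModel(func(ctx context.Context, args *model.BeforeModelArgs) (*model.BeforeModelResult, error) {
        return &model.BeforeModelResult{
            CustomResponse: &model.Response{
                Choices: []model.Choice{{Message: model.Message{Role: model.RoleAssistant, Content: "second canned reply -- this one is used"}}},
            },
        }, nil
    })
```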
```go
modelCallbacks := model.NewCallbacks().
    // Before: respond to a special prompt to skip the real model call.
    RegisterBeforeModel(func(ctx context.Context, args *model.BeforeModelArgs) (*model.BeforeModelResult, error) {
        if len(args.Request.Messages) > 0 &&
            strings.Contains(args.Request.Messages[len(args.Request.Messages)-1].Content, "/ping") {
            return &model.BeforeModelResult{
                CustomResponse: &model.Response{
                    Choices: []model.Choice{{Message: model.Message{Role: model.RoleAssistant, Content: "pong"}}},
                },
            }, nil
        }
        return nil, nil
    }).
    // After: annotate successful responses, keep errors untouched.
    RegisterAfterModel(func(ctx context.Context, args *model.AfterModelArgs) (*model.AfterModelResult, error) {
        if args.Error != nil {
            return nil, args.Error
        }
        if args.Response != nil && len(args.Response.Choices) > 0 {
            args.Response.Choices[0].Message.Content += "\n\n-- answered by callback"
            return &model.AfterModelResult{CustomResponse: args.Response}, nil
        }
        return nil, nil
    })
```
Usage: after creating the callbacks, pass them to the LLM Agent at construction time via the llmagent.WithModelCallbacks() option:
```go
// Create model callbacks.
modelCallbacks := model.NewCallbacks().
    RegisterBeforeModel(...).
    RegisterAfterModel(...)

// Create the LLM Agent and pass the model callbacks.
llmAgent := llmagent.New(
    "chat-assistant",
    llmagent.WithModel(modelInstance),
    llmagent.WithModelCallbacks(modelCallbacks), // Pass model callbacks.
)
```
ToolCallbacks

```go
type BeforeToolArgs struct {
    ToolName    string            // The name of the tool
    Declaration *tool.Declaration // Tool declaration metadata
    Arguments   []byte            // JSON arguments (can be modified)
}

type BeforeToolResult struct {
    Context           context.Context // Optional context for subsequent operations
    CustomResult      any             // If non-nil, skips tool execution and returns this result
    ModifiedArguments []byte          // Optional modified arguments for tool execution
}

type AfterToolArgs struct {
    ToolName    string            // The name of the tool
    Declaration *tool.Declaration // Tool declaration metadata
    Arguments   []byte            // Original JSON arguments
    Result      any               // Result from tool execution (may be nil)
    Error       error             // Any error that occurred during tool execution
}

type AfterToolResult struct {
    Context      context.Context // Optional context for subsequent operations
    CustomResult any             // If non-nil, replaces the original result
}
```
```go
// Continue executing remaining callbacks even if an error occurs.
toolCallbacks := tool.NewCallbacks(
    tool.WithContinueOnError(true),
)

// Continue executing remaining callbacks even if a CustomResult is returned.
toolCallbacks := tool.NewCallbacks(
    tool.WithContinueOnResponse(true),
)

// Enable both options: continue on both error and CustomResult.
toolCallbacks := tool.NewCallbacks(
    tool.WithContinueOnError(true),
    tool.WithContinueOnResponse(true),
)
```
Execution Modes:
Default (both false): Stop on first error or CustomResult
Continue on Error: Continue executing remaining callbacks even if one returns an error
Continue on Response: Continue executing remaining callbacks even if one returns a CustomResult
Continue on Both: Continue executing all callbacks regardless of errors or CustomResult
Priority Rules:
If both an error and a CustomResult occur, the error takes priority and will be returned (unless continueOnError is true)
When continueOnError is true and an error occurs, execution continues but the first error is preserved and returned at the end
When continueOnResponse is true and a CustomResult is returned, execution continues but the last CustomResult is used
```go
toolCallbacks := tool.NewCallbacks().
    RegisterBeforeTool(func(ctx context.Context, args *tool.BeforeToolArgs) (*tool.BeforeToolResult, error) {
        if args.Arguments != nil && args.ToolName == "calculator" {
            // Enrich arguments.
            original := string(args.Arguments)
            enriched := []byte(fmt.Sprintf(`{"original":%s,"ts":%d}`, original, time.Now().Unix()))
            args.Arguments = enriched
        }
        return nil, nil
    }).
    RegisterAfterTool(func(ctx context.Context, args *tool.AfterToolArgs) (*tool.AfterToolResult, error) {
        if args.Error != nil {
            return nil, args.Error
        }
        if s, ok := args.Result.(string); ok {
            return &tool.AfterToolResult{
                CustomResult: s + "\n-- post processed by tool callback",
            }, nil
        }
        return nil, nil
    })
```
Usage: after creating the callbacks, pass them to the LLM Agent at construction time via the llmagent.WithToolCallbacks() option:
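The wiring mirrors the model-callback usage above. A minimal sketch; the agent name and modelInstance are placeholders:

```go
// Create tool callbacks.
toolCallbacks := tool.NewCallbacks().
    RegisterBeforeTool(...).
    RegisterAfterTool(...)

// Create the LLM Agent and pass the tool callbacks.
llmAgent := llmagent.New(
    "chat-assistant",
    llmagent.WithModel(modelInstance),
    llmagent.WithToolCallbacks(toolCallbacks), // Pass tool callbacks.
)
```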
AgentCallbacks

```go
type BeforeAgentArgs struct {
    Invocation *agent.Invocation // The invocation context
}

type BeforeAgentResult struct {
    Context        context.Context // Optional context for subsequent operations
    CustomResponse *model.Response // If non-nil, skips agent execution and returns this response
}

type AfterAgentArgs struct {
    Invocation        *agent.Invocation // The invocation context
    FullResponseEvent *event.Event      // The final response event from agent execution (may be nil)
    Error             error             // Any error that occurred during agent execution (may be nil)
}

type AfterAgentResult struct {
    Context        context.Context // Optional context for subsequent operations
    CustomResponse *model.Response // If non-nil, replaces the original response
}
```
Structured parameters provide better type safety and clearer intent.
BeforeAgentResult.Context and AfterAgentResult.Context can pass context between operations.
Access to full invocation context allows for rich per-invocation logic.
Before can short-circuit with a custom model.Response.
After can return a replacement response.
AfterAgentArgs.FullResponseEvent provides access to the final response event from agent execution, useful for logging, monitoring, post-processing, etc.
Callback Execution Control
By default, callback execution stops immediately when:
A callback returns an error
A callback returns a non-nil CustomResponse
You can control this behavior using options when creating callbacks:
```go
// Continue executing remaining callbacks even if an error occurs.
agentCallbacks := agent.NewCallbacks(
    agent.WithContinueOnError(true),
)

// Continue executing remaining callbacks even if a CustomResponse is returned.
agentCallbacks := agent.NewCallbacks(
    agent.WithContinueOnResponse(true),
)

// Enable both options: continue on both error and CustomResponse.
agentCallbacks := agent.NewCallbacks(
    agent.WithContinueOnError(true),
    agent.WithContinueOnResponse(true),
)
```
Execution Modes:
Default (both false): Stop on first error or CustomResponse
Continue on Error: Continue executing remaining callbacks even if one returns an error
Continue on Response: Continue executing remaining callbacks even if one returns a CustomResponse
Continue on Both: Continue executing all callbacks regardless of errors or CustomResponse
Priority Rules:
If both an error and a CustomResponse occur, the error takes priority and will be returned (unless continueOnError is true)
When continueOnError is true and an error occurs, execution continues but the first error is preserved and returned at the end (see the sketch after this list)
When continueOnResponse is true and a CustomResponse is returned, execution continues but the last CustomResponse is used
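As a concrete illustration of the continue-on-error rule, here is a minimal sketch. It assumes the NewCallbacks options and the chained registration used elsewhere on this page can be combined; the error text is illustrative, and imports (including errors) are omitted as in the other snippets.

```go
// With WithContinueOnError(true), the second callback still runs after the
// first one fails, but the first error is preserved and returned at the end.
agentCallbacks := agent.NewCallbacks(
    agent.WithContinueOnError(true),
).
    RegisterBeforeAgent(func(ctx context.Context, args *agent.BeforeAgentArgs) (*agent.BeforeAgentResult, error) {
        return nil, errors.New("first callback failed") // This error is what the caller ultimately sees.
    }).
    RegisterBeforeAgent(func(ctx context.Context, args *agent.BeforeAgentArgs) (*agent.BeforeAgentResult, error) {
        // Still executes, e.g. to record metrics or emit a log line.
        return nil, nil
    })
```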
```go
agentCallbacks := agent.NewCallbacks().
    // Before: if the user message contains /abort, return a fixed response and skip the rest.
    RegisterBeforeAgent(func(ctx context.Context, args *agent.BeforeAgentArgs) (*agent.BeforeAgentResult, error) {
        if args.Invocation != nil && strings.Contains(args.Invocation.GetUserMessageContent(), "/abort") {
            return &agent.BeforeAgentResult{
                CustomResponse: &model.Response{
                    Choices: []model.Choice{{Message: model.Message{Role: model.RoleAssistant, Content: "aborted by callback"}}},
                },
            }, nil
        }
        return nil, nil
    }).
    // After: append a footer to successful responses; FullResponseEvent exposes the final response event.
    RegisterAfterAgent(func(ctx context.Context, args *agent.AfterAgentArgs) (*agent.AfterAgentResult, error) {
        if args.Error != nil {
            return nil, args.Error
        }
        // Access the final response event from agent execution via FullResponseEvent.
        if args.FullResponseEvent != nil && args.FullResponseEvent.Response != nil {
            if len(args.FullResponseEvent.Response.Choices) > 0 {
                c := args.FullResponseEvent.Response.Choices[0]
                c.Message.Content = c.Message.Content + "\n\n-- handled by agent callback"
                args.FullResponseEvent.Response.Choices[0] = c
                return &agent.AfterAgentResult{CustomResponse: args.FullResponseEvent.Response}, nil
            }
        }
        return nil, nil
    })
```
Usage: after creating the callbacks, pass them to the LLM Agent at construction time via the llmagent.WithAgentCallbacks() option:
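Again, a minimal sketch mirroring the model-callback usage above; the agent name and modelInstance are placeholders:

```go
// Create agent callbacks.
agentCallbacks := agent.NewCallbacks().
    RegisterBeforeAgent(...).
    RegisterAfterAgent(...)

// Create the LLM Agent and pass the agent callbacks.
llmAgent := llmagent.New(
    "chat-assistant",
    llmagent.WithModel(modelInstance),
    llmagent.WithAgentCallbacks(agentCallbacks), // Pass agent callbacks.
)
```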
This pattern is showcased in the examples, where the Before/After callbacks print whether an invocation is present.
Invocation State: Sharing Data Between Callbacks
Invocation provides a general-purpose State mechanism for storing invocation-scoped data. It can be used not only for sharing data between Before and After callbacks, but also for middleware, custom logic, and any invocation-level state management.
```go
// Set a state value.
func (inv *Invocation) SetState(key string, value any)

// Get a state value; returns the value and an existence flag.
func (inv *Invocation) GetState(key string) (any, bool)

// Delete a state value.
func (inv *Invocation) DeleteState(key string)
```
Features
Invocation-scoped: State is automatically scoped to a single invocation
Thread-safe: Built-in RWMutex protection for concurrent access
Lazy initialization: Memory allocated only on first use
```go
toolCallbacks := tool.NewCallbacks().
    // BeforeToolCallback: record the tool start time.
    RegisterBeforeTool(func(ctx context.Context, args *tool.BeforeToolArgs) (*tool.BeforeToolResult, error) {
        if inv, ok := agent.InvocationFromContext(ctx); ok && inv != nil {
            // Get the tool call ID for concurrent call support.
            toolCallID, ok := tool.ToolCallIDFromContext(ctx)
            if !ok || toolCallID == "" {
                toolCallID = "default" // Fallback for compatibility.
            }
            // Use the tool call ID to build a unique key.
            key := fmt.Sprintf("tool:%s:%s:start_time", args.ToolName, toolCallID)
            inv.SetState(key, time.Now())
        }
        return nil, nil
    }).
    // AfterToolCallback: calculate the tool execution duration.
    RegisterAfterTool(func(ctx context.Context, args *tool.AfterToolArgs) (*tool.AfterToolResult, error) {
        if inv, ok := agent.InvocationFromContext(ctx); ok && inv != nil {
            // Get the tool call ID for concurrent call support.
            toolCallID, ok := tool.ToolCallIDFromContext(ctx)
            if !ok || toolCallID == "" {
                toolCallID = "default" // Fallback for compatibility.
            }
            key := fmt.Sprintf("tool:%s:%s:start_time", args.ToolName, toolCallID)
            if startTimeVal, ok := inv.GetState(key); ok {
                startTime := startTimeVal.(time.Time)
                duration := time.Since(startTime)
                fmt.Printf("Tool %s (call %s) took: %v\n", args.ToolName, toolCallID, duration)
                inv.DeleteState(key) // Clean up state.
            }
        }
        return nil, nil
    })
```
Key Points:
Get tool call ID: Use tool.ToolCallIDFromContext(ctx) to retrieve the unique ID for each tool call from context
Key format: "tool:<toolName>:<toolCallID>:<key>" ensures state isolation for concurrent calls
Fallback handling: If tool call ID is not available (older versions or special scenarios), use "default" as fallback
Consistency: Before and After callbacks must use the same logic to retrieve tool call ID
This ensures that when the LLM calls calculator multiple times concurrently (e.g., calculator(1,2) and calculator(3,4)), each call has its own independent timing data.