Custom Agent
If you don’t want to start with Graph or multi-agent orchestration and prefer to embed an LLM into your existing service logic, implement the `agent.Agent` interface directly and control the flow yourself.
This example shows a small “intent branching” agent:
- First classify the intent with the LLM: `chitchat` or `task`
- If `chitchat`: reply conversationally
- If `task`: output a short, actionable plan (in real apps, you can route to tools or downstream services)
When to choose a custom Agent
- Logic is simple but you need precise control (validation, fallbacks, branching)
- You don’t need visual orchestration or complex teams yet (you can later evolve to Chain/Parallel/Graph)
What to implement
You must implement:
- `Run(ctx, *Invocation) (<-chan *event.Event, error)`: execute your flow and emit events (forward model streaming as events)
- `Tools() []tool.Tool`: return the available tools (empty if none)
- `Info() Info`: basic agent info
- `SubAgents()` / `FindSubAgent()`: return empty/nil if not used
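A minimal skeleton that satisfies the interface might look like the following. The import paths (here `trpc.group/trpc-go/trpc-agent-go/...`) and the exact `Info` fields are assumptions; check the full example under `examples/customagent` for the authoritative version.

```go
package main

import (
	"context"

	"trpc.group/trpc-go/trpc-agent-go/agent"
	"trpc.group/trpc-go/trpc-agent-go/event"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/tool"
)

// intentAgent is a custom agent that classifies intent and branches.
type intentAgent struct {
	name  string
	model model.Model
}

// Info returns basic agent metadata (field names assumed).
func (a *intentAgent) Info() agent.Info {
	return agent.Info{Name: a.name, Description: "intent-branching custom agent"}
}

// Tools returns no tools in the basic version.
func (a *intentAgent) Tools() []tool.Tool { return nil }

// SubAgents and FindSubAgent are unused for a standalone agent.
func (a *intentAgent) SubAgents() []agent.Agent             { return nil }
func (a *intentAgent) FindSubAgent(name string) agent.Agent { return nil }

// Run executes the flow and emits events; see the key snippet below.
func (a *intentAgent) Run(ctx context.Context, inv *agent.Invocation) (<-chan *event.Event, error) {
	ch := make(chan *event.Event)
	close(ch)
	return ch, nil
}
```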
Core pattern:
1) Use `invocation.Message` as the user input
2) Share framework capabilities via the `invocation` (Session, Callbacks, Artifact, etc.)
3) Call `model.Model.GenerateContent(ctx, *model.Request)` for streaming responses; forward them via `event.NewResponseEvent(...)`
Code example
Full example: `examples/customagent`
Key snippet (simplified):
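The sketch below continues the skeleton above (imports such as `strings` and the framework packages are omitted for brevity). The full example classifies the intent with an LLM call; a trivial placeholder heuristic stands in for it here, and the constructor/field names (`model.NewUserMessage`, `Request.Messages`, the arguments to `event.NewResponseEvent`) follow the documented pattern but should be verified against `examples/customagent`.

```go
// Run classifies the intent, branches, and streams the reply as events.
func (a *intentAgent) Run(ctx context.Context, inv *agent.Invocation) (<-chan *event.Event, error) {
	out := make(chan *event.Event, 16)
	go func() {
		defer close(out)

		// 1) invocation.Message carries the user input.
		userInput := inv.Message.Content

		// 2) Classify the intent. The real example asks the LLM for a
		//    one-word label; a placeholder heuristic is used here.
		intent := "chitchat"
		if strings.Contains(strings.ToLower(userInput), "plan") {
			intent = "task"
		}

		// 3) Branch, then stream the model reply and forward each chunk
		//    to the caller as a response event.
		prompt := "Reply conversationally to: " + userInput
		if intent == "task" {
			prompt = "Output a short, actionable plan for: " + userInput
		}
		rspCh, err := a.model.GenerateContent(ctx, &model.Request{
			Messages: []model.Message{model.NewUserMessage(prompt)},
		})
		if err != nil {
			return
		}
		for rsp := range rspCh {
			out <- event.NewResponseEvent(inv.InvocationID, a.name, rsp)
		}
	}()
	return out, nil
}
```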
Runner integration
While you can call the agent's Run method directly, we recommend running agents via the Runner, which manages the session and appends events for you.
Example:
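A minimal sketch of running the custom agent through the Runner, under the same import-path assumptions as above; the `NewRunner`/`Run` argument lists follow the framework's usual pattern but may differ slightly, and `intentAgent` is the type sketched earlier.

```go
package main

import (
	"context"
	"fmt"

	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/runner"
)

func main() {
	ctx := context.Background()

	// Wrap the custom agent in a Runner: it manages the session and
	// appends the emitted events for you.
	ag := &intentAgent{name: "intent-agent"} // model omitted for brevity
	r := runner.NewRunner("customagent-app", ag)

	// One conversational turn, identified by user and session IDs.
	events, err := r.Run(ctx, "user-1", "session-1",
		model.NewUserMessage("help me plan a product launch"))
	if err != nil {
		panic(err)
	}
	for ev := range events {
		fmt.Printf("event: %+v\n", ev)
	}
}
```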
Run the example (interactive)
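To try it locally (assuming the example's entry point lives in that directory and it reads the model API key from the environment):

```bash
cd examples/customagent
export OPENAI_API_KEY="your-key"   # or whichever provider/key the example expects
go run .
```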
Extensions
- Add tools: return `[]tool.Tool` (e.g., `function.NewFunctionTool(...)`) to call DB/HTTP/internal services; see the sketch after this list
- Add validation: enforce checks and guards before branching
- Evolve gradually: when the if-else grows or you need collaboration, move to `ChainAgent`/`ParallelAgent` or `Graph`
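As a sketch of the first extension, a function tool wrapping a hypothetical internal lookup could be returned from your agent's `Tools()` method. The `NewFunctionTool` generic signature and option names (`WithName`, `WithDescription`) are assumptions; check the `tool/function` package for the real API.

```go
package main

import (
	"context"

	"trpc.group/trpc-go/trpc-agent-go/tool"
	"trpc.group/trpc-go/trpc-agent-go/tool/function"
)

// lookupOrderInput/Output are hypothetical request/response types.
type lookupOrderInput struct {
	OrderID string `json:"order_id"`
}
type lookupOrderOutput struct {
	Status string `json:"status"`
}

// lookupOrder wraps a hypothetical DB/HTTP/internal-service call.
func lookupOrder(ctx context.Context, in lookupOrderInput) (lookupOrderOutput, error) {
	// Call your downstream service here.
	return lookupOrderOutput{Status: "shipped"}, nil
}

// agentTools is what your agent's Tools() method could return.
func agentTools() []tool.Tool {
	return []tool.Tool{
		function.NewFunctionTool(lookupOrder,
			function.WithName("lookup_order"),
			function.WithDescription("Look up an order's shipping status by ID"),
		),
	}
}
```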