The Memory module is the memory management system in the tRPC-Agent-Go
framework, providing Agents with persistent memory and context management
capabilities. By integrating memory services, session management, and memory
tools, the Memory system helps Agents remember user information, maintain
dialog context, and provide personalized response experiences across multiple
conversations.
Positioning
Memory manages long-term user information with isolation dimension
<appName, userID>. It can be understood as a "personal profile" gradually
accumulated around a single user.
In cross-session scenarios, Memory enables the system to retain key user
information, avoiding repetitive information gathering in each session.
It is suitable for recording stable, reusable facts such as "user name is
John", "occupation is backend engineer", "prefers concise answers", "commonly
used language is English", and directly using this information in subsequent
interactions.
Two Memory Modes
Memory supports two modes for creating and managing memories. Choose based on your scenario:
Auto Mode is available when an Extractor is configured and is recommended as the default choice.
| Aspect | Agentic Mode (Tools) | Auto Mode (Extractor) |
|--------|----------------------|-----------------------|
| How it works | Agent decides when to call memory tools | System extracts memories automatically from conversations |
| User experience | Visible - user sees tool calls | Transparent - memories created silently in background |
| Control | Agent has full control over what to remember | Extractor decides based on conversation analysis |
| Available tools | All 6 tools | Search tool (search), optional load tool (load) |
| Processing | Synchronous - during response generation | Asynchronous - background workers after response |
| Best for | Precise control, user-driven memory management | Natural conversations, hands-off memory building |
Selection Guide:
Agentic Mode: the Agent decides when to call memory tools based on conversation content (e.g., when the user mentions personal information or preferences). Tool calls are visible to the user. Suitable for scenarios requiring precise control over memory content.
Auto Mode (recommended): natural conversation flow; the system passively learns about the user, with a simplified user experience.
Core Values
Context Continuity: Maintain user history across sessions, avoiding
repetitive questioning and input.
Personalized Service: Provide customized responses and suggestions based
on long-term user profiles and preferences.
Knowledge Accumulation: Transform facts and experiences from
conversations into reusable knowledge.
Persistent Storage: Support multiple storage backends to ensure data
safety and reliability.
Use Cases
The Memory module is suitable for scenarios requiring cross-session user
information and context retention:
Use Case 1: Personalized Customer Service Agent
Requirement: Customer service Agent needs to remember user information,
historical issues, and preferences for consistent service.
Implementation:
First conversation: Agent uses memory_add to record name, company, contact
Record user preferences like "prefers concise answers", "technical
background"
Subsequent sessions: Agent uses memory_load to load user info, no repeated
questions needed
After resolving issues: Use memory_update to update issue status
Use Case 2: Learning Companion Agent
Requirement: Educational Agent needs to track student learning progress,
knowledge mastery, and interests.
Implementation:
Use memory_add to record mastered knowledge points
Use topic tags for categorization: ["math", "geometry"],
["programming", "Python"]
Use memory_search to query related knowledge, avoid repeated teaching
Adjust teaching strategies based on memories, provide personalized learning
paths
Use Case 3: Project Management Agent
Requirement: Project management Agent needs to track project information,
team members, and task progress.
Implementation:
Record key project info: memory_add("Project X uses Go language",
["project", "tech-stack"])
Record team member roles: memory_add("John Doe is backend lead",
["team", "role"])
Use memory_search to quickly find relevant information
After project completion: Use memory_clear to clear temporary information
Before running the examples, configure the model API:

```bash
# OpenAI API configuration
export OPENAI_API_KEY="your-openai-api-key"
export OPENAI_BASE_URL="your-openai-base-url"
```
Agentic Mode Configuration (Optional)
In Agentic mode, the Agent automatically decides when to call memory tools
based on conversation content to manage memories. Configuration involves three steps:
```go
package main

import (
	"context"
	"log"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	memoryinmemory "trpc.group/trpc-go/trpc-agent-go/memory/inmemory"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/model/openai"
	"trpc.group/trpc-go/trpc-agent-go/runner"
	"trpc.group/trpc-go/trpc-agent-go/session/inmemory"
)

func main() {
	ctx := context.Background()

	// Step 1: Create memory service.
	memoryService := memoryinmemory.NewMemoryService()

	// Step 2: Create Agent and register memory tools.
	modelInstance := openai.New("deepseek-chat")
	llmAgent := llmagent.New("memory-assistant",
		llmagent.WithModel(modelInstance),
		llmagent.WithDescription("An assistant with memory capabilities."),
		llmagent.WithInstruction(
			"Remember important user info and recall it when needed.",
		),
		llmagent.WithTools(memoryService.Tools()), // Register memory tools.
	)

	// Step 3: Create Runner with memory service.
	sessionService := inmemory.NewSessionService()
	appRunner := runner.NewRunner("memory-chat", llmAgent,
		runner.WithSessionService(sessionService),
		runner.WithMemoryService(memoryService), // Set memory service.
	)
	defer appRunner.Close()

	// Run a dialog (the Agent uses memory tools automatically).
	log.Println("🧠 Starting memory-enabled chat...")
	message := model.NewUserMessage(
		"Hi, my name is John, and I like programming",
	)
	eventChan, err := appRunner.Run(ctx, "user123", "session456", message)
	if err != nil {
		log.Fatalf("Failed to run agent: %v", err)
	}
	// Handle responses ...
	_ = eventChan
}
```
User: My name is Alice and I work at TechCorp.
Agent: Nice to meet you, Alice! I'll remember that you work at TechCorp.
🔧 Tool call: memory_add
Args: {"memory": "User's name is Alice, works at TechCorp", "topics": ["name", "work"]}
✅ Memory added successfully.
Agent: I've saved that information. How can I help you today?
Auto Mode Configuration (Recommended)
In Auto mode, an LLM-based extractor analyzes conversations and automatically
creates memories. The only difference from Agentic mode is in Step 1: add an Extractor.
```go
package main

import (
	"context"
	"log"
	"time"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/memory/extractor"
	memoryinmemory "trpc.group/trpc-go/trpc-agent-go/memory/inmemory"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/model/openai"
	"trpc.group/trpc-go/trpc-agent-go/runner"
	"trpc.group/trpc-go/trpc-agent-go/session/inmemory"
)

func main() {
	ctx := context.Background()

	// Step 1: Create memory service (configure Extractor to enable auto mode).
	extractorModel := openai.New("deepseek-chat")
	memExtractor := extractor.NewExtractor(extractorModel)
	memoryService := memoryinmemory.NewMemoryService(
		memoryinmemory.WithExtractor(memExtractor), // Key: configure extractor.
		// Optional: configure async workers.
		memoryinmemory.WithAsyncMemoryNum(1),                // Number of async memory workers.
		memoryinmemory.WithMemoryQueueSize(10),              // Memory queue size.
		memoryinmemory.WithMemoryJobTimeout(30*time.Second), // Memory extraction job timeout.
	)
	defer memoryService.Close()

	// Step 2: Create Agent and register memory tools.
	// Note: With Extractor configured, Tools() exposes Search by default.
	// Load can be enabled explicitly.
	chatModel := openai.New("deepseek-chat")
	llmAgent := llmagent.New("memory-assistant",
		llmagent.WithModel(chatModel),
		llmagent.WithDescription("An assistant with automatic memory."),
		llmagent.WithTools(memoryService.Tools()), // Search by default; Load is optional.
	)

	// Step 3: Create Runner with memory service.
	// Runner triggers auto extraction after responses.
	sessionService := inmemory.NewSessionService()
	appRunner := runner.NewRunner("memory-chat", llmAgent,
		runner.WithSessionService(sessionService),
		runner.WithMemoryService(memoryService),
	)
	defer appRunner.Close()

	// Run a dialog (system extracts memories automatically in background).
	log.Println("🧠 Starting auto memory chat...")
	message := model.NewUserMessage(
		"Hi, my name is John, and I like programming",
	)
	eventChan, err := appRunner.Run(ctx, "user123", "session456", message)
	if err != nil {
		log.Fatalf("Failed to run agent: %v", err)
	}
	// Handle responses ...
	_ = eventChan
}
```
User: My name is Alice and I work at TechCorp.
Agent: Nice to meet you, Alice! It's great to connect with someone from TechCorp.
How can I help you today?
(Background: Extractor analyzes conversation and creates memory automatically)
Configuration Comparison
| Step | Agentic Mode | Auto Mode |
|------|--------------|-----------|
| Step 1 | `NewMemoryService()` | `NewMemoryService(WithExtractor(ext))` |
| Step 2 | `WithTools(memoryService.Tools())` | `WithTools(memoryService.Tools())` |
| Step 3 | `WithMemoryService(memoryService)` | `WithMemoryService(memoryService)` |
| Available tools | add/update/delete/clear/search/load | search/load |
| Memory creation | Agent actively calls tools | Background auto extraction |
Core Concepts
The memory module is the core of tRPC-Agent-Go's memory management. It provides complete memory storage and retrieval capabilities with a modular design that supports multiple storage backends and memory tools.
```go
import (
	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/memory"
	memoryinmemory "trpc.group/trpc-go/trpc-agent-go/memory/inmemory"
	"trpc.group/trpc-go/trpc-agent-go/runner"
)

// Step 1: Create memory service.
memoryService := memoryinmemory.NewMemoryService()

// Step 2: Create Agent and register memory tools.
llmAgent := llmagent.New("memory-assistant",
	llmagent.WithModel(modelInstance),
	llmagent.WithDescription("An assistant with memory capabilities."),
	llmagent.WithTools(memoryService.Tools()), // Explicitly register tools.
)

// Step 3: Create Runner and set memory service.
appRunner := runner.NewRunner("memory-chat", llmAgent,
	runner.WithMemoryService(memoryService), // Set service at Runner level.
)
```
Memory Service
Configure the memory service in code. Five backends are supported: in-memory,
Redis, MySQL, PostgreSQL, and pgvector.
```go
import (
	"trpc.group/trpc-go/trpc-agent-go/memory"
	memoryinmemory "trpc.group/trpc-go/trpc-agent-go/memory/inmemory"
	memorymysql "trpc.group/trpc-go/trpc-agent-go/memory/mysql"
	memorypostgres "trpc.group/trpc-go/trpc-agent-go/memory/postgres"
	memoryredis "trpc.group/trpc-go/trpc-agent-go/memory/redis"
)

// In-memory implementation for development and testing.
memService := memoryinmemory.NewMemoryService()

// Redis implementation for production.
redisService, err := memoryredis.NewService(
	memoryredis.WithRedisClientURL("redis://localhost:6379"),
	memoryredis.WithToolEnabled(memory.DeleteToolName, true), // Enable delete.
)
if err != nil {
	// Handle error.
}

// MySQL implementation for production (relational database).
// The table is automatically created on service initialization (unless
// skipped); an error is returned on failure.
mysqlService, err := memorymysql.NewService(
	memorymysql.WithMySQLClientDSN("user:password@tcp(localhost:3306)/dbname?parseTime=true"),
	memorymysql.WithToolEnabled(memory.DeleteToolName, true), // Enable delete.
)
if err != nil {
	// Handle error.
}

// PostgreSQL implementation for production (relational database).
// The table is automatically created on service initialization (unless
// skipped); an error is returned on failure.
postgresService, err := memorypostgres.NewService(
	memorypostgres.WithPostgresClientDSN("postgres://user:password@localhost:5432/dbname?sslmode=disable"),
	memorypostgres.WithSoftDelete(true),                         // Enable soft delete.
	memorypostgres.WithToolEnabled(memory.DeleteToolName, true), // Enable delete.
)
if err != nil {
	// Handle error.
}

// Register memory tools with the Agent.
llmAgent := llmagent.New("memory-assistant",
	llmagent.WithTools(memService.Tools()), // Or redisService.Tools(), mysqlService.Tools(), or postgresService.Tools().
)

// Set memory service in the Runner.
runner := runner.NewRunner("app", llmAgent,
	runner.WithMemoryService(memService), // Or redisService, mysqlService, or postgresService.
)
```
Memory Tool Configuration
The memory service provides 6 tools. Common tools are enabled by default, while dangerous operations require manual enabling.
Tool List
| Tool | Function | Agentic Mode | Auto Extraction Mode | Description |
|------|----------|--------------|----------------------|-------------|
| memory_add | Add new memory | ✅ Default | ❌ Unavailable | Create new memory entry |
| memory_update | Update memory | ✅ Default | ❌ Unavailable | Modify existing memory |
| memory_search | Search memory | ✅ Default | ✅ Default | Find by keywords |
| memory_load | Load memories | ✅ Default | ⚙️ Configurable | Load recent memories |
| memory_delete | Delete memory | ⚙️ Configurable | ❌ Unavailable | Delete single memory |
| memory_clear | Clear memories | ⚙️ Configurable | ❌ Unavailable | Delete all memories (not exposed in Auto mode) |
Notes:
Agentic Mode: Agent actively calls tools to manage memory, all tools are configurable
Auto Mode: LLM extractor handles write operations in background. Tools() exposes Search by default; Load can be enabled.
Default enabled tools: memory_search
Default disabled tools: memory_load
Not exposed tools: memory_add, memory_update, memory_delete, memory_clear
Default: Available immediately when service is created, no extra configuration needed
Configurable: Can be enabled/disabled via WithToolEnabled()
Unavailable: Tool cannot be used in this mode
Enable/Disable Tools
Note: In Auto mode, WithToolEnabled() only affects whether memory_search and
memory_load are exposed via Tools(). memory_add, memory_update,
memory_delete, and memory_clear are not exposed to the Agent.
Memory IDs are generated from memory content + sorted topics + appName + userID.
Adding the same content and topics for the same user is idempotent and overwrites
the existing entry (not append). UpdatedAt is refreshed.
If you need append semantics or different duplicate-handling strategies, you can
implement custom tools or extend the service with policy options (e.g. allow/overwrite/ignore).
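As a sketch of what such a duplicate-handling policy could look like, here is a small self-contained example. The `dupPolicy` type and `upsert` helper are hypothetical (not part of the framework); the framework's built-in behavior corresponds to `overwrite`.

```go
package main

import "fmt"

// dupPolicy enumerates hypothetical duplicate-handling strategies.
// The framework's default is overwrite (same ID refreshes the entry).
type dupPolicy int

const (
	overwrite dupPolicy = iota // Replace the existing entry.
	ignore                     // Keep the existing entry untouched.
	allow                      // Append under a disambiguated ID.
)

// upsert applies the chosen policy to a flat id -> content map.
func upsert(store map[string]string, id, content string, p dupPolicy) {
	if _, exists := store[id]; exists {
		switch p {
		case ignore:
			return // Drop the duplicate.
		case allow:
			// Disambiguate the ID so the old entry survives.
			id = fmt.Sprintf("%s#%d", id, len(store))
		}
	}
	store[id] = content
}

func main() {
	store := map[string]string{}
	upsert(store, "id1", "v1", overwrite)
	upsert(store, "id1", "v2", overwrite) // Replaces v1.
	upsert(store, "id1", "v3", ignore)    // No effect.
	upsert(store, "id1", "v4", allow)     // Stored under a new ID.
	fmt.Println(len(store), store["id1"]) // 2 v2
}
```
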
Custom Tool Implementation
Note: In Auto mode, Tools() only exposes memory_search and memory_load.
If you need to expose tools like memory_clear, use Agentic mode or call
ClearMemories() from your application code.
You can override default tools with custom implementations. See
memory/tool/tool.go for reference on how to implement custom tools.
```go
import (
	"context"
	"fmt"

	"trpc.group/trpc-go/trpc-agent-go/memory"
	memoryinmemory "trpc.group/trpc-go/trpc-agent-go/memory/inmemory"
	toolmemory "trpc.group/trpc-go/trpc-agent-go/memory/tool"
	"trpc.group/trpc-go/trpc-agent-go/tool"
	"trpc.group/trpc-go/trpc-agent-go/tool/function"
)

// A custom clear tool with real logic using the invocation context.
func customClearMemoryTool() tool.Tool {
	clearFunc := func(ctx context.Context, _ *toolmemory.ClearMemoryRequest) (*toolmemory.ClearMemoryResponse, error) {
		// Get memory service and user info from invocation context.
		memSvc, err := toolmemory.GetMemoryServiceFromContext(ctx)
		if err != nil {
			return nil, fmt.Errorf("custom clear tool: %w", err)
		}
		appName, userID, err := toolmemory.GetAppAndUserFromContext(ctx)
		if err != nil {
			return nil, fmt.Errorf("custom clear tool: %w", err)
		}
		if err := memSvc.ClearMemories(ctx, memory.UserKey{AppName: appName, UserID: userID}); err != nil {
			return nil, fmt.Errorf("custom clear tool: failed to clear memories: %w", err)
		}
		return &toolmemory.ClearMemoryResponse{
			Message: "🎉 All memories cleared successfully!",
		}, nil
	}
	return function.NewFunctionTool(clearFunc,
		function.WithName(memory.ClearToolName),
		function.WithDescription("Clear all memories for the user."),
	)
}

// Register the custom tool with an InMemory service.
memoryService := memoryinmemory.NewMemoryService(
	memoryinmemory.WithCustomTool(memory.ClearToolName, customClearMemoryTool),
)
```
Full Example
Below is a complete interactive chat example demonstrating memory capabilities in action.
```
$ go run main.go
🧠 Simple Memory Chat
Model: deepseek-chat
Memory Service: inmemory (In-memory)
Streaming: true
Available tools: memory_add, memory_update, memory_search, memory_load
(memory_delete, memory_clear disabled by default, and can be enabled or customized)
==================================================
✅ Memory chat ready! Session: memory-session-1765504626

💡 Special commands:
   /memory - Show user memories
   /new    - Start a new session
   /exit   - End the conversation

👤 You: Hi, my name is John and I like coffee.
🤖 Assistant: Hi John! Nice to meet you. I've made a note that you like coffee. It's great to know your preferences - I'll remember this for our future conversations. Is there anything specific about coffee that you enjoy, or anything else you'd like me to know about you?

🔧 Memory tool calls initiated:
   • memory_add (ID: call_00_wE9FAqaLEPtWcqgF3tQqRoLn)
     Args: {"memory": "John likes coffee.", "topics": ["preferences", "food-drink"]}
🔄 Executing memory tools...
✅ Memory tool response (ID: call_00_wE9FAqaLEPtWcqgF3tQqRoLn):
   {"message": "Memory added successfully", "memory": "John likes coffee.", "topics": ["preferences", "food-drink"]}
I see you're a coffee enthusiast! What brings you here today, John? Are you looking for coffee recommendations, or is there something else I can help you with?

👤 You: /new
🆕 Started new memory session!
   Previous: memory-session-1765504626
   Current: memory-session-1765504664
   (Conversation history has been reset, memories are preserved)

👤 You: What do I like?
🤖 Assistant: I'll search through my memories to recall what you like. Let me check what information I have stored about your preferences.

🔧 Memory tool calls initiated:
   • memory_search (ID: call_00_CCn57ylCDDQ7iaL88d2JScvl)
     Args: {"query": "likes preferences favorite enjoy"}
🔄 Executing memory tools...
✅ Memory tool response (ID: call_00_CCn57ylCDDQ7iaL88d2JScvl):
   {"query": "likes preferences favorite enjoy", "results": [{"id": "47f1de6c1318d41001a17a46ebb9f9984b6e89e5ac549aedbf34d7744e8862e0", "memory": "John likes coffee.", "topics": ["preferences", "food-drink"], "created": "2025-12-12T09:57:12.456153047+08:00"}], "count": 1}
Based on my memories, I know that **you like coffee**. That's the only preference I have recorded so far.

To give you a more complete answer about your likes, I'd need to learn more about you! Could you tell me about some of your other interests, hobbies, or preferences? For example:
- What foods or drinks do you enjoy?
- What hobbies or activities do you like?
- What kind of music, movies, or books do you prefer?
- Are there any particular topics or subjects you're interested in?

The more you share with me, the better I'll be able to remember and help you in the future!

👤 You: /exit
👋 Goodbye!
```
```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"os"
	"strconv"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/memory"
	memoryinmemory "trpc.group/trpc-go/trpc-agent-go/memory/inmemory"
	memorymysql "trpc.group/trpc-go/trpc-agent-go/memory/mysql"
	memorypostgres "trpc.group/trpc-go/trpc-agent-go/memory/postgres"
	memoryredis "trpc.group/trpc-go/trpc-agent-go/memory/redis"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/model/openai"
	"trpc.group/trpc-go/trpc-agent-go/runner"
	"trpc.group/trpc-go/trpc-agent-go/session/inmemory"
)

func main() {
	var (
		memType    = flag.String("memory", "inmemory", "Memory service type")
		streaming  = flag.Bool("streaming", true, "Enable streaming")
		softDelete = flag.Bool("soft-delete", false, "Enable soft delete")
		modelName  = flag.String("model", "deepseek-chat", "Model name")
	)
	flag.Parse()
	ctx := context.Background()

	// 1. Create memory service.
	memoryService, err := createMemoryService(*memType, *softDelete)
	if err != nil {
		log.Fatalf("Failed to create memory service: %v", err)
	}

	// 2. Create model.
	modelInstance := openai.New(*modelName)

	// 3. Create Agent.
	genConfig := model.GenerationConfig{
		MaxTokens:   intPtr(2000),
		Temperature: floatPtr(0.7),
		Stream:      *streaming,
	}
	llmAgent := llmagent.New("memory-assistant",
		llmagent.WithModel(modelInstance),
		llmagent.WithDescription(
			"A helpful AI assistant with memory capabilities. "+
				"I can remember important information about you and "+
				"recall it when needed.",
		),
		llmagent.WithGenerationConfig(genConfig),
		llmagent.WithTools(memoryService.Tools()),
	)

	// 4. Create Runner.
	sessionService := inmemory.NewSessionService()
	appRunner := runner.NewRunner("memory-chat", llmAgent,
		runner.WithSessionService(sessionService),
		runner.WithMemoryService(memoryService),
	)
	defer appRunner.Close()

	// 5. Run chat.
	log.Println("🧠 Starting memory-enabled chat...")
	// ... handle user input and responses
	_ = ctx
}

func createMemoryService(memType string, softDelete bool) (memory.Service, error) {
	switch memType {
	case "redis":
		redisAddr := os.Getenv("REDIS_ADDR")
		if redisAddr == "" {
			redisAddr = "localhost:6379"
		}
		return memoryredis.NewService(
			memoryredis.WithRedisClientURL(fmt.Sprintf("redis://%s", redisAddr)),
			memoryredis.WithToolEnabled(memory.DeleteToolName, false),
		)
	case "mysql":
		dsn := buildMySQLDSN()
		return memorymysql.NewService(
			memorymysql.WithMySQLClientDSN(dsn),
			memorymysql.WithSoftDelete(softDelete),
			memorymysql.WithToolEnabled(memory.DeleteToolName, false),
		)
	case "postgres":
		return memorypostgres.NewService(
			memorypostgres.WithHost(getEnv("PG_HOST", "localhost")),
			memorypostgres.WithPort(getEnvInt("PG_PORT", 5432)),
			memorypostgres.WithUser(getEnv("PG_USER", "postgres")),
			memorypostgres.WithPassword(getEnv("PG_PASSWORD", "")),
			memorypostgres.WithDatabase(getEnv("PG_DATABASE", "trpc-agent-go-pgmemory")),
			memorypostgres.WithSoftDelete(softDelete),
			memorypostgres.WithToolEnabled(memory.DeleteToolName, false),
		)
	default: // inmemory
		return memoryinmemory.NewMemoryService(
			memoryinmemory.WithToolEnabled(memory.DeleteToolName, false),
		), nil
	}
}

func buildMySQLDSN() string {
	host := getEnv("MYSQL_HOST", "localhost")
	port := getEnv("MYSQL_PORT", "3306")
	user := getEnv("MYSQL_USER", "root")
	password := getEnv("MYSQL_PASSWORD", "")
	database := getEnv("MYSQL_DATABASE", "trpc_agent_go")
	return fmt.Sprintf("%s:%s@tcp(%s:%s)/%s?parseTime=true&charset=utf8mb4",
		user, password, host, port, database)
}

func getEnv(key, defaultVal string) string {
	if val := os.Getenv(key); val != "" {
		return val
	}
	return defaultVal
}

func getEnvInt(key string, defaultVal int) int {
	if val := os.Getenv(key); val != "" {
		if n, err := strconv.Atoi(val); err == nil {
			return n
		}
	}
	return defaultVal
}

func intPtr(i int) *int           { return &i }
func floatPtr(f float64) *float64 { return &f }
```
```sql
CREATE TABLE memories (
    memory_id   TEXT PRIMARY KEY,
    app_name    TEXT NOT NULL,
    user_id     TEXT NOT NULL,
    memory_data JSONB NOT NULL,
    created_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    deleted_at  TIMESTAMP NULL DEFAULT NULL
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS memories_app_user ON memories (app_name, user_id);
CREATE INDEX IF NOT EXISTS memories_updated_at ON memories (updated_at DESC);
CREATE INDEX IF NOT EXISTS memories_deleted_at ON memories (deleted_at);
```
Resource cleanup: call the service's Close() method to release the database connection.
```sql
CREATE TABLE memories (
    memory_id      TEXT PRIMARY KEY,
    app_name       TEXT NOT NULL,
    user_id        TEXT NOT NULL,
    memory_content TEXT NOT NULL,
    topics         TEXT[],
    embedding      vector(1536),
    created_at     TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at     TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    deleted_at     TIMESTAMP NULL DEFAULT NULL
);

-- Indexes for performance
CREATE INDEX ON memories (app_name, user_id);
CREATE INDEX ON memories (updated_at DESC);
CREATE INDEX ON memories (deleted_at);
CREATE INDEX ON memories USING hnsw (embedding vector_cosine_ops);
```
Resource cleanup: call the service's Close() method to release the database connection.
```go
// Memory: persists across sessions.
memory.AddMemory(ctx, userKey, "User is a backend engineer", []string{"occupation"})

// Session: valid only within a session.
session.AddMessage(ctx, sessionKey, userMessage("What's the weather today?"))
session.AddMessage(ctx, sessionKey, agentMessage("It's sunny today"))

// New session: Memory retained, Session reset.
```
Memory ID Idempotency
Memory ID is generated from SHA256 hash of "content + sorted topics + appName + userID". Same content produces the same ID for the same user:
```go
// First add.
memory.AddMemory(ctx, userKey, "User likes programming", []string{"hobby"})
// Generated ID: abc123...

// Second add with the same content.
memory.AddMemory(ctx, userKey, "User likes programming", []string{"hobby"})
// Same ID: abc123..., overwrites the entry and refreshes updated_at.
```
Implications:
✅ Natural deduplication: Avoids redundant storage
✅ Idempotent operations: Repeated additions don't create multiple records
⚠️ Overwrite update: Cannot append same content (add timestamp or sequence number if append is needed)
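The scheme can be sketched in a few lines of plain Go. The hashing details below (the `|` field separator, exact byte layout) are assumptions for illustration; the point is that identical content, topics, appName, and userID always collapse to the same ID, so a re-add overwrites rather than appends.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// memoryID sketches the documented scheme: SHA256 over
// content + sorted topics + appName + userID. The separator is an
// assumption; only the idempotency property matters here.
func memoryID(content string, topics []string, appName, userID string) string {
	sorted := append([]string(nil), topics...)
	sort.Strings(sorted) // Topic order must not change the ID.
	h := sha256.Sum256([]byte(content + "|" + strings.Join(sorted, ",") + "|" + appName + "|" + userID))
	return hex.EncodeToString(h[:])
}

func main() {
	a := memoryID("User likes programming", []string{"hobby"}, "app", "user123")
	b := memoryID("User likes programming", []string{"hobby"}, "app", "user123")
	c := memoryID("User likes programming", []string{"hobby"}, "app", "user456")
	fmt.Println(a == b) // Same user, same content: same ID (overwrite).
	fmt.Println(a == c) // Different user: different ID (isolation).
}
```
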
Search Behavior Notes
Search behavior depends on the backend:
For inmemory / redis / mysql / postgres: SearchMemories uses token matching (not semantic search).
For pgvector: SearchMemories uses vector similarity search and requires an embedder.
Token matching details (non-pgvector backends):
English tokenization: lowercase → filter stopwords (a, the, is, etc.) → split by spaces
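A minimal sketch of this kind of keyword matching, with an abbreviated stopword list (the helper names are illustrative and the framework's actual scoring may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// stopwords is a tiny illustrative list; the framework's actual list is longer.
var stopwords = map[string]bool{"a": true, "an": true, "the": true, "is": true, "are": true}

// tokenize sketches the documented pipeline for non-pgvector backends:
// lowercase, split on spaces, drop stopwords.
func tokenize(s string) []string {
	var out []string
	for _, w := range strings.Fields(strings.ToLower(s)) {
		if !stopwords[w] {
			out = append(out, w)
		}
	}
	return out
}

// matches reports whether any query token appears among the memory's tokens,
// approximating keyword search (no semantic similarity involved).
func matches(memoryText, query string) bool {
	toks := map[string]bool{}
	for _, t := range tokenize(memoryText) {
		toks[t] = true
	}
	for _, q := range tokenize(query) {
		if toks[q] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(tokenize("The user is a backend engineer")) // [user backend engineer]
	fmt.Println(matches("User likes coffee", "what coffee does he like"))
}
```
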
```go
// ⚠️ Migrating from a soft-delete backend to one without soft-delete support:
// soft-deleted records will be lost!
// Example: migrating from MySQL (soft delete) to Redis (hard delete)
// requires manually handling soft-deleted records.
```
```go
// ✅ Complete error handling.
err := memoryService.AddMemory(ctx, userKey, content, topics)
if err != nil {
	if strings.Contains(err.Error(), "limit exceeded") {
		// Handle limit: clean old memories or reject.
		log.Warnf("Memory limit exceeded for user %s", userKey.UserID)
	} else {
		return fmt.Errorf("failed to add memory: %w", err)
	}
}
```
Checkers control when memory extraction should be triggered. By default, extraction happens on every conversation turn. Use checkers to optimize extraction frequency and reduce LLM costs.
Available Checkers
| Checker | Description | Example |
|---------|-------------|---------|
| CheckMessageThreshold | Triggers when accumulated messages exceed threshold | CheckMessageThreshold(5) - when messages > 5 |
| CheckTimeInterval | Triggers when time since last extraction exceeds interval | CheckTimeInterval(3*time.Minute) - every 3 minutes |
```go
// Example 1: Extract when messages > 5 OR every 3 minutes (OR logic).
memExtractor := extractor.NewExtractor(extractorModel,
	extractor.WithCheckersAny(
		extractor.CheckMessageThreshold(5),
		extractor.CheckTimeInterval(3*time.Minute),
	),
)

// Example 2: Extract when messages > 10 AND every 5 minutes (AND logic).
memExtractor := extractor.NewExtractor(extractorModel,
	extractor.WithChecker(extractor.CheckMessageThreshold(10)),
	extractor.WithChecker(extractor.CheckTimeInterval(5*time.Minute)),
)
```
ExtractionContext
The ExtractionContext provides information for checker decisions:
```go
type ExtractionContext struct {
	UserKey       memory.UserKey  // User identifier.
	Messages      []model.Message // Accumulated messages since last extraction.
	LastExtractAt *time.Time      // Last extraction timestamp, nil if never extracted.
}
```
Note: Messages contains all accumulated messages since the last successful extraction. When a checker returns false, messages are accumulated and will be included in the next extraction. This ensures no conversation context is lost when using turn-based or time-based checkers.
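The accumulation behavior can be sketched with a toy checker loop. The types and names below are illustrative stand-ins for the framework's checker API, not the real implementation: when the checker returns false, turns stay in the pending buffer; when it fires, the whole buffer is extracted and reset, so no turn is dropped.

```go
package main

import (
	"fmt"
	"time"
)

// extractionContext mirrors the documented struct shape (illustrative copy).
type extractionContext struct {
	Messages      []string
	LastExtractAt *time.Time
}

// checkMessageThreshold sketches CheckMessageThreshold: fire once more
// than n messages have accumulated since the last extraction.
func checkMessageThreshold(n int) func(extractionContext) bool {
	return func(c extractionContext) bool { return len(c.Messages) > n }
}

// accumulate models the documented behavior: while the checker returns
// false, messages are kept and carried into the next check, so every
// conversation turn ends up in exactly one extraction batch.
func accumulate(turns []string, check func(extractionContext) bool) (extracted [][]string) {
	var pending []string
	for _, t := range turns {
		pending = append(pending, t)
		if check(extractionContext{Messages: pending}) {
			extracted = append(extracted, pending) // Extract everything pending.
			pending = nil                          // Reset accumulation.
		}
	}
	return extracted
}

func main() {
	batches := accumulate([]string{"m1", "m2", "m3", "m4", "m5", "m6"}, checkMessageThreshold(2))
	fmt.Println(len(batches), len(batches[0])) // Two batches of three messages each.
}
```
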
Tool Control
In auto extraction mode, WithToolEnabled controls all 6 tools, but they serve different purposes:
Front-end Tools (exposed via Tools() for agent to call):
| Tool | Default | Description |
|------|---------|-------------|
| memory_search | ✅ On | Search memories by query |
| memory_load | ❌ Off | Load all or recent N memories |
Back-end Tools (used by extractor in background, not exposed to agent):
```go
memoryService := memoryinmemory.NewMemoryService(
	memoryinmemory.WithExtractor(memExtractor),
	// Front-end: enable memory_load for the agent to call.
	memoryinmemory.WithToolEnabled(memory.LoadToolName, true),
	// Back-end: disable memory_delete so the extractor cannot delete.
	memoryinmemory.WithToolEnabled(memory.DeleteToolName, false),
	// Back-end: enable memory_clear for the extractor (use with caution).
	memoryinmemory.WithToolEnabled(memory.ClearToolName, true),
)
```
Note: WithToolEnabled can be called before or after WithExtractor - the order does not matter.
Comparison: Agentic Mode vs Auto Mode
| Tool | Agentic Mode (no extractor) | Auto Mode (with extractor) |
|------|-----------------------------|-----------------------------|
| memory_add | ✅ Agent calls via Tools() | ✅ Extractor uses in background |
| memory_update | ✅ Agent calls via Tools() | ✅ Extractor uses in background |
| memory_search | ✅ Agent calls via Tools() | ✅ Agent calls via Tools() |
| memory_load | ✅ Agent calls via Tools() | ⚙️ Agent calls via Tools() if enabled |
| memory_delete | ⚙️ Agent calls via Tools() if enabled | ✅ Extractor uses in background |
| memory_clear | ⚙️ Agent calls via Tools() if enabled | ⚙️ Extractor uses in background if enabled |
Memory Preloading
Both modes support preloading memories into the system prompt:
```go
llmAgent := llmagent.New("assistant",
	llmagent.WithModel(model),
	llmagent.WithTools(memoryService.Tools()),
	// Preload options:
	// llmagent.WithPreloadMemory(0),  // Disable preloading (default).
	// llmagent.WithPreloadMemory(10), // Load 10 most recent.
	// llmagent.WithPreloadMemory(-1), // Load all.
	// ⚠️ WARNING: Loading all memories may significantly increase token
	// usage and API costs, especially for users with many stored memories.
	// Consider using a positive limit for production use.
	llmagent.WithPreloadMemory(10), // Load 10 most recent (recommended for production).
)
```
When preloading is enabled, memories are automatically injected into the
system prompt, giving the Agent context about the user without explicit
tool calls.
⚠️ Important Note: Setting the limit to -1 loads all memories, which may
significantly increase token usage and API costs. Preloading is disabled by
default (0); we recommend a positive limit (e.g., 10-50) to balance
performance and cost.
Hybrid Approach
You can combine both approaches:
Use Auto mode for passive learning (background extraction)