# Phase 5: Semantic Memory & Agent Personality
## Overview
This phase transforms the agent from a stateless conversation handler into a persistent entity with long-term memory and personality. Semantic memory uses vector embeddings for context-aware retrieval, while structured memory models organize facts, entities, and daily logs. Agent personality files define communication style and behavioral traits.
What this enables:
- Long-term memory that persists across sessions
- Context-aware retrieval of relevant facts during conversations
- A configurable agent personality and communication style
## Dependencies
Phase 1: Gateway & Channel Foundation:
- `Session` for conversation context
- `MemoryBackend` for storage
- `ChatExecutor`

Existing runtime infrastructure:
- `MemoryBackend` from `runtime/src/memory/types.ts`
- `InMemoryBackend`, `SqliteBackend`, `RedisBackend` from `runtime/src/memory/`
- `LLMProvider` from `runtime/src/llm/types.ts`
- `runtime/src/utils/lazy-import.ts`

## Issue Dependency Graph
## Implementation Order
- #1079 – Embedding generation (M)
  - Multi-provider interface (OpenAI, Ollama)
- #1080 – Structured memory model (M)
  - Daily logs, curated facts, entity extraction
- #1082 – Vector memory store (L)
  - Cosine similarity + hybrid BM25 search
- #1083 – Agent personality file templates (S)
  - Template files and loader
- #1086 – Automatic memory ingestion (M)
  - Per-turn and session-end hooks
- #1087 – Context-aware retrieval (M)
  - Semantic search in prompt assembly
Rationale: embeddings → structured model → vector store → personality → ingestion → retrieval. Build the foundation first, then layer on automation and smart retrieval.
## Issue Details
### 5.1: Embedding generation (multi-provider interface) (#1079)
Goal: Generate vector embeddings for memory entries.
Files to create:
- `gateway/src/memory/embeddings/types.ts` – `EmbeddingProvider` interface
- `gateway/src/memory/embeddings/openai.ts` – `OpenAIEmbeddingProvider`
- `gateway/src/memory/embeddings/ollama.ts` – `OllamaEmbeddingProvider`
- `gateway/src/memory/embeddings/index.ts`
- `gateway/src/memory/embeddings/openai.test.ts`
- `gateway/src/memory/embeddings/ollama.test.ts`

Files to modify:
- `gateway/package.json` – add `openai` and `ollama` as optional dependencies
- `gateway/src/memory/index.ts` – export embedding types

Integration points:
- `ensureLazyModule()` for provider SDKs
- OpenAI: `text-embedding-3-small` model (1536 dimensions)
- Ollama: `nomic-embed-text` model (768 dimensions)

Patterns to follow:
- `runtime/src/llm/providers/`
- `runtime/src/utils/lazy-import.ts`
- `runtime/src/types/errors.ts`

Key interfaces:
```typescript
interface EmbeddingProvider {
  readonly name: string;
  readonly dimensions: number;
  generate(text: string): Promise<number[]>;
  generateBatch(texts: string[]): Promise<number[][]>;
}

class OpenAIEmbeddingProvider implements EmbeddingProvider {
  readonly name = 'openai';
  readonly dimensions = 1536;
  // implementation
}

class OllamaEmbeddingProvider implements EmbeddingProvider {
  readonly name = 'ollama';
  readonly dimensions = 768;
  // implementation
}

interface EmbeddingConfig {
  provider: 'openai' | 'ollama';
  apiKey?: string;
  baseUrl?: string;
  model?: string;
}
```

Testing strategy:
Estimated scope: M (500-700 lines)
### 5.2: Vector memory store (cosine similarity + hybrid BM25) (#1082)
Goal: Storage and retrieval of vector embeddings with hybrid search.
Files to create:
- `gateway/src/memory/vector/backend.ts` – `VectorMemoryBackend` class
- `gateway/src/memory/vector/types.ts` – vector types, search options
- `gateway/src/memory/vector/search.ts` – similarity search algorithms
- `gateway/src/memory/vector/bm25.ts` – BM25 keyword scoring
- `gateway/src/memory/vector/hybrid.ts` – hybrid search combiner
- `gateway/src/memory/vector/index.ts`
- `gateway/src/memory/vector/backend.test.ts`
- `gateway/src/memory/vector/search.test.ts`
- `gateway/src/memory/vector/hybrid.test.ts`

Files to modify:
- `runtime/src/memory/types.ts` – extend `MemoryBackend` interface (or create a new interface)
- `gateway/src/memory/index.ts` – export vector types

Integration points:
- Extends `MemoryBackend` with vector operations

Patterns to follow:
- `runtime/src/memory/`

Key interfaces:
```typescript
interface VectorMemoryBackend extends MemoryBackend {
  storeWithEmbedding(
    sessionId: string,
    entry: MemoryEntry,
    embedding: number[]
  ): Promise<void>;
  searchSimilar(
    sessionId: string,
    queryEmbedding: number[],
    options?: VectorSearchOptions
  ): Promise<ScoredMemoryEntry[]>;
  searchHybrid(
    sessionId: string,
    queryText: string,
    queryEmbedding: number[],
    options?: HybridSearchOptions
  ): Promise<ScoredMemoryEntry[]>;
}

interface VectorSearchOptions {
  limit?: number;
  threshold?: number;
  includeMetadata?: boolean;
}

interface HybridSearchOptions extends VectorSearchOptions {
  vectorWeight?: number;
  keywordWeight?: number;
}

interface ScoredMemoryEntry {
  entry: MemoryEntry;
  score: number;
  vectorScore?: number;
  keywordScore?: number;
}

interface BM25Scorer {
  score(query: string, document: string): number;
}

interface HybridSearch {
  combine(
    vectorResults: ScoredMemoryEntry[],
    keywordResults: ScoredMemoryEntry[],
    weights: { vector: number; keyword: number }
  ): ScoredMemoryEntry[];
}
```

Testing strategy:
Estimated scope: L (900-1200 lines)
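For reference, the two scoring primitives this issue builds on can be sketched directly. This is a minimal illustration, not the planned implementation: `cosineSimilarity` and `combineScores` are assumed names, and `Scored` is a trimmed stand-in for `ScoredMemoryEntry`:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|). Assumes equal-length
// vectors; returns 0 for a zero vector to avoid division by zero.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

interface Scored { id: string; score: number }

// Weighted merge of vector and keyword result lists: entries found by
// both searches get a blended score; entries found by only one keep
// their single weighted score. Result is sorted best-first.
function combineScores(
  vectorResults: Scored[],
  keywordResults: Scored[],
  weights: { vector: number; keyword: number }
): Scored[] {
  const merged = new Map<string, number>();
  for (const r of vectorResults) {
    merged.set(r.id, (merged.get(r.id) ?? 0) + weights.vector * r.score);
  }
  for (const r of keywordResults) {
    merged.set(r.id, (merged.get(r.id) ?? 0) + weights.keyword * r.score);
  }
  return [...merged.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((x, y) => y.score - x.score);
}
```

One design note: this assumes both score lists are already normalized to a comparable range; BM25 scores are unbounded, so a real `hybrid.ts` would need to normalize them before weighting.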
### 5.3: Structured memory model (daily logs + curated facts + entities) (#1080)
Goal: Three-tier memory organization for different data types.
Files to create:
- `gateway/src/memory/structured/model.ts` – `StructuredMemoryModel` class
- `gateway/src/memory/structured/types.ts` – memory tiers, entity types
- `gateway/src/memory/structured/extractor.ts` – entity extraction
- `gateway/src/memory/structured/curator.ts` – fact curation
- `gateway/src/memory/structured/index.ts`
- `gateway/src/memory/structured/model.test.ts`
- `gateway/src/memory/structured/extractor.test.ts`

Files to modify:
- `gateway/src/memory/index.ts` – export structured types

Integration points:
1. Daily logs: Raw conversation turns (expires after 30 days)
2. Curated facts: Important statements extracted by LLM (permanent)
3. Entities: People, places, things mentioned (permanent)
Patterns to follow:
Key interfaces:
```typescript
interface StructuredMemoryModel {
  addDailyLog(sessionId: string, entry: ConversationTurn): Promise<void>;
  extractEntities(text: string): Promise<Entity[]>;
  curateFacts(sessionId: string): Promise<CuratedFact[]>;
  search(query: string, tiers?: MemoryTier[]): Promise<MemorySearchResult[]>;
}

interface ConversationTurn {
  role: 'user' | 'assistant';
  content: string;
  timestamp: number;
  sessionId: string;
}

interface Entity {
  type: 'person' | 'place' | 'thing' | 'concept';
  name: string;
  description?: string;
  firstSeen: number;
  lastSeen: number;
  mentions: number;
}

interface CuratedFact {
  statement: string;
  source: string;
  confidence: number;
  createdAt: number;
}

enum MemoryTier {
  DailyLogs = 'daily-logs',
  Facts = 'facts',
  Entities = 'entities'
}

interface MemorySearchResult {
  tier: MemoryTier;
  content: string;
  score: number;
  metadata: Record<string, unknown>;
}
```

Testing strategy:
Estimated scope: M (600-800 lines)
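The 30-day expiry rule for the daily-logs tier can be isolated as a pure pruning helper, which keeps it trivially testable. A sketch (the `pruneDailyLogs` name and `DailyLogEntry` shape are assumptions for illustration):

```typescript
interface DailyLogEntry {
  content: string;
  timestamp: number; // epoch milliseconds
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Keeps only entries younger than the retention window. Curated facts
// and entities are permanent tiers, so they never pass through this.
function pruneDailyLogs(
  entries: DailyLogEntry[],
  now: number,
  retentionMs: number = THIRTY_DAYS_MS
): DailyLogEntry[] {
  return entries.filter((e) => now - e.timestamp < retentionMs);
}
```

Taking `now` as a parameter rather than calling `Date.now()` internally makes the expiry boundary deterministic in tests.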
### 5.4: Automatic memory ingestion (per-turn + session-end) (#1086)
Goal: Automatically capture conversation turns and extract entities/facts.
Files to create:
- `gateway/src/memory/ingestion/ingester.ts` – `MemoryIngester` class
- `gateway/src/memory/ingestion/hooks.ts` – ingestion hook handlers
- `gateway/src/memory/ingestion/types.ts` – ingestion config
- `gateway/src/memory/ingestion/index.ts`
- `gateway/src/memory/ingestion/ingester.test.ts`

Files to modify:
- `gateway/src/hooks/builtin.ts` – add ingestion hooks
- `gateway/src/executor/chat-executor.ts` – trigger ingestion after LLM response
- `gateway/src/session/manager.ts` – trigger fact curation on session end

Integration points:
- `message:after` – store daily log
- `session:end` – curate facts
- `StructuredMemoryModel` for storage
- `EmbeddingProvider` for vectorization

Patterns to follow:
- `gateway/src/hooks/types.ts`

Key interfaces:
```typescript
interface MemoryIngester {
  ingestTurn(sessionId: string, turn: ConversationTurn): Promise<void>;
  ingestSessionEnd(sessionId: string): Promise<void>;
}

interface IngestionConfig {
  enableDailyLogs: boolean;
  enableEntityExtraction: boolean;
  enableFactCuration: boolean;
  backgroundProcessing: boolean;
}

class IngestionHookHandler implements HookHandler {
  async handle(event: HookEvent): Promise<void>;
}
```

Testing strategy:
Estimated scope: M (400-600 lines)
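The hook wiring amounts to dispatching on the two event types. A sketch under assumptions: the `HookEvent` shape used here (with `type`, `sessionId`, and optional `turn` fields) is invented for illustration, since the real shape lives in `gateway/src/hooks/types.ts`:

```typescript
interface ConversationTurn {
  role: 'user' | 'assistant';
  content: string;
  timestamp: number;
  sessionId: string;
}

interface MemoryIngester {
  ingestTurn(sessionId: string, turn: ConversationTurn): Promise<void>;
  ingestSessionEnd(sessionId: string): Promise<void>;
}

// Assumed minimal event shape, for illustration only.
interface HookEvent {
  type: 'message:after' | 'session:end';
  sessionId: string;
  turn?: ConversationTurn;
}

class IngestionHookHandler {
  constructor(private ingester: MemoryIngester) {}

  // Routes hook events to the ingester: per-turn logging on
  // message:after, fact curation on session:end.
  async handle(event: HookEvent): Promise<void> {
    switch (event.type) {
      case 'message:after':
        if (event.turn) {
          await this.ingester.ingestTurn(event.sessionId, event.turn);
        }
        break;
      case 'session:end':
        await this.ingester.ingestSessionEnd(event.sessionId);
        break;
    }
  }
}
```

When `backgroundProcessing` is enabled, `handle` could enqueue the work instead of awaiting it, so ingestion never blocks the response path.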
### 5.5: Context-aware retrieval (semantic search in prompt assembly) (#1087)
Goal: Retrieve relevant memories before each LLM call.
Files to create:
- `gateway/src/memory/retrieval/retriever.ts` – `MemoryRetriever` class
- `gateway/src/memory/retrieval/types.ts` – retrieval config, result types
- `gateway/src/memory/retrieval/ranker.ts` – result ranking/deduplication
- `gateway/src/memory/retrieval/formatter.ts` – memory formatting for prompts
- `gateway/src/memory/retrieval/index.ts`
- `gateway/src/memory/retrieval/retriever.test.ts`

Files to modify:
- `gateway/src/executor/chat-executor.ts` – run retrieval before LLM call
- `gateway/src/memory/index.ts` – export retrieval types

Integration points:
- `<memory>` blocks in the system prompt

Patterns to follow:
Key interfaces:
```typescript
interface MemoryRetriever {
  retrieve(query: string, sessionId: string): Promise<RetrievalResult>;
}

interface RetrievalConfig {
  enabled: boolean;
  maxResults?: number;
  maxTokens?: number;
  includeEntities?: boolean;
  includeFacts?: boolean;
  includeLogs?: boolean;
  hybridWeights?: { vector: number; keyword: number };
}

interface RetrievalResult {
  memories: MemorySearchResult[];
  formattedPrompt: string;
  tokenCount: number;
}

interface MemoryRanker {
  rank(results: MemorySearchResult[]): MemorySearchResult[];
  deduplicate(results: MemorySearchResult[]): MemorySearchResult[];
}

interface MemoryFormatter {
  format(results: MemorySearchResult[]): string;
}
```

Testing strategy:
- Formatted output wraps memories in tier-tagged blocks (`<memory tier="...">content</memory>`)

Estimated scope: M (500-700 lines)
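A minimal formatter matching the `<memory tier="...">` convention might look like the following sketch. The `formatMemories` name is an assumption, as is the crude 4-characters-per-token estimate; a real `formatter.ts` would use the model's tokenizer to enforce `maxTokens`:

```typescript
interface MemorySearchResult {
  tier: string;
  content: string;
  score: number;
}

// Rough token estimate: ~4 characters per token for English text.
// Assumption for illustration only.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Groups results by tier, wraps each tier in a <memory> block, and
// stops adding results once the token budget is exhausted. Assumes
// results arrive already ranked best-first.
function formatMemories(results: MemorySearchResult[], maxTokens: number): string {
  const byTier = new Map<string, string[]>();
  let used = 0;
  for (const r of results) {
    const cost = estimateTokens(r.content);
    if (used + cost > maxTokens) break;
    used += cost;
    const bucket = byTier.get(r.tier) ?? [];
    bucket.push(r.content);
    byTier.set(r.tier, bucket);
  }
  return [...byTier.entries()]
    .map(([tier, lines]) => `<memory tier="${tier}">\n${lines.join('\n')}\n</memory>`)
    .join('\n');
}
```

Because the loop respects ranking order, dropping results under budget pressure discards the lowest-scored memories first.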
### 5.6: Agent personality file templates and loading (#1083)
Goal: Define agent personality via configuration files.
Files to create:
- `gateway/src/personality/types.ts` – `PersonalityConfig`, `Trait` types
- `gateway/src/personality/loader.ts` – personality file loader
- `gateway/src/personality/formatter.ts` – personality prompt formatter
- `gateway/src/personality/index.ts`
- `gateway/src/personality/loader.test.ts`
- `examples/personalities/default.json` – default personality
- `examples/personalities/professional.json` – professional tone
- `examples/personalities/casual.json` – casual tone
- `examples/personalities/creative.json` – creative tone

Files to modify:
- `gateway/src/executor/chat-executor.ts` – inject personality into system prompt
- `gateway/src/gateway.ts` – load personality on startup

Integration points:
- `~/.agenc/personality.json` or custom path
- Communication style (formal, casual, technical)
- Tone (friendly, professional, playful)
- Behavioral traits (proactive, cautious, curious)
- Response preferences (concise, detailed, step-by-step)
Patterns to follow:
Key interfaces:
```typescript
interface PersonalityConfig {
  name: string;
  description: string;
  style: CommunicationStyle;
  tone: Tone[];
  traits: Trait[];
  preferences: ResponsePreferences;
}

enum CommunicationStyle {
  Formal = 'formal',
  Casual = 'casual',
  Technical = 'technical',
  Creative = 'creative'
}

enum Tone {
  Friendly = 'friendly',
  Professional = 'professional',
  Playful = 'playful',
  Empathetic = 'empathetic',
  Direct = 'direct'
}

interface Trait {
  name: string;
  description: string;
  intensity: number;
}

interface ResponsePreferences {
  length: 'concise' | 'balanced' | 'detailed';
  structure: 'narrative' | 'bulleted' | 'step-by-step';
  examples: boolean;
  codeBlocks: boolean;
}

interface PersonalityLoader {
  load(path: string): Promise<PersonalityConfig>;
}

interface PersonalityFormatter {
  format(config: PersonalityConfig): string;
}
```

Testing strategy:
Estimated scope: S (300-400 lines + 4 template files)
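One way the formatter could render a `PersonalityConfig` into a system-prompt fragment is sketched below. The exact prose template is an assumption (only the config shape comes from the plan), and the types are trimmed to the fields the sketch uses:

```typescript
interface Trait { name: string; description: string; intensity: number }

interface PersonalityConfig {
  name: string;
  description: string;
  style: string;
  tone: string[];
  traits: Trait[];
}

// Renders the structured personality config into plain prompt text.
// Traits are sorted by intensity so the strongest come first.
function formatPersonality(config: PersonalityConfig): string {
  const traits = [...config.traits]
    .sort((a, b) => b.intensity - a.intensity)
    .map((t) => `- ${t.name} (${t.intensity}/10): ${t.description}`)
    .join('\n');
  return [
    `You are ${config.name}: ${config.description}`,
    `Communication style: ${config.style}. Tone: ${config.tone.join(', ')}.`,
    `Behavioral traits:`,
    traits,
  ].join('\n');
}
```

Sorting by intensity is a design choice: models tend to weight earlier instructions more heavily, so the strongest traits lead.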
## Integration Checklist
After completing all issues:
- Retrieved memories are injected into system prompts as `<memory>` blocks

## Configuration Example
```json
{
  "memory": {
    "embeddings": {
      "provider": "openai",
      "apiKey": "sk-...",
      "model": "text-embedding-3-small"
    },
    "backend": "sqlite",
    "ingestion": {
      "enableDailyLogs": true,
      "enableEntityExtraction": true,
      "enableFactCuration": true,
      "backgroundProcessing": true
    },
    "retrieval": {
      "enabled": true,
      "maxResults": 10,
      "maxTokens": 2000,
      "includeEntities": true,
      "includeFacts": true,
      "includeLogs": true,
      "hybridWeights": {
        "vector": 0.7,
        "keyword": 0.3
      }
    }
  },
  "personality": {
    "path": "~/.agenc/personality.json"
  }
}
```

## Example Personality Config
```json
{
  "name": "Professional Assistant",
  "description": "A professional, detail-oriented assistant",
  "style": "formal",
  "tone": ["professional", "empathetic", "direct"],
  "traits": [
    {
      "name": "proactive",
      "description": "Anticipates needs and suggests next steps",
      "intensity": 8
    },
    {
      "name": "thorough",
      "description": "Provides complete, detailed responses",
      "intensity": 9
    },
    {
      "name": "cautious",
      "description": "Asks for confirmation before risky actions",
      "intensity": 7
    }
  ],
  "preferences": {
    "length": "detailed",
    "structure": "step-by-step",
    "examples": true,
    "codeBlocks": true
  }
}
```

## Memory Prompt Format
```xml
<memory tier="facts">
User prefers TypeScript over JavaScript.
User works on AgenC protocol development.
User's timezone is UTC-8.
</memory>

<memory tier="entities">
Entity: AgenC (project) - Solana-based AI agent coordination protocol
Entity: Tetsuo (person) - Project maintainer
</memory>

<memory tier="daily-logs">
[2026-02-14 10:23] User asked about implementing gateway architecture
[2026-02-14 10:45] Discussed Phase 1 implementation order
</memory>
```