Anchor-and-Flux Memory for Virtual Humans

Fontys University of Applied Sciences, MindLabs
Research Group Interaction Design (IxD)
Tilburg, The Netherlands

Introduction

Virtual human platforms often rely on static trait sliders or perfect recall systems, leading to brittle, robotic experiences. Our research proposes a psychologically grounded alternative: let identity emerge from stories over time. We model memory as two layers. Anchors store factual event memories; flux produces living retellings that adapt to context, mood, and perceived age while staying faithful to the anchor. This proof-of-concept demonstrates the approach within a Kafka-driven microservices architecture using Qdrant, BGE-M3 embeddings, and LLM-based retelling with Ebbinghaus forgetting dynamics.
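The two layers can be sketched in a few lines of Python. This is an illustrative structure, not the project's actual code: the `Anchor` fields and the `flux_prompt` helper are our own naming, and the prompt wording is a hypothetical example of how a retelling request to the LLM might look.

```python
from dataclasses import dataclass, field


@dataclass
class Anchor:
    """Durable factual event memory (hypothetical structure)."""
    text: str        # the factual event, stored verbatim
    salience: float  # importance weight, e.g. 0.3 (trivial) .. 2.5 (critical)
    timestamp: float # epoch seconds when the event occurred
    embedding: list[float] = field(default_factory=list)  # e.g. a BGE-M3 vector


def flux_prompt(anchor: Anchor, now: float, mood: str) -> str:
    """Build a retelling prompt: the LLM adapts tone and detail to the
    memory's perceived age and the current mood, while the anchored
    facts are passed along verbatim as constraints."""
    age_days = (now - anchor.timestamp) / 86400
    return (
        "Retell the following memory as the character would today.\n"
        f"Facts (must stay true): {anchor.text}\n"
        f"Memory age: {age_days:.0f} days. Current mood: {mood}.\n"
        "Older memories lose incidental detail; keep the core facts intact."
    )
```

In this sketch the anchor never changes; only the prompt context (age, mood) varies between retellings, which is what produces the adaptive flux.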

Abstract

We present a conversational AI memory system that combines durable anchors with adaptive flux. Anchors are facts stored as vectors with salience and timestamps. Flux is generated by an LLM that retells anchors according to perceived age, salience, and conversational prompt. Temporal realism is achieved by integrating the Ebbinghaus forgetting curve (λ = 0.002 per day) with multi-factor activation (similarity × decay × salience). Automated experiments confirm that recent relevant anchors dominate recall, while high-salience older events remain accessible. The system runs on a GPU-native stack (RTX 4090/A6000 with Ollama) and falls back to hosted API providers (Groq, Gemini, OpenAI) when GPU resources are unavailable.
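The activation score can be written out as a minimal sketch. The function name and signature are ours; λ is the per-day decay constant reported in the Results, and the three factors are multiplied exactly as described above.

```python
import math

LAMBDA = 0.002  # Ebbinghaus decay constant, per day


def activation(similarity: float, age_days: float, salience: float) -> float:
    """Multi-factor activation: cosine similarity to the query,
    exponential temporal decay, and the anchor's salience weight."""
    decay = math.exp(-LAMBDA * age_days)  # Ebbinghaus retention factor
    return similarity * decay * salience
```

For example, a perfectly matching anchor (similarity 1.0, salience 1.0) from 30 days ago scores exp(−0.06) ≈ 0.942, so recency reshuffles otherwise equally relevant memories.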

Results

  • Temporal Decay: λ = 0.002 per day (retention = exp(−λ·t)) yields human-like forgetting: 99.8% retention after 1 day, 94.2% after 30 days, 48.2% after 1 year.
  • Salience × Decay: A 180-day critical outage (salience 2.5) outranks a 2-day trivial chat (salience 0.3), mirroring flashbulb memory.
  • Anchor-to-Flux Retelling: LLM outputs preserve factual anchors while naturally shifting detail and tone as memories age.
  • Scalability: Kafka + Qdrant handle 100k anchors and multi-agent queries with <500ms latency in preliminary tests.
  • Deployment: GPU-first (Ollama + Phi4/Qwen3 with BGE-M3) with API fallback (Groq/Gemini/OpenAI) to ensure continuity.
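The first two results can be reproduced directly from the stated parameters. The sketch below uses the same λ and salience values as the bullets; function names are ours, and similarity is held fixed so only decay and salience drive the ranking.

```python
import math

LAMBDA = 0.002  # per-day decay constant


def retention(age_days: float) -> float:
    """Ebbinghaus retention: fraction of memory strength remaining."""
    return math.exp(-LAMBDA * age_days)


def score(age_days: float, salience: float) -> float:
    """Decay-weighted salience used for ranking (similarity held fixed)."""
    return salience * retention(age_days)


# Temporal decay checkpoints from the Results section
for days in (1, 30, 365):
    print(f"{days:>3} days: {retention(days):.1%}")
# → 1 days: 99.8%, 30 days: 94.2%, 365 days: 48.2%

# Flashbulb-style ranking: old critical outage vs. recent trivial chat
outage = score(180, 2.5)  # 180-day critical outage, salience 2.5
chat = score(2, 0.3)      # 2-day trivial chat, salience 0.3
print(outage > chat)      # → True: the high-salience event still wins
```

The outage scores 2.5 × exp(−0.36) ≈ 1.74 against the chat's 0.3 × exp(−0.004) ≈ 0.30, matching the flashbulb-memory result above.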

Discussion

Anchors plus flux deliver controllable yet expressive memory that users perceive as coherent and continuous. Key limitations are that character consistency has so far been validated only qualitatively and that security is still at a foundational level. Our upcoming 12-week research sprint targets multi-agent sharing, personality-aware flux, audit trails, and large-scale benchmarks. Future directions include emotional anchors, cross-lingual retelling, and memory reconsolidation effects.

Conclusion

The Anchor-and-Flux model offers a psychologically grounded path to believable virtual humans. By separating factual anchors from adaptive flux, we model how people remember, forget, and retell experiences. The proof-of-concept validates the approach technically and experimentally. Next steps focus on production integration within the MindLabs virtual human platform, scaling to multi-agent scenarios, and extending the research for publication.

BibTeX

If you reference this work, please cite the research proposal below.

@misc{crombach2025anchorflux,
  author       = {Crombach, Coen and van Bokhorst, Leon},
  title        = {Anchor-and-Flux Memory for Virtual Humans},
  howpublished = {Fontys University of Applied Sciences, MindLabs},
  year         = {2025},
  url          = {https://leonvanbokhorst.github.io/convai-narrative-memory-poc}
}