Virtual Human Memory

Fontys University of Applied Sciences, MindLabs
Research Group Interaction Design (IxD)
Eindhoven, The Netherlands

Introduction

Current virtual human memory systems often feel static and brittle. Our research introduces a dynamic, psychologically grounded alternative in which a virtual human's identity emerges from the stories it tells over time. We present the Virtual Human Memory (VHM) project, a distributed, multi-agent architecture that models long-term memory as a living, evolving process. The system comprises three core services: an Indexer that ingests new memories, a Resonance Engine that calculates their emotional and contextual significance, and a Reteller that weaves these memories into coherent, dynamic narratives.
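As an illustrative sketch of how the three services might share data, a memory event could be modeled as a small record passed over the bus. The field names, topic names, and schema below are hypothetical and not the project's actual wire format:

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class MemoryAnchor:
    """A single stored memory event (illustrative schema, not the VHM one)."""
    text: str                     # raw description of the event
    salience: float = 1.0         # subjective importance at encoding time
    timestamp: float = field(default_factory=time.time)
    anchor_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# Hypothetical message-bus topics connecting the three services:
#   "memory.events"  -> consumed by the Indexer, which embeds and stores anchors
#   "memory.queries" -> consumed by the Resonance Engine for activation scoring
#   "memory.retell"  -> consumed by the Reteller, which prompts the LLM

anchor = MemoryAnchor(
    text="First conversation with the user about gardening",
    salience=1.8,
)
print(anchor.anchor_id)  # a fresh UUID string identifying this anchor
```

Keeping the anchor record this small is what lets each service evolve independently: the Indexer only needs the text and metadata, while downstream services work from the stored representation.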

Abstract

We present a multi-agent system for long-term virtual human memory. The architecture is composed of loosely coupled microservices communicating over a message bus (e.g., Kafka). The Indexer receives new events and stores them as memory 'anchors' in a vector database, enriched with metadata like salience and timestamps. The Resonance Engine continuously evaluates incoming stimuli against the memory store, calculating an activation score based on similarity, salience, and temporal decay, modeled on the Ebbinghaus forgetting curve. Finally, the Reteller service uses the most highly activated memories to generate adaptive, context-aware narratives, or 'flux,' via a Large Language Model (LLM). This separation of concerns allows for a scalable, robust, and psychologically plausible memory system that enables virtual humans to develop a unique and evolving identity.
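The activation score described above can be sketched as a product of its three factors. The exact combining rule and function names here are assumptions for illustration; the abstract only states that similarity, salience, and exponential decay all contribute:

```python
import math

DECAY_RATE = 0.002  # per-day decay constant used in the experiments

def activation(similarity: float, salience: float, age_days: float,
               decay_rate: float = DECAY_RATE) -> float:
    """Combine the three factors the Resonance Engine weighs:
    semantic similarity to the stimulus, encoded salience, and
    Ebbinghaus-style exponential temporal decay."""
    return similarity * salience * math.exp(-decay_rate * age_days)

# A recent but trivial memory vs. an old but highly salient one,
# at equal similarity to the current stimulus:
recent = activation(similarity=0.7, salience=0.3, age_days=2)
old = activation(similarity=0.7, salience=2.5, age_days=180)
print(f"recent={recent:.3f}  old={old:.3f}")  # the salient old memory wins
```

Under this multiplicative rule, a sufficiently salient old memory can outrank a fresh but trivial one, which is the behavior the Results section reports.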

Results

  • Temporal Decay: λ = 0.002 yields human-like forgetting: 99.8% retention after 1 day, 94.2% after 30 days, 48.2% after 1 year.
  • Salience × Decay: A 180-day critical outage (salience 2.5) outranks a 2-day trivial chat (salience 0.3), mirroring flashbulb memory.
  • Anchor-to-Flux Retelling: LLM outputs preserve factual anchors while naturally shifting detail and tone as memories age.
  • Scalability: Kafka + Qdrant handle 100k anchors and multi-agent queries with <500ms latency in preliminary tests.
  • Deployment: GPU-first (Ollama + Phi4/Qwen3 with BGE-M3) with API fallback (Groq/Gemini/OpenAI) to ensure continuity.
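The retention figures in the first bullet follow directly from exponential decay, R(t) = e^(-λt) with λ = 0.002 per day. A quick check reproduces them:

```python
import math

LAMBDA = 0.002  # per-day decay constant from the Results above

def retention(days: float) -> float:
    """Fraction of a memory's activation remaining after `days` days."""
    return math.exp(-LAMBDA * days)

for days in (1, 30, 365):
    print(f"{days:>4} days: {retention(days):.1%}")
# prints 99.8%, 94.2%, 48.2%
```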

Discussion

Anchors plus flux deliver controllable yet expressive memory that users perceive as coherent and continuous. Key limitations remain: character validation is currently qualitative, and security is still at a foundational stage. Our upcoming 12-week research sprint targets multi-agent sharing, personality-aware flux, audit trails, and large-scale benchmarks. Future directions include emotional anchors, cross-lingual retelling, and memory reconsolidation effects.

Conclusion

The Anchor-and-Flux model offers a psychologically grounded path to believable virtual humans. By separating factual anchors from adaptive flux, we model how people remember, forget, and retell experiences. The proof-of-concept validates the approach technically and experimentally. Next steps focus on production integration within the MindLabs virtual human platform, scaling to multi-agent scenarios, and extending the research for publication.

BibTeX

If you reference this work, please cite the research proposal below.

@misc{vanbokhorst2025vhm,
  author       = {van Bokhorst, Leon and Crombach, Coen},
  title        = {A Multi-Agent Architecture for Virtual Human Memory},
  howpublished = {Fontys University of Applied Sciences, MindLabs},
  year         = {2025},
  url          = {https://research-group-ixd.github.io/virtual-human-memory/}
}