Current virtual human memory systems often feel static and brittle. Our research introduces a dynamic, psychologically grounded alternative in which a virtual human's identity emerges from the stories it tells over time. We present the Virtual Human Memory (VHM) project, a distributed, multi-agent architecture that models long-term memory as a living, evolving process. The system comprises three core services: an Indexer that ingests new memories, a Resonance Engine that calculates their emotional and contextual significance, and a Reteller that weaves these memories into coherent, dynamic narratives.
The architecture is composed of loosely coupled microservices communicating over a message bus (e.g., Kafka). The Indexer receives new events and stores them as memory 'anchors' in a vector database, enriched with metadata such as salience and timestamps. The Resonance Engine continuously evaluates incoming stimuli against the memory store, calculating an activation score from similarity, salience, and temporal decay modeled on the Ebbinghaus forgetting curve. Finally, the Reteller service uses the most highly activated memories to generate adaptive, context-aware narratives, or 'flux,' via a Large Language Model (LLM). This separation of concerns yields a scalable, robust, and psychologically plausible memory system that lets virtual humans develop a unique and evolving identity.
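The sketch below illustrates one way such an activation score could be computed, assuming cosine similarity over stored embeddings, a per-anchor salience weight in [0, 1], and exponential decay following the Ebbinghaus forgetting curve R = e^(-t/S). The class, field, and parameter names (MemoryAnchor, salience, stability_hours) are illustrative and not taken from the VHM services.

```python
# Illustrative sketch of a Resonance Engine activation score.
# Assumptions (not from the VHM codebase): each anchor carries an embedding,
# a salience weight in [0, 1], and a creation timestamp; retention decays
# exponentially following the Ebbinghaus curve R = exp(-t / S).

from dataclasses import dataclass
from datetime import datetime, timezone
import math


@dataclass
class MemoryAnchor:
    text: str
    embedding: list[float]   # vector as stored in the vector database
    salience: float          # 0.0 (trivial) .. 1.0 (highly significant)
    created_at: datetime     # timezone-aware timestamp


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retention(age_hours: float, stability_hours: float = 72.0) -> float:
    """Ebbinghaus-style forgetting curve R = e^(-t/S)."""
    return math.exp(-age_hours / stability_hours)


def activation(anchor: MemoryAnchor, stimulus_embedding: list[float]) -> float:
    """Combine similarity, salience, and temporal decay into one score."""
    now = datetime.now(timezone.utc)
    age_hours = (now - anchor.created_at).total_seconds() / 3600.0
    return (cosine_similarity(anchor.embedding, stimulus_embedding)
            * anchor.salience
            * retention(age_hours))


# Example: score one anchor against an incoming stimulus embedding.
anchor = MemoryAnchor(
    text="First conversation about the user's dog",
    embedding=[0.1, 0.3, 0.9],
    salience=0.8,
    created_at=datetime(2025, 1, 10, tzinfo=timezone.utc),
)
score = activation(anchor, stimulus_embedding=[0.1, 0.2, 0.8])
```

In this sketch, the Reteller would take the top-k anchors ranked by activation and hand them to the LLM as raw material for the retold narrative; the actual services may weight or combine these factors differently.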
Anchors plus flux deliver controllable yet expressive memory that users perceive as coherent and continuous. Key limitations are that character validation has so far been qualitative and that security remains at a foundational stage. Our upcoming 12-week research sprint targets multi-agent sharing, personality-aware flux, audit trails, and large-scale benchmarks. Future directions include emotional anchors, cross-lingual retelling, and memory reconsolidation effects.
The Anchor-and-Flux model offers a psychologically grounded path to believable virtual humans. By separating factual anchors from adaptive flux, we model how people remember, forget, and retell experiences. The proof-of-concept validates the approach technically and experimentally. Next steps focus on production integration within the MindLabs virtual human platform, scaling to multi-agent scenarios, and extending the research for publication.
If you reference this work, please cite the research proposal below.
@misc{vanbokhorst2025vhm,
  author       = {van Bokhorst, Leon and Crombach, Coen},
  title        = {A Multi-Agent Architecture for Virtual Human Memory},
  howpublished = {Fontys University of Applied Sciences, MindLabs},
  year         = {2025},
  url          = {https://research-group-ixd.github.io/virtual-human-memory/}
}