What Lorevox is
- A memoir and life-story platform centered on guided conversation.
- A local-first and privacy-conscious system.
- A way to turn spoken memory into structured history.
- A writing environment for human-reviewed memoir drafts.
Lorevox helps older adults preserve their life stories through guided conversation, strengthening memory, identity, and family connection. A conversational guide named Lori conducts structured life-story interviews, extracting biographical facts from natural speech and organizing them into a living archive that grows into a publishable memoir.
Lorevox is currently in development and is not a public online app. It is being built for local hosting, with a tightly scoped private narrator universe.
Lorevox preserves original source material, builds a structured historical layer from what is said, and assembles a memoir draft — never collapsing those layers, never crossing them without a human review.
The archive is the preserved source record: transcripts, audio, photos, scans, and session material. Nothing is deleted. Everything is timestamped and source-tagged.
The structured layer holds facts, claims, contradictions, relationships, and timeline events, extracted from the archive and verified by a human reviewer.
The narrative draft is assembled with AI assistance, but always treated as editable writing rather than a published claim.
Most personal AI tools are built for productivity or conversation, not for preserving a life with dignity. Lorevox starts from the opposite premise: the person is the author, the archive is sacred, and the AI is a careful guide.
When a narrator speaks, Lorevox processes their response through a multi-layer extraction pipeline that produces structured biographical claims with confidence scores, schema-bound field paths, and explicit contradictions — never silently overwritten, never quietly inferred.
A local language model parses conversational responses into structured field-value pairs with confidence scores. No external API calls, ever.
Extracted claims pass through field-path validation, relation allowlists, confidence floors, and negation guards before reaching the proposal layer.
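The guard chain described above can be sketched in a few lines. Everything here is an illustrative assumption, not Lorevox's actual schema: the claim shape, field paths, and the 0.6 floor are invented for the example, and a relation allowlist would be an analogous set check on relation types.

```python
from dataclasses import dataclass

# Hypothetical claim shape; field names are illustrative, not Lorevox's real schema.
@dataclass
class Claim:
    field_path: str      # e.g. "person.birth.place"
    value: str
    confidence: float
    negated: bool        # True when the narrator denied the fact ("I never lived there")

# Illustrative allowlist and floor.
ALLOWED_FIELD_PATHS = {"person.birth.place", "person.birth.year", "person.spouse.name"}
CONFIDENCE_FLOOR = 0.6

def passes_guards(claim: Claim) -> bool:
    """Run the guard chain: field-path validation, confidence floor, negation guard."""
    if claim.field_path not in ALLOWED_FIELD_PATHS:   # field-path validation
        return False
    if claim.confidence < CONFIDENCE_FLOOR:           # confidence floor
        return False
    if claim.negated:                                 # negation guard: denials never become facts
        return False
    return True
```

Only a claim that clears every guard goes on to the proposal layer; everything else is held back rather than silently written.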
Narrators naturally mention multiple people, places, and events in a single response. The pipeline disambiguates and routes each claim to the right entity.
A 104-case evaluation suite benchmarks extraction accuracy across three real older-adult narrators, covering single, compound, and narrative response shapes.
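The shape of such an evaluation harness can be sketched as follows. The cases and the deliberately naive extractor are invented for illustration; they show the harness structure, not the real 104-case suite.

```python
# Hypothetical eval-case shape for an extraction benchmark.
CASES = [
    {"shape": "single",
     "utterance": "I was born in 1939.",
     "expected": {"person.birth.year": "1939"}},
    {"shape": "compound",
     "utterance": "I was born in 1939 in Dayton.",
     "expected": {"person.birth.year": "1939", "person.birth.place": "Dayton"}},
]

def accuracy(extract, cases) -> float:
    """Fraction of cases where the extracted field map exactly matches the expectation."""
    hits = sum(1 for case in cases if extract(case["utterance"]) == case["expected"])
    return hits / len(cases)

# A toy extractor, only to exercise the harness.
def toy_extract(utterance: str) -> dict:
    out = {}
    if "1939" in utterance:
        out["person.birth.year"] = "1939"
    if "Dayton" in utterance:
        out["person.birth.place"] = "Dayton"
    return out
```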
Every session preserves more than words. Voice, video, and synchronized timestamps stay aligned so a moment in a life story isn't just text — it's something you can return to. A grandchild who never met the narrator can click an extracted fact and hear her say it, see her face when she said it, feel the pause before the word.
Per-turn audio is captured locally and kept beside the transcript. Two-sided text + audio for every conversation, in the narrator's own voice. The operator can export the full archive as a single zip at any time.
Audio is the default. Video is opt-in per session — for moments the family wants to preserve visually. Like audio, every frame stays on the narrator's machine. Family decides what gets kept.
Audio, video, transcript, and facial-expression signal all share the same capture clock. A weight in the voice, a pause before a memory, a softening of expression — anchored to the exact words that produced them. Memoir becomes multimodal.
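One way to picture the shared capture clock: every modality stores offsets on the same session timeline, so a transcript span resolves directly to the matching audio or video moment. The record shape below is an illustrative assumption, not the actual storage format.

```python
from dataclasses import dataclass

@dataclass
class Span:
    start_ms: int
    end_ms: int

@dataclass
class TurnRecord:
    text: str
    span: Span           # position on the shared capture clock
    audio_path: str
    affect_label: str    # derived label only, e.g. "reflective"

def replay_bounds(turn: TurnRecord, lead_in_ms: int = 500) -> Span:
    """Clip bounds for returning to the exact moment a fact was spoken,
    with a short lead-in so the pause before the word is preserved."""
    return Span(max(0, turn.span.start_ms - lead_in_ms), turn.span.end_ms)
```

Because audio, video, and affect all share the clock, the same bounds cut the same moment out of every modality.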
FastAPI backend with local LLM inference (GPU-accelerated), SQLite storage, browser-based WebRTC audio capture, and on-device facial signal processing. Nothing leaves the device. No external API for any modality, including the facial signal. Audio is processed locally and never transmitted.
Whisper variants run locally for transcription. Web Speech API for browser-side capture. The narrator's voice never reaches a hosted service.
Llama 3.1 8B (4-bit quantized) on a local GPU. Hermes 3 / Qwen swappable as hardware advances. The fusion contract stays stable; only the upstream extractor changes.
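A stable fusion contract over swappable extractors can be sketched like this. The class and method names are illustrative, not the actual Lorevox interfaces, and the stubbed return values stand in for real model calls.

```python
from typing import Protocol

class Extractor(Protocol):
    """The contract downstream fusion depends on, regardless of model."""
    def extract(self, utterance: str) -> list[dict]: ...

class LlamaExtractor:
    def extract(self, utterance: str) -> list[dict]:
        # The real implementation would call the local quantized model.
        return [{"field_path": "person.birth.year", "value": "1939", "confidence": 0.88}]

class QwenExtractor:
    def extract(self, utterance: str) -> list[dict]:
        return [{"field_path": "person.birth.year", "value": "1939", "confidence": 0.95}]

def fuse(extractor: Extractor, utterance: str) -> list[dict]:
    """Fusion sees only the contract, never which model produced the claims."""
    return [c for c in extractor.extract(utterance) if c["confidence"] >= 0.6]
```

Swapping the upstream model then means adding one class; nothing downstream of `fuse` changes.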
MediaPipe FaceMesh in the browser; only derived affect labels (steady / engaged / reflective / moved) leave the camera-preview boundary. No video, no landmarks, no raw vectors.
SQLite + filesystem on the narrator's machine. Per-session zip export, two-sided text transcripts, per-turn audio archive. Family controls the data.
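A per-session zip export can be done entirely with the standard library. This is a minimal sketch; the file names inside the archive are illustrative.

```python
import io
import zipfile

def export_session(files: dict[str, bytes]) -> bytes:
    """Bundle a session's transcript and per-turn audio into a single zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()

archive = export_session({
    "transcript.txt": b"LORI: Where were you born?\nNARRATOR: Dayton, 1939.",
    "audio/turn_001.wav": b"\x00",  # placeholder bytes for the example
})
```

Because everything already lives in SQLite and the filesystem on the narrator's machine, export is a local file operation, not a download from a service.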
Every feature is exercised against three real older-adult narrators in the family-locked Hornelore R&D fork before being considered for promotion to the public Lorevox product. The relationship is one-way and deliberate: features move only by promotion, after they prove themselves with real narrators — never by file-parity backport.
Hornelore
The family-locked private build. Three real older-adult narrators (Chris, Kent, Janice). Heavy heritage language: coin, metal, stone, runic border. Where every feature meets actual aging-parent use before anything moves forward.
Lorevox
The public-facing memory archive and memoir platform. Local-first, private, careful, human-authored. Inherits only what earns the move — the surviving capability gets generalized for arbitrary narrators and the family-specific scaffolding stays behind.
Lab → Gold · By deliberate decision · Never by file parity
Lorevox is being developed by Christopher Horne, OTR/L. After 40 years in pediatrics, school-based occupational therapy, and family collaboration, I'm retiring January 1, 2026 — and starting Lorevox full-time. The values behind it are closely aligned with OT: listening carefully, meeting the person where they are, and treating partial progress as real.
I'm also testing Lorevox directly with my own parents — both 86, with memories starting to slip in different ways. My dad tells long, meandering stories with sudden cynical humor. My mom rarely initiates but with the right cue opens into vivid, passionate stories. They are, in a real sense, the people Lorevox is being built for.
Lorevox is in active development. If you work in memoir design, privacy-first AI, family history, occupational therapy, life review with older adults, or open-source local AI — I'd be glad to talk.
Lorevox is not being designed as a general chatbot. It is being shaped as a human legacy studio for capturing, reviewing, and assembling a life story across years.
The operating doctrine is simple: human memory stays primary, AI assistance stays visible but subordinate, and human edits are protected from silent AI replacement.
Lori is the conversational guide inside Lorevox. She helps prompt, organize, draft, and gently reframe — without ever silently outranking the person's own voice.
Lori is designed to support memoir interviews and life review in a way that feels calm, collaborative, and steady. She knows when to wait, when to reflect a single concrete detail back, and when to ask one focused question.
Lori is the persistent center of the experience. The person talks to Lori, and the rest of the interface organizes itself around what Lori and the narrator have built together.
I think about my own grandparents the same way — the stories I never asked for in time, the voices I can no longer hear. By the time I had the questions, the people who could answer them were gone. What remains are fragments, and each year they get thinner.
Now I'm watching it happen again, more slowly, with my parents. They're both 86, and their memories are beginning to slip in different ways. My dad — who has a PhD and multiple master's degrees — recently scored lower than expected on a cognitive assessment. He still tells long, meandering stories with flashes of sharp humor, but there are gaps now, and it frustrates him. My mom, who has always been the more precise and intellectually structured of the two, is facing her own changes. She often needs prompting for basic daily routines, but with the right cue she can still open into vivid, passionate stories.
What's striking is not just the memory loss, but the awareness of it — and the frustration that comes with it. They are, in a very real sense, the people I'm building this for.
I've been testing Lorevox directly with them, and it's been humbling. My dad has already managed to crash it a few times just through long, winding responses and conversational edge cases. It's made it very clear how challenging real-world interaction is compared to anything you see in clean datasets.
That's why Lori is designed the way she is. Not as an interviewer pushing through questions, but as a listener who knows when to wait, when to reflect a single detail back, when to ask one warm question, and when to simply be present. Older adults — especially with cognitive variability — don't need more questions. They need the right cue at the right moment.
Lorevox is the public-facing product line, and Hornelore is the family-locked R&D fork where I test and prove ideas. Everything runs locally on the narrator's machine — speech, model, and signal processing — which is a deliberate design choice around privacy and trust.
In terms of direction, I see Lorevox as both a product and a research path. The product side matters because this needs to exist for real families, but many of the hardest problems — memory, narrative truth, multimodal alignment, human-in-the-loop correction — are fundamentally research problems as well.
A moment in a life story isn't just text — it becomes something you can return to. A future reader could move from a fact to the actual moment it was spoken: hear the voice, see the expression, and feel the pause before the memory.