Personalised AI: a proposal for proactive, interview-based context generation
I've been using AI assistants daily for well over a year now, and despite extensive conversation histories — thousands of messages across Claude, ChatGPT, Gemini, and others — the models still regularly miss context that would be obvious to anyone who's spent five minutes talking to me. They don't know that I prefer direct communication over diplomatic hedging. They don't know that when I ask about a technology, I'm almost always evaluating it for a specific use case rather than idly curious. They don't know that I'm based in Israel and that timezone, cultural context, and regional considerations matter for nearly everything I do. I end up re-explaining these things in every new conversation, which is both tedious and a little absurd given how much data these systems ostensibly have about me. The full architecture proposal is in my Personalised AI Idea repository, but let me walk through the thinking here.
The poverty of passive context
The standard approach to personalising AI right now relies on what I'd call passive methods. RAG pipelines ingest your existing documents. Memory features distil context from conversation history. These work, sort of, but they're glacially slow and fundamentally limited by a problem that nobody seems to talk about: most people don't have comprehensive documentation of their preferences, work history, decision-making frameworks, or communication styles sitting in a folder somewhere. The richest context about who you are and how you think lives in your head, not in your Google Drive or your Notion workspace. RAG can only work with artefacts that already exist, which means the most valuable personalisation data — the stuff that would transform an AI assistant from generically helpful to genuinely useful — never makes it into the pipeline in the first place.
There's something almost comically backwards about the current state of affairs. We have these extraordinarily powerful language models that can reason about complex problems, write code, analyse data, and engage with nuanced arguments — but they know essentially nothing about the person they're talking to. It's as if you hired a brilliant consultant and then refused to brief them on your company, your industry, or what you're actually trying to accomplish. Every conversation starts from scratch, or from whatever scraps the system managed to hoover up from your previous interactions.
What if we just... told them?
My proposal flips the passive model on its head. Instead of waiting for AI to gradually figure you out from breadcrumbs, you deliberately invest time in generating context data through structured interviews. Think of it like onboarding a new colleague — except the colleague has perfect memory, never forgets what you told them, and can be cloned across every AI tool you use. The architecture has four components: an Interviewer AI agent that asks targeted questions designed to extract specific types of context, a Vector Store Pipeline that processes and structures the interview data for retrieval, a Personal Agent that uses the accumulated context for genuinely personalised assistance, and an optional Data Classification Agent that segments sensitive from non-sensitive context so you can control what lives where.
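To make the four components concrete, here is a minimal sketch of how they might wire together. Everything in it is hypothetical: the class names, the keyword-match retrieval standing in for vector search, and the `sensitive_topics` classification rule are all illustrative assumptions, not a real implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ContextEntry:
    topic: str        # e.g. "communication_preferences" (hypothetical schema)
    text: str         # what the user said, or a distilled summary of it
    sensitive: bool = False

@dataclass
class ContextStore:
    """Stand-in for the Vector Store Pipeline's backing store."""
    entries: list[ContextEntry] = field(default_factory=list)

    def add(self, entry: ContextEntry) -> None:
        self.entries.append(entry)

    def retrieve(self, query: str) -> list[ContextEntry]:
        # Naive keyword match standing in for vector similarity search.
        q = query.lower()
        return [e for e in self.entries
                if q in e.text.lower() or q in e.topic.lower()]

def classify(entry: ContextEntry, sensitive_topics: set[str]) -> ContextEntry:
    """Optional Data Classification Agent: flag entries that should stay local."""
    entry.sensitive = entry.topic in sensitive_topics
    return entry

# Wiring: interviewer output -> classification -> store -> personal agent.
store = ContextStore()
for topic, text in [
    ("communication_preferences", "Prefers direct answers over diplomatic hedging"),
    ("health", "Details shared in confidence"),
]:
    store.add(classify(ContextEntry(topic, text), sensitive_topics={"health"}))

# The Personal Agent would only ever see the non-sensitive slice.
public = [e for e in store.entries if not e.sensitive]
```

The point of the sketch is the separation of concerns: the interviewer produces entries, classification decides where each one may live, and the personal agent consumes only what retrieval hands it.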
The Interviewer agent is the key innovation, and it's not a chatbot making small talk. It's a purposeful agent that follows structured question trees — adapted dynamically based on your responses — designed to extract the kind of context that makes AI actually useful. What are your communication preferences? What does your typical workday look like? What are your current projects and priorities? What tools do you use, and why? When you ask for a recommendation, what factors matter most to you? These aren't questions that passive observation can reliably answer, but they're exactly the context that transforms a generic assistant into one that feels like it actually knows you.
The beautiful thing about speech-to-text is that this doesn't even require typing. You just talk, the way you'd brief a new team member, and the pipeline handles the rest. I've found that I can generate more useful context in a twenty-minute voice session than in weeks of regular AI usage. The interview captures tacit knowledge — the kind of reasoning you'd never write down unprompted but that you articulate naturally when someone asks the right questions. It's the difference between an AI knowing that you use VS Code and knowing that you use VS Code because you need extensive extension support for polyglot development and you've tried JetBrains but found the context-switching between different IDEs more disruptive than a single editor with plugins.
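The pipeline step after the voice session is conceptually simple: chunk the transcript and index the chunks for retrieval. Here is a toy sketch where bag-of-words cosine similarity stands in for a real embedding model; the chunking rule and retrieval function are illustrative assumptions only.

```python
import math
import re
from collections import Counter

def chunk_transcript(transcript: str) -> list[str]:
    """Split on sentence boundaries; real pipelines chunk more carefully."""
    return [s.strip() for s in re.split(r"(?<=[.?!])\s+", transcript) if s.strip()]

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real pipeline would call an embedding model.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

transcript = (
    "I use VS Code because I need extension support for polyglot work. "
    "I tried JetBrains but the context-switching between IDEs was disruptive. "
    "I prefer direct answers without hedging."
)
chunks = chunk_transcript(transcript)
index = [(c, embed(c)) for c in chunks]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [c for c, _ in ranked[:top_k]]

top = retrieve("why do you use VS Code")
```

The design point is that the "why" behind a preference survives as retrievable text: a later query about editor choice pulls back the reasoning, not just the fact.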
Portability, sovereignty, and not being a digital serf
Here's where this gets philosophically interesting. If you've spent months building up context with Claude's memory feature, that context is locked inside Anthropic's platform. Switch to ChatGPT and you start from zero. Switch back six months later and who knows what's been retained. You're essentially a digital serf — your personalisation investment is trapped in the landlord's estate, and if you want to move to a different estate, you leave it all behind. The interview-based model produces a portable context store: a structured dataset that you own, that you can review and edit, and that can be plugged into any AI system that supports RAG. Your personalisation investment travels with you.
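A portable context store needs nothing exotic; plain JSON would do. The field names and `redact` helper below are hypothetical, but they show the ownership properties being argued for: the file is readable, editable, and provider-agnostic.

```python
import json

# Hypothetical schema for a user-owned context store.
store = {
    "version": 1,
    "owner": "user",
    "entries": [
        {"topic": "location", "text": "Based in Israel; Asia/Jerusalem timezone"},
        {"topic": "communication", "text": "Prefers direct answers over hedging"},
        {"topic": "private_note", "text": "Something the AI need not know"},
    ],
}

def redact(store: dict, topic: str) -> dict:
    """Review-and-redact: return a copy without entries on the given topic."""
    kept = [e for e in store["entries"] if e["topic"] != topic]
    return {**store, "entries": kept}

# The user inspects the store, strips what they'd rather withhold, and
# exports the result for any RAG-capable system to ingest.
portable = redact(store, "private_note")
exported = json.dumps(portable, indent=2)
```

Because the export is just structured text, switching providers means re-ingesting one file rather than rebuilding months of accumulated memory.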
There's a transparency benefit too that I think matters more than it first appears. When your context is generated through explicit interviews, you know exactly what's in the store. You can read the transcripts, correct misunderstandings, redact things you'd rather the AI didn't know. Passive systems accumulate context in opaque ways — you might not even know what the model has inferred about you, or whether those inferences are accurate. I've seen Claude's memory feature confidently store things about me that are wrong, and the only reason I caught them is that I happened to check. With an interview transcript, the context is as transparent as a conversation.
Beyond the personal: why this matters for everyone
While I'm approaching this from the perspective of someone who lives and breathes AI tools, the same architecture maps directly onto enterprise onboarding. Imagine a new employee's first week: instead of drowning in wiki pages and Slack channel backscrolls, an AI interviewer walks them through structured conversations about their role, their team, their projects, and how they prefer to work. The resulting context store powers their AI assistant from day one. Scale that to a team and you get cross-pollinated context stores where AI assistants understand not just individual preferences but team dynamics, shared conventions, and the institutional knowledge that currently lives only in the heads of people who've been around longest.
I think proactive context generation deserves to be a first-class feature in AI products, not an afterthought bolted onto passive collection. The technology to build this exists today — it's just plumbing, really, connecting interview agents to vector stores to retrieval pipelines. The missing ingredient is the recognition that the most valuable data about a person isn't sitting in their files. It's sitting in their head, waiting to be asked for.
Daniel Rosehill
AI developer and technologist specialising in AI systems, workflow orchestration, and automation. Specific interests include agentic AI, workflows, MCP, STT and ASR, and multimodal AI.