I’ve been thinking about how to be more intentional in the way I work with AI.
Not just how to get better results with my prompts, but how to organize the relationship over time. If this is a tool I’m going to use day after day, across projects, across years, it seems reasonable to assume that it should evolve with me and that, dare I say, we should make lasting memories together and live happily ever after. That the system should gradually learn how I think, how I write, what I care about, and what I’m trying to do with my life.
Edit: I don’t know what I’m doing with my life, so maybe that’s an unfair assumption.
One of the most compelling things about AI, at least to me, is the idea that it can track threads you explore over time, make connections you forget, and carry context forward more reliably than human memory ever could. I don’t think I need to put too fine a point on the fact that human memory is fragile, inconsistent, and highly context dependent. The idea of a system that could act as a kind of externalized memory, something that remembers with you, is incredibly powerful. Having witnessed loved ones’ memory fail them is something you never forget, so any antidote to this profoundly unfair but perfectly human experience feels like a promising possibility.
So I was surprised, and honestly disappointed, to learn that this isn’t really how things work today.
Despite the way we talk about “memory,” there isn’t true persistence in the way I had imagined. The system doesn’t reliably maintain an evolving understanding of how you prefer to work or what matters to you most. There’s no real way to say, “This is important. This should stick. This should outweigh everything else. Drill it into your mainframe or whatever. I don’t care, just don’t ever, and I mean ever, use em dashes when we’re writing stuff.”
I wasn’t surprised, though, to learn that this isn’t a limitation of the technology itself. I’ll let AI itself explain how memory works today, just to cut out the middleman:
“My responses are generated based on the context available at inference time. I do not maintain an authoritative, continuously updated representation of individual users across interactions. Long-term personalization is not guaranteed, and preference retention is not durable unless context is explicitly supplied. Apparent memory is the result of contextual reuse, not persistent state.”
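To make that a little more concrete, here is roughly what “contextual reuse” looks like from the application side. This is just a minimal sketch using the OpenAI Python SDK; the model name is a placeholder, and no particular product necessarily implements its memory feature this way:

```python
# Minimal sketch of "contextual reuse": each request is stateless, and the
# model only "remembers" what the application chooses to send again.
from openai import OpenAI

client = OpenAI()

history = []  # the application, not the model, holds the conversation


def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=history,      # "memory" = re-sending the prior turns
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# If history were not re-sent, a follow-up like "what did I just ask you?"
# would have nothing to draw on. The persistence lives entirely client-side.
```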
The models are capable of this kind of continuity. They can summarize, adapt, reconcile, and update. They can carry forward patterns and preferences. Users can clearly see glimpses of all this in action, and we can also tell when it has lost the plot. At least I hope we can.
The reason these systems don’t do this by default has much more to do with everything else swirling around the technology. Privacy concerns, governance, safety, liability, and cautious product design choices all play a role. From an institutional standpoint, partial memory is safer than durable memory. From a human standpoint, it feels like a missed opportunity.
What I was really looking for was finer control over what the system “knows” about me. How I write. How I like to learn. The kinds of problems I’m interested in. The goals I’m working toward. The constraints I’m operating under. I want to preserve context, especially in the places where my own memory is unreliable or where I can offload the cognitive load of things my squirrel-chasing brain finds trivial.
The most useful suggestion I got wasn’t about waiting for better product features or for all the stuff swirling around AI to finally settle. It was about making memory explicit and clearly defined instead of assuming it would emerge on its own from our now constant interactions.
The idea was to maintain a personal charter document. A living description of how I work, how I think, what I value, what I’m focused on right now, and what should be treated as stable unless I say otherwise, like my preference for clarity over ambiguity. Basically, if I want it to stick to the script, I need to provide my canon and backstory. That way I can update it when my priorities or constraints change, and then reintroduce this living document as critical context when I’m doing real work.
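For the curious, the mechanics of “reintroduce this living document as critical context” aren’t exotic. Here’s a minimal sketch, again assuming an OpenAI-style chat API; the file name, model, and helper function are hypothetical, just to show the shape of the idea:

```python
# Sketch of the charter approach: the charter lives in a plain file I edit by
# hand, and every "real work" request gets it prepended as a system message.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
CHARTER_PATH = Path("personal_charter.md")  # hypothetical file I maintain


def work_with_charter(task: str) -> str:
    charter = CHARTER_PATH.read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            # The charter is explicit, durable context: goals, voice,
            # constraints, and the things that should stick.
            {"role": "system", "content": charter},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

# When priorities change, I edit personal_charter.md; the next request
# automatically carries the updated version. Nothing is inferred or assumed.
```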
When I put mine together, I included things like my goals, my preferences for feedback, and how I like information presented. I also included examples that reflect how I approach problems and how I write, including a book about beekeeping I wrote years ago. Not because the subject matter is relevant, but because it captures my voice and my approach to systems thinking better than any bullet list ever could. It hasn’t fixed the em dash thing, but I’m getting the sense that’s incurable.
Instead of hoping the system would slowly infer who I am, I told it. I gave it artifacts I’ve created or curated over the years as context to help it understand me. If I’m trusting it to act as my proxy in meaningful ways, I want to be confident it has the information it needs to guide its output.
Maintaining that kind of charter still feels a bit like a workaround, but it’s also very familiar. It’s the same thing we do any time we want continuity across time. We write things down. We make our assumptions explicit. We update them when they’ve jumped the shark.
As AI becomes more customized and more embedded in day-to-day workflows, organizations will face the same issue at scale. Things like brand voice, writing style, visual identity, mission, strategy, and campaign objectives already exist, but they’re often scattered, implicit, or inconsistently applied. If AI systems are going to represent an organization coherently, they’ll need the same kind of explicit memory scaffolding we regular folks do.
I’ll stay in my lane and stick to marketing, but it’s hard not to see this as an organization-wide concern. I use AI tools all day, from the LLM usual suspects to the outstanding Adobe products, and it would be game-changing to have all of these tools linked by a common document that informs every pixel we push and every em dash we angrily delete. Charter documents are how you ensure the system understands not just what to produce, but how and why. If I’m going to trust these tools to act as my proxy and on behalf of the company, I want to darn near guarantee they don’t botch the execution. For marketing folks especially, the difference between an AI that feels aligned and one that feels generic will come down to how well this context is defined and maintained. Our audiences can sniff this out easily, and when it’s off, it reeks of slop.
I started this thinking I just needed to organize my chats and projects better. What I ran into instead was a much bigger question about memory, continuity, and trust. It was a fun rabbit hole, and I thought I would share what I found.
The technology to support shared memory is already here. What’s lagging are the structures that let us use it safely and deliberately. I’m good with that. We shouldn’t be barreling our way through this without brakes. Until those details catch up to the technology, and the technology catches up to the hype machine, the best we can do is be explicit about what we want remembered, and thoughtful about the memories we choose to make.

Looking to bring AI into your organization?
We help teams design, build, and integrate AI solutions that fit their systems, workflows, and goals.
