10 March 2026
The Six Things AI Can Keep Running Without You
THE WORKING JOINTLY NEWSLETTER · ISSUE FIVE

This is the third post in a series on AI Immersion, a framework for helping organisations move from curiosity to capability with AI, without everyone involved feeling like an imposter from the IT department. The first post sets up why most AI adoption stalls and what we're trying to do differently. The second unpacks the AI Tasks Framework: six types of work you can do with AI right now. Start there if you want the full picture.
Last time we ended with a category called "Do Stuff" and promised we'd get properly into agentic AI. This is that post.
The shift nobody's talking about clearly enough
Most of what people call "using AI" still follows the same pattern. You open a tool, you type a prompt, you get a response, you check it, you close the tool.
That's generative AI. It's powerful but it's also fundamentally limited by the fact that nothing happens unless you're sitting there making it happen.
Agentic AI is very different. Instead of asking AI to produce something, you're asking it to handle something. You brief it, you set boundaries, you walk away, it keeps going whether you're there or not.
For the uninitiated it's quite the thing to get your head around. But try this. Generative AI is like having a brilliant freelancer who only works when you're on a call with them. Agentic AI is like hiring someone full-time and trusting them to get on with it. They have the same intelligence, but working with them is a completely different relationship.
We needed a framework to make this shift tangible for teams who've just got comfortable with prompting and suddenly find themselves hearing about "agents" all the time. So we built one.
Why "Keep"
With the Tasks Framework we used "Stuff" because it was accessible and nobody would mistake it for jargon. We needed the same clarity here.
"Keep" landed for a few reasons. It implies persistence. An agent that "keeps watch" doesn't stop watching when you go to lunch. It implies trust. "Keep Order" and "Keep Talking" are instructions you give to someone you believe will follow through. Plus it pairs quite naturally with "Stuff." We use AI to make our Stuff and to Keep things running: easy to understand and easy to remember.
The six "Keeps" describe the things an agent can do for you, not how it's built. Whether you're using Claude Code, a ChatGPT Agent, Gemini Agent, Make or Zapier or whatever emerges next week, next quarter or next year, these categories hold.
Six types of work AI keeps running
Just like the Tasks Framework, we've landed on six categories. They emerged from the same process: running immersions with real teams and seeing which buckets kept proving useful. We'll do deeper dives on each in future posts but here's an overview.
Keep Watch. Agents that monitor, detect and alert. They watch the things you can't watch because you're busy doing actual work. Competitor pricing changes, brand mentions, anomalies in your data, compliance drift, market signals. The difference between this and a dashboard is significant. A dashboard shows you data and waits for you to notice something. A Keep Watch agent tells you when something matters and explains why. It never goes home, it never gets bored. Most organisations have people spending hours each week manually checking things that an agent could monitor continuously. When we run this sprint with teams, that realisation tends to hit hard.
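For readers who like to see the mechanics, the core loop of a Keep Watch agent is simple: fetch, compare against the last known state, and alert only when something has changed. Here's a minimal Python sketch of that idea; `fetch_page` and `send_alert` are hypothetical stand-ins for whatever fetching and notification your stack actually uses, and a real agent would add judgement about which changes matter.

```python
import hashlib

def keep_watch(urls, fetch_page, last_seen, send_alert):
    """Check each page and alert only when its content has changed
    since the previous check. `last_seen` maps url -> content digest."""
    for url in urls:
        digest = hashlib.sha256(fetch_page(url).encode()).hexdigest()
        if last_seen.get(url) != digest:
            if url in last_seen:  # skip the very first observation
                send_alert(f"Change detected on {url}")
            last_seen[url] = digest
    return last_seen
```

Run it on a schedule and nothing reaches a human until something actually moves, which is the whole point: the agent absorbs the checking, the person only gets the signal.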
Keep Order. Agents that triage, sort, route and prioritise. The inbox is chaos, the ticket queue is chaos, the lead pipeline is chaos. These agents take the incoming mess and make sense of it before it reaches a human in the team. They decide what goes where and what matters most. The question we ask teams is: what needs handling first? Not "what came in first" but "what actually needs attention right now?" That's a judgement call, and it's where traditional rules break down: inputs vary, whereas an agent handles ambiguity, context and edge cases in a way a filter rule never will. Email triage is the entry point most teams start with. Lead qualification is where it gets interesting, because deciding who deserves attention is a genuine judgement call.
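The structure of triage is easy to sketch: score everything that's waiting, then order by the score. What makes it agentic is the scoring, where a model weighs context rather than matching keywords. In this illustrative Python sketch, `simple_judge` is a deliberately crude heuristic standing in for that model judgement; the names and fields are invented for the example.

```python
def triage(items, judge):
    """Order incoming items by the judgement function's urgency score,
    highest first. `judge` stands in for the agent's call on what
    needs attention right now."""
    return sorted(items, key=judge, reverse=True)

def simple_judge(item):
    # Toy heuristic in place of an LLM's contextual judgement:
    # explicit urgency dominates, waiting time breaks ties.
    score = 10 if "urgent" in item["subject"].lower() else 0
    return score + item["days_waiting"]
```

Swap `simple_judge` for a call to a model that reads the whole message and you get the behaviour described above: the queue arrives pre-sorted by what matters, not by what arrived first.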
Keep Moving. Agents that execute multi-step processes from start to finish. Onboarding a new hire. Processing an invoice. Running an approval chain. Managing a content pipeline. These are the workflows that currently live in someone's head or in a spreadsheet that three people understand and nobody updates. The question we ask: what keeps stalling? What process has seven steps but only moves forward when someone remembers to chase it? Traditional workflow tools follow rigid paths. When something unexpected happens, they stop. An agent adapts. If step three fails, it works out what step three-and-a-half should be. Automation follows instructions; an agent handles situations, and that's why the work keeps moving.
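That "step three-and-a-half" behaviour is worth seeing in miniature. A rigid pipeline halts on the first failure; an agentic one asks for a recovery step and carries on. This Python sketch is purely illustrative, with `recover` standing in for the agent's judgement about what to try instead:

```python
def run_workflow(steps, recover):
    """Run steps in order. If a step fails, ask `recover` for a
    substitute step instead of halting, then continue the chain."""
    results = []
    for step in steps:
        try:
            results.append(step())
        except Exception as exc:
            fallback = recover(step, exc)  # agent decides step three-and-a-half
            results.append(fallback())
    return results
```

A rules-based tool is the version of this loop without the `except` branch: one surprise and the whole process waits for a human to notice.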
Keep Talking. Agents that engage with people on your behalf. Customer support at 2am. Following up with a lead who went quiet three weeks ago. Answering the same ten questions your team gets asked every week. Booking appointments. Collecting feedback. The question: what conversations should be happening that aren't because nobody has the bandwidth? This is not the chatbot your bank installed in 2019 that made you shout at your phone. Old chatbots followed scripts. These agents understand context, remember history and escalate with judgement. They carry the conversation much further so the team can focus on the conversations that actually need a human.
Keep Connected. Most organisations don't have a system. They have twelve systems. The CRM doesn't talk to the project management tool. The spreadsheet doesn't update the dashboard. The email doesn't trigger the task. Every organisation has this problem. The question: why don't our tools talk to each other? Simple point-to-point integrations still work for simple jobs. Zapier is fine when the logic is "if this then that". But when the connection requires judgement about what to send, when and to whom, you need something smarter. Keep Connected agents are the connective tissue that makes your stack work as one system instead of twelve separate ones.
Keep Learning. Agents that improve over time. They notice what works, refine their approach and personalise their responses. This is the category that separates AI agents from traditional automation entirely. A rule engine runs the same way on day one and day one thousand. A learning agent gets better with every interaction. The question: can it get smarter? Better response times. More accurate routing. More relevant recommendations. Tighter processes. This is where the compound returns live. It's also the most advanced category and the one most teams reach last. But when they get there, they really understand the potential of what agents can do for them.
How this connects to the Tasks Framework
The Tasks Framework answers "what can we do with AI?" The Keep Framework answers "what can AI keep doing for us?"
They're not competing frameworks. They're layers. Most teams start with Stuff. You learn to prompt. You generate outputs. You build confidence with AI as a tool you direct. Then you start noticing patterns. You're drafting the same type of email every Tuesday. You're checking the same dashboard every morning. You're routing the same kinds of requests to the same people.
That's when something shifts. You stop asking "how can AI help me do this?" and start asking "why am I still doing this at all?" That's where delegation begins. The repetitive tasks you've been doing with generative AI become candidates. "Create Stuff" becomes "Keep Moving" when you turn a manual content process into an automated pipeline. "Find Stuff" becomes "Keep Watch" when you stop manually researching competitors and start monitoring them continuously. "Think Stuff Through" becomes "Keep Learning" when the AI doesn't just help you think once but remembers what worked and applies it next time.
The progression is natural. You don't need to plan it. You just need to recognise it when it starts happening.
What we're still figuring out
The line between a well-configured automation and an actual agent isn't always clear. The tools are evolving fast. The Keep categories are designed to survive those shifts, and so far they have, even as agentic AI tools proliferate.
The bigger question is trust. Specifically: how much do you trust the agent's judgement when something actually matters?
Think of Who Wants to Be a Millionaire. You phone a friend. They give you an answer. Now you have to decide: do I trust this enough to stake £32,000 on it? The friend might be confident and they might even be right but you're the one sitting in the chair. You're the one who has to make the call.
That's exactly the relationship between a human and an AI agent. The agent triages your inbox, qualifies a lead, handles a customer query. It gives you an answer. Sometimes you trust it completely. Sometimes you need to check. And sometimes you override it because your gut says otherwise and that's fine. The skill isn't building agents that are always right. The skill is knowing when to trust the response and when to phone another friend.
Guardrails are how you design for that. Too few and people won't use the agent because they can't verify what it's doing. Too many and you've rebuilt the rigid system you were escaping. Every organisation finds that balance differently. We don't have a formula but we do have a set of questions that help teams find their own answer. It's still the loosest part of immersing people in the world of AI agents though.
Surprising Reactions
We ran this sprint with a European insurer recently. One of the team leads had been spending every Monday morning manually checking six competitor websites for pricing changes, something she'd been doing for two years, forty-five minutes every week, without fail. During the sprint she built a Keep Watch agent that monitors all six sites and sends her a summary only when something actually changes. Interestingly, her first reaction wasn't excitement. It was what can best be described as anger: "Why has nobody told me this was possible?" That's the reaction we see most often. We expected wonder, and we do get plenty of that, but frustration at the cost of lost time is the most common sentiment.
What's next?
The Tasks Framework gives you a map of what AI can do. The Keep Framework gives you a map of what AI can run. Together they cover the full spectrum from single-shot prompts to always-on systems without requiring a six-month strategy project or a Chief AI Officer.
In future posts we'll do deeper dives on each of the six Keeps: what good looks like, which tools work best for each category right now and the mistakes we see teams make most often.
If you want a starting point, ask one question: what should still be happening here when nobody's looking? That's usually where the next agent should be.
If your organisation is exploring agentic AI, we'd be curious to hear what's landing. Which of these categories feels closest to home? Where are you seeing real traction and where does it still feel like science fiction? The framework keeps evolving based on what people are actually experiencing.

