Read all about it folks
Our Blog
This is where we think out loud. About collaboration,
innovation, marketing and the tools and tech that
help teams do their best thinking together.
Our Opinions
10 March 2026
Posted By: Jonny Lang
This is the third post in a series on AI Immersion, a framework for helping organisations move from curiosity to capability with AI, without everyone involved feeling like an imposter from the IT department. The first post sets up why most AI adoption stalls and what we're trying to do differently. The second unpacks the AI Tasks Framework: six types of work you can do with AI right now. Start there if you want the full picture.
Last time we ended with a category called "Do Stuff" and promised we'd get properly into agentic AI. This is that post.
The shift nobody's talking about clearly enough
Most of what people call "using AI" still follows the same pattern. You open a tool, you type a prompt, you get a response, you check it, you close the tool.
That's generative AI. It's powerful but it's also fundamentally limited by the fact that nothing happens unless you're sitting there making it happen.
Agentic AI is very different. Instead of asking AI to produce something, you're asking it to handle something. You brief it, you set boundaries, you walk away, it keeps going whether you're there or not.
For the uninitiated it's quite the thing to get your head around. But try this. Generative AI is like having a brilliant freelancer who only works when you're on a call with them. Agentic AI is like hiring someone full-time and trusting them to get on with it. They both have the same intelligence, but working with them is a completely different relationship.
We needed a framework to make this shift tangible for teams who've just got comfortable with prompting and suddenly find themselves hearing about "agents" all the time. So we built one.
Why "Keep"
With the Tasks Framework we used "Stuff" because it was accessible and nobody would mistake it for jargon. We needed the same clarity here.
"Keep" landed for a few reasons. It implies persistence. An agent that "keeps watch" doesn't stop watching when you go to lunch. It implies trust. "Keep Order" and "Keep Talking" are instructions you give to someone you believe will follow through. Plus it pairs quite naturally with "Stuff." We use AI to make our Stuff and Keep things running, all pretty easy to understand and remember.
The six "Keeps" describe the things an agent can do for you, not how it's built. Whether you're using Claude Code, a ChatGPT Agent, Gemini Agent, Make or Zapier or whatever emerges next week, next quarter or next year, these categories hold.
Six types of work AI keeps running
Just like the Tasks Framework, we've landed on six categories. They emerged from the same process: running immersions with real teams and seeing which buckets kept proving useful. We'll do deeper dives on each in future posts but here's an overview.
Keep Watch. Agents that monitor, detect and alert. They watch the things you can't watch because you're busy doing actual work. Competitor pricing changes, brand mentions, anomalies in your data, compliance drift, market signals. The difference between this and a dashboard is significant. A dashboard shows you data and waits for you to notice something. A Keep Watch agent tells you when something matters and explains why. It never goes home, it never gets bored. Most organisations have people spending hours each week manually checking things that an agent could monitor continuously. When we run this sprint with teams, that realisation tends to hit hard.
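To make the dashboard-versus-agent distinction concrete, here's a minimal, hypothetical sketch in Python of the mechanical core of a Keep Watch agent: it fingerprints a snapshot of whatever it's watching and only speaks up when something changes. The judgement layer a real agent adds (deciding whether a change actually matters) is out of scope here, and all the names and example content are illustrative.

```python
import hashlib
from typing import Optional, Tuple

def fingerprint(content: str) -> str:
    """Hash a snapshot of whatever we're watching (a pricing page, a feed)."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def check_for_change(previous_hash: Optional[str], content: str) -> Tuple[bool, str]:
    """Return (changed, new_hash). A dashboard shows data and waits;
    this only flags when the snapshot differs from last time."""
    new_hash = fingerprint(content)
    changed = previous_hash is not None and new_hash != previous_hash
    return changed, new_hash

# First run establishes a baseline, so no alert.
changed, baseline = check_for_change(None, "Premium plan: £49/month")

# A later run against a changed page is the moment the agent would alert.
changed, _ = check_for_change(baseline, "Premium plan: £39/month")
```

A real agent would wrap this loop around a fetch, a schedule and a summary step, but the "only tell me when it matters" behaviour starts from exactly this kind of comparison.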
Keep Order. Agents that triage, sort, route and prioritise. The inbox is chaos, the ticket queue is chaos, the lead pipeline is chaos. These agents take the incoming mess and make sense of it before it reaches a human in the team. They decide what goes where and what matters most. The question we ask teams is: what needs handling first? Not "what came in first" but "what actually needs attention right now?" That's a judgement call, and it's where traditional rules break down because inputs vary; an agent handles ambiguity, context and edge cases in a way a filter rule never will. Email triage is the entry point most teams start with. Lead qualification is where it gets interesting, because deciding who deserves attention is a judgement call you're now delegating to AI.
Keep Moving. Agents that execute multi-step processes from start to finish. Onboarding a new hire. Processing an invoice. Running an approval chain. Managing a content pipeline. These are the workflows that currently live in someone's head or in a spreadsheet that three people understand and nobody updates. The question we ask: what keeps stalling? What process has seven steps but only moves forward when someone remembers to chase it? Traditional workflow tools follow rigid paths. When something unexpected happens, they stop. An agent adapts. If step three fails, it works out what step three-and-a-half should be. Automation follows instructions; agents handle situations, and that's what keeps the work moving.
Keep Talking. Agents that engage with people on your behalf. Customer support at 2am. Following up with a lead who went quiet three weeks ago. Answering the same ten questions your team gets asked every week. Booking appointments. Collecting feedback. The question: what conversations should be happening that aren't because nobody has the bandwidth? This is not the chatbot your bank installed in 2019 that made you shout at your phone. Old chatbots followed scripts. These agents understand context, remember history and escalate with judgement. They carry the conversation much further so the team can focus on the conversations that actually need a human.
Keep Connected. Most organisations don't have a system. They have twelve systems. The CRM doesn't talk to the project management tool. The spreadsheet doesn't update the dashboard. The email doesn't trigger the task. Every organisation has this problem. The question: why don't our tools talk to each other? Simple point-to-point integrations still work for simple jobs. Zapier is fine when the logic is "if this, then that". But when the connection requires judgement about what to send, when and to whom, you need something smarter. Keep Connected agents are the connective tissue that makes your stack work as one system instead of twelve separate ones.
Keep Learning. Agents that improve over time. They notice what works, refine their approach and personalise their responses. This is the category that separates AI agents from traditional automation entirely. A rule engine runs the same way on day one and day one thousand. A learning agent gets better with every interaction. The question: can it get smarter? Better response times. More accurate routing. More relevant recommendations. Tighter processes. This is where the compound returns live. It's also the most advanced category and the one most teams reach last. But when they get there, they really understand the potential of what agents can do for them.
How this connects to the Tasks Framework
The Tasks Framework answers "what can we do with AI?" The Keep Framework answers "what can AI keep doing for us?"
They're not competing frameworks. They're layers. Most teams start with Stuff. You learn to prompt. You generate outputs. You build confidence with AI as a tool you direct. Then you start noticing patterns. You're drafting the same type of email every Tuesday. You're checking the same dashboard every morning. You're routing the same kinds of requests to the same people.
That's when something shifts. You stop asking "how can AI help me do this?" and start asking "why am I still doing this at all?" That's where delegation begins. The repetitive tasks you've been doing with generative AI become candidates. "Create Stuff" becomes "Keep Moving" when you turn a manual content process into an automated pipeline. "Find Stuff" becomes "Keep Watch" when you stop manually researching competitors and start monitoring them continuously. "Think Stuff Through" becomes "Keep Learning" when the AI doesn't just help you think once but remembers what worked and applies it next time.
The progression is natural. You don't need to plan it. You just need to recognise it when it starts happening.
What we're still figuring out
The line between a well-configured automation and an actual agent isn't always clear. The tools are evolving fast. The Keep categories are designed to survive those shifts, and so far they have, even as agentic AI tools proliferate.
The bigger question is trust. Specifically: how much do you trust the agent's judgement when something actually matters?
Think of Who Wants to Be a Millionaire. You phone a friend. They give you an answer. Now you have to decide: do I trust this enough to stake £32,000 on it? The friend might be confident and they might even be right but you're the one sitting in the chair. You're the one who has to make the call.
That's exactly the relationship between a human and an AI agent. The agent triages your inbox, qualifies a lead, handles a customer query. It gives you an answer. Sometimes you trust it completely. Sometimes you need to check. And sometimes you override it because your gut says otherwise and that's fine. The skill isn't building agents that are always right. The skill is knowing when to trust the response and when to phone another friend.
Guardrails are how you design for that. Too few and people won't use the agent because they can't verify what it's doing. Too many and you've rebuilt the rigid system you were escaping. Every organisation finds that balance differently. We don't have a formula but we do have a set of questions that help teams find their own answer. It's still the loosest part of immersing people in the world of AI agents, though.
Surprising Reactions
We ran this sprint with a European insurer recently. One of the team leads had been spending every Monday morning manually checking six competitor websites for pricing changes, something she'd been doing for two years, forty-five minutes every week, without fail. During the sprint she built a Keep Watch agent that monitors all six sites and sends her a summary only when something actually changes. Interestingly, her first reaction wasn't excitement. She was what can best be described as angry: "Why has nobody told me this was possible?" That's the reaction we see most often. We expected wonder, and we do get plenty of that, but frustration at the cost of lost time is the most common sentiment.
What's next?
The Tasks Framework gives you a map of what AI can do. The Keep Framework gives you a map of what AI can run. Together they cover the full spectrum from single-shot prompts to always-on systems without requiring a six-month strategy project or a Chief AI Officer.
In future posts we'll do deeper dives on each of the six Keeps: what good looks like, which tools work best for each category right now and the mistakes we see teams make most often.
If you want a starting point, ask one question: what should still be happening here when nobody's looking? That's usually where the next agent should be.
If your organisation is exploring agentic AI, we'd be curious to hear what's landing. Which of these categories feels closest to home? Where are you seeing real traction and where does it still feel like science fiction? The framework keeps evolving based on what people are actually experiencing.
This is the second post in a series on AI Immersion, a framework for helping organisations move from curiosity to capability with AI, without feeling like an imposter from the IT department. If you missed the first post, it sets up why most AI adoption stalls and what we're trying to do differently. Start there if you want the full picture.
Last time we introduced two frameworks that sit at the heart of AI Immersion. Today we want to unpack the first one: The AI Tasks Framework.
The idea is simple. Most of us still use AI as a glorified chatbot. We want to reframe the question: what can people actually do with this stuff right now?
Not in theory. Not in a YouTube demo. In your actual working week.
Six types of AI work
We've landed on six categories, inspired by an OpenAI paper called Identifying and Scaling AI Use Cases. After running this with teams (most recently a big European insurer and a global B2B manufacturer) these are the buckets that keep proving useful. They're concrete enough to plan around and broad enough to cover most knowledge work.
The framework unpacks six types of work tasks, or "Stuff." (Yes, that's the deeply technical term we've chosen.) We'll do a deeper dive on each in future posts, but here's the overview.
Find Stuff. Searching across documents, surfacing patterns in data, sourcing inventory, tracking down competitive intelligence. The gap between what search used to mean and what it means now is fundamentally under-appreciated. Tools like ChatGPT, Perplexity, NotebookLM, Microsoft Copilot and Alibaba's Accio have turned "searching" from keyword matching into a conversation with your own data.
Make Sense Of Stuff. Summarising long reports, comparing datasets, spotting anomalies, pulling themes from customer feedback. AI as a very fast, very patient research assistant. Claude, Gemini and ChatGPT all handle this well. Gemini can process video and audio natively, so "making sense" now extends to meeting recordings and customer calls, not just text.
Create Stuff. Drafting, writing, generating. First drafts of emails, reports, presentations, training materials, ad copy. This is the category most people think of first and it is useful, but it's also where quality control matters most. The output is a starting point. You're there to be the editor. The range has expanded fast: ChatGPT and Claude for writing, Midjourney and Adobe Firefly for images, Gamma for decks, Synthesia for video, Claude Cowork for polished documents and spreadsheets, Suno for music.
Build Stuff. This one surprises people. You can use AI to build functional tools: dashboards, simple apps, prototypes. Not production-grade software (usually), but working things that solve real problems. Replit, Lovable and Bolt turn natural language into deployed web apps. Claude Code and Cursor let developers build full applications through conversation. If you've never written a line of code you can build a fully operational app in an afternoon. That wasn't possible 18 months ago.
Think Through Stuff. AI as a thinking partner: pressure-testing assumptions, generating counter-arguments, mapping out scenarios, structuring messy problems. It's not that the AI thinks for you. It's that talking to it forces you to think more clearly. NotebookLM is particularly good here because it grounds the conversation in your source material rather than what the model was trained on. Nobody brags about using AI to think harder. But the people who do it tend to make better decisions.
Do Stuff. This is where the AI Tasks Framework starts pointing towards the next framework. The first five categories all follow the same pattern: you ask, AI delivers, you check. With "Do Stuff," the AI starts to act. Not just generate an output, but take steps, chain tasks together and make decisions within guardrails you set. OpenAI's Operator, Claude Code, Zapier, Microsoft Copilot Studio. The tools are evolving fast and the trajectory is clear. We'll get properly into agentic AI with the AI Actions Framework in the next post. For now, "Do Stuff" is the bridge. It's where tasks stop being one-shot and start becoming workflows.
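As a rough illustration of what "take steps, chain tasks together and make decisions within guardrails" means in practice, here's a hypothetical Python sketch: steps run in sequence, and any output that trips the guardrail hands control back to a human. It's a skeleton under invented names, not how any of the tools mentioned above actually work internally.

```python
def run_chain(steps, guardrail):
    """Execute steps in sequence, passing each output forward.
    If an output fails the guardrail, stop and hand back to a human."""
    output = None
    for step in steps:
        output = step(output)
        if not guardrail(output):
            return "escalated", output
    return "done", output

# Hypothetical three-step content workflow: draft, expand, sign off.
steps = [
    lambda _: "Draft: Q3 pricing update",
    lambda text: text + " (expanded with supporting detail)",
    lambda text: text + " [approved]",
]

# Guardrail: keep outputs inside a length budget; anything bigger escalates.
status, result = run_chain(steps, guardrail=lambda out: len(out) < 200)
```

The interesting design decision isn't the loop, it's the guardrail: you define what "acceptable" means, the chain runs unattended inside that boundary, and anything outside it comes back to you. That's the bridge from one-shot tasks to workflows.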
How it works in practice
When we run an AI Immersion, each of the six task types gets its own mini sprint: a focused session, from 90 minutes to as much as half a day, where a team works through real examples using new tools.
The point isn't speed. It's coverage. By the end, you've got a practical map of where AI adds value in your specific context, built from direct experience rather than someone else's case study.
What we're still figuring out
The boundaries between these categories aren't always clean. "Find Stuff" bleeds into "Make Sense Of Stuff." "Build Stuff" often requires "Create Stuff" first. And "Think Through Stuff" sits underneath everything. We've gone back and forth on whether six is the right number. But every time we try to collapse them, we lose something useful. One team would never have explored "Build Stuff" if it hadn't been a distinct category. They didn't see themselves as builders. Giving it its own sprint gave them permission to try.
So for now, six it is. We're open to being wrong about that.
What's next
The AI Tasks Framework answers "what can we do?" The AI Actions Framework (next post) picks up where "Do Stuff" leaves off and gets properly into agentic AI: systems that don't just complete tasks but take actions, make decisions and run workflows. Together the two frameworks give you a practical starting point that doesn't require a six-month strategy project or a Chief AI Officer.
More on that soon.
If you're working through similar questions in your organisation, what types of AI work are landing well and what's falling flat? We're curious. The framework keeps evolving based on what people are actually experiencing.
A new series about transforming from AI aware to AI ready.
I know. It's the kind of headline that peppers every other LinkedIn post and makes your toes curl. Bear with me though as I can't think of a better way to frame the AI problem most of us are wrestling with. And it's a big problem because it's not hyperbole to say that our jobs depend on it.
Most of us have watched the keynotes, seen the demos, tried the tools. We can probably talk a good AI game. However, if someone asked you tomorrow to commission an AI project, assess a vendor or redesign a workflow around it, you would undoubtedly feel a sense of rising panic.
The issue is that if you haven't built anything with AI, you haven't felt the speed of it or made the mistakes that teach you where the boundaries are. You haven't had the moment where it produces something unexpectedly useful and you think, right, now I get what this is for.
That's not a knowledge gap, it's an experience gap. And at some point you have to stop reading and start using, so that you understand the art of the possible. You need to get in the pool.
AI aware is not the same as AI ready
A good client and friend of ours works for a large player in financial services. Thousands of people, massive profits. At an event in December he pulled us aside and said, quietly, like he was admitting to something, "None of us really get AI, we're just bluffing every day".
He's not alone. We hear some version of this confession every week from all sorts of people in all sorts of industries. Really smart and capable people managing big teams, businesses and budgets. Every one of them is AI aware. Almost none of them feel AI ready.
The reason isn't that they're behind what's being reported. The reason is that most of what you can read about AI starts with the technology. Here's a large language model. Here's how tokens work. Here's a prompt engineering framework with seven steps and an acronym. It's like teaching someone to swim by explaining fluid dynamics.
What you actually need isn't AI expertise. It's AI fluency and the difference really matters. Expertise means you can build the thing. Fluency means you know what the thing can do, when to reach for it and how to talk about it with your colleagues. You don't need to understand how a large language model works any more than you need to understand the chemistry of chlorine to get your laps in.
We've been here before
Just over ten years ago, we found ourselves at Freeformers, a startup revealing what digital skills could do for leadership teams at the likes of News UK, the BBC, Tesco and Barclays. We taught CEOs to code a working app in a day. We were part of the genesis of the Barclays Digital Eagles programme. The whole point was the same: don't explain the technology, let people experience what it can do. The confidence naturally follows.
Today's shift feels similar but more existential. Back then, digital transformation was about getting ahead. AI feels like it's as much about survival. The pace is faster. The capability gap is wider. And the cost of watching from the sidelines is higher than it was a decade ago.
Two frameworks for every AI decision you'll face
How then, can we become fluent and confident in a way that isn't all about the tech?
Well, our approach is expressed in two frameworks that give you a usable and stable mental model for every AI use case you'll encounter. One for generative AI and another for agentic AI. The tools will keep changing, but the categories within them won't.
Generative AI is the bit everyone's tried. You type a prompt, you get a response. You're in the driving seat the whole time. Powerful, but manual. Like having a brilliant assistant who only works when you're standing over them.
Agentic AI is what's coming next. You set the goal, define the boundaries and decide when it should escalate back to a human. Then it runs. Whether you're there or not.
These frameworks aren't designed for the people with technology jobs. They're designed for the people in the business who need to understand AI well enough to commission it, evaluate it or lead it without having to become a technologist.
Over the coming weeks we'll dive into the detail. For now, here's how the two frameworks map the landscape.
The Stuff Framework (Generative AI)
Inspired by this LinkedIn post from last year, which was itself a build on this OpenAI paper, this is about classifying the tools you prompt to produce things. You ask, it creates. You direct, it delivers. We call them "Stuff" because the language should be as accessible as the technology is becoming.
Create Stuff: from blank page to finished output
Find Stuff: surface what matters, fast
Build Stuff: make functional things without a dev team
Make Sense of Stuff: turn data into decisions
Think Stuff Through: a thinking partner on demand
Do Stuff Automatically: set it up once, let it run
The Keep Framework (Agentic AI)
The systems that operate on your behalf, continuously, without you standing over them. You brief an agent the way you'd brief a capable colleague. Then you walk away and it keeps going.
Keep Watch: eyes on everything, all the time
Keep Order: everything in the right pile
Keep Moving: the work flows, even when you stop
Keep Talking: always available, always on-brand
Keep Connected: everything talks to everything
Keep Learning: smarter tomorrow than today
The language is deliberately informal. When frameworks use plain language, people remember them. They use them in meetings. They apply them without needing a reference guide. The moment you have to look something up, the framework has failed.
Most teams will start with Stuff (generating outputs) and graduate to Keep (delegating operations) as their confidence grows. The two frameworks work together. They give any team, in any industry, a shared vocabulary for the whole landscape of what AI can do.
What's coming in this series
Over the coming weeks, we're going to explore both frameworks in detail. Not as theory. As practical guidance you can take into your next team meeting.
Next up: a deep dive into the Stuff Framework. What each of the six categories means in practice, where teams typically start, where the biggest quick wins are and the mistakes that waste the most time.
Then: the Keep Framework. How agentic AI changes what's possible, what it means to delegate to an AI agent and how to know which of the six Keeps your organisation should prioritise first.
After that: twelve posts, one for each of the twelve components. Real examples. Real use cases. The kind of detail that turns understanding into action.
If you've been feeling like AI is moving fast and you're not sure where you fit in, this series is for you. Not because you need to become technical. Because you're already capable of leading with this stuff. You just need the right map.
Subscribe so you don't miss the next one. And if you know someone who's been quietly feeling behind on AI, send this their way. They're not behind. They just haven't found the right starting point yet.
It's time to get in the pool.
A new series about transforming from AI aware to AI ready.
I know. It's the kind of headline that peppers every other LinkedIn post and makes your toes curl. Bear with me though as I can't think of a better way to frame the AI problem most of us are wrestling with. And it's a big problem because it's not hyperbole to say that our jobs depend on it.
Most of us have watched the keynotes, seen the demos, tried the tools. We can probably talk a good AI game. However, if someone asked you tomorrow to commission an AI project, assess a vendor or redesign a workflow around it, you would undoubtedly feel a sense of rising panic.
the issue is that if you haven't built anything with AI, you haven't felt the speed of it or made the mistakes that teach you where the boundaries are. You haven't had the moment where it produces something unexpectedly useful and you think right, now I get what this is for.
That's not a knowledge gap, it's an experience gap. And at some point you have to stop reading and start using, so that you understand the art of the possible. You need to get in the pool.
AI aware is not the same as AI ready
A good client and friend of ours works for a large player in financial services. Thousands of people, massive profits. At an event in December he pulled us aside and said, quietly, like he was admitting to something, "None of us really get AI, we're just bluffing every day".
He's not alone. We hear some version of this confession every week from all sorts of people in all sorts of industries. Really smart and capable people managing big teams, businesses and budgets. Every one of them is AI aware. Almost none of them feel AI ready.
The reason isn't that they're behind what's being reported. The reason is that most of what you can read about AI starts with the technology. Here's a large language model. Here's how tokens work. Here's a prompt engineering framework with seven steps and an acronym. It's like teaching someone to swim by explaining fluid dynamics.
What you actually need isn't AI expertise. It's AI fluency and the difference really matters. Expertise means you can build the thing. Fluency means you know what the thing can do, when to reach for it and how to talk about it with your colleagues. You don't need to understand how a large language model works any more than you need to understand the chemistry of chlorine to get your laps in.
We've been here before
Just over ten years ago, we found ourselves at Freeformers, a startup revealing what digital skills could do for leadership teams at the likes of News UK, the BBC, Tesco and Barclays. We taught CEOs to code a working app in a day. We were part of the genesis of the Barclays Digital Eagles programme. The whole point was the same - don't explain the technology, let people experience what it can do. The confidence naturally follows.
Today's shift feels similar but more existential. Back then, digital transformation was about getting ahead. AI feels like it's as much about survival. The pace is faster. The capability gap is wider. And the cost of watching from the sidelines is higher than it was a decade ago.
Two frameworks for every AI decision you'll face
How then, can we become fluent and confident in a way that isn't all about the tech?
Well, our approach is expressed in two frameworks that give you a usable and stable mental model for every AI use case you'll encounter. One for generative AI and another for agentic AI. The tools will keep changing, but the categories within them won't.
Generative AI is the bit everyone's tried. You type a prompt, you get a response. You're in the driving seat the whole time. Powerful, but manual. Like having a brilliant assistant who only works when you're standing over them.
Agentic AI is what's coming next. You set the goal, define the boundaries and decide when it should escalate back to a human. Then it runs. Whether you're there or not.
These frameworks aren't designed for the people with technology jobs. They're designed for the people in the business who need to understand AI well enough to commission it, evaluate it or lead it without having to become a technologist.
Over the coming weeks we'll dive into the detail. For now, here's how the two frameworks map the landscape.
The Stuff Framework (Generative AI)
Inspired by this LinkedIn post last year, which was itself a build on this OpenAI paper, this is about classifying the tools you prompt to produce things. You ask, it creates. You direct, it delivers. We call them "Stuff" because the language should be as accessible as the technology is becoming.
Create Stuff: from blank page to finished output
Find Stuff: surface what matters, fast
Build Stuff: make functional things without a dev team
Make Sense of Stuff: turn data into decisions
Think Stuff Through: a thinking partner on demand
Do Stuff Automatically: set it up once, let it run
The Keep Framework (Agentic AI)
The systems that operate on your behalf, continuously, without you standing over them. You brief an agent the way you'd brief a capable colleague. Then you walk away and it keeps going.
Keep Watch: eyes on everything, all the time
Keep Order: everything in the right pile
Keep Moving: the work flows, even when you stop
Keep Talking: always available, always on-brand
Keep Connected: everything talks to everything
Keep Learning: smarter tomorrow than today
The language is deliberately informal. When frameworks use plain language, people remember them. They use them in meetings. They apply them without needing a reference guide. The moment you have to look something up, the framework has failed.
Most teams will start with Stuff (generating outputs) and graduate to Keep (delegating operations) as their confidence grows. The two frameworks work together. They give any team, in any industry, a shared vocabulary for the whole landscape of what AI can do.
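The agentic pattern described above (set a goal, define boundaries, decide when it escalates back to a human) is easier to grasp as a loop. Here's a deliberately minimal sketch; every name in it is hypothetical, and a real agent would replace `plan_next_step` with a model call:

```python
# A toy agent loop: brief it, set boundaries, let it run, escalate when unsure.
# Everything here is illustrative; `plan_next_step` stands in for an LLM
# deciding what to do next.

def plan_next_step(goal, done):
    # Hypothetical planner: return the next outstanding step, or None if finished.
    remaining = [s for s in goal["steps"] if s not in done]
    return remaining[0] if remaining else None

def run_agent(goal, boundaries, max_steps=10):
    done, escalations = [], []
    for _ in range(max_steps):
        step = plan_next_step(goal, done)
        if step is None:
            break  # goal complete
        if step in boundaries["needs_human"]:
            escalations.append(step)  # outside its remit: hand back to a human
            done.append(step)         # mark handled so the loop moves on
            continue
        done.append(step)  # within boundaries: just get on with it
    return done, escalations

goal = {"steps": ["triage inbox", "draft replies", "send refund"]}
boundaries = {"needs_human": ["send refund"]}
done, escalations = run_agent(goal, boundaries)
# The agent handles the routine steps itself and escalates the refund.
```

The point isn't the code. It's that the brief, the boundaries and the escalation rule are the parts a non-technologist owns, which is exactly what the Keep Framework is for.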
What's coming in this series
Over the coming weeks, we're going to explore both frameworks in detail. Not as theory. As practical guidance you can take into your next team meeting.
Next up: a deep dive into the Stuff Framework. What each of the six categories means in practice, where teams typically start, where the biggest quick wins are and the mistakes that waste the most time.
Then: the Keep Framework. How agentic AI changes what's possible, what it means to delegate to an AI agent and how to know which of the six Keeps your organisation should prioritise first.
After that: twelve posts, one for each of the twelve components. Real examples. Real use cases. The kind of detail that turns understanding into action.
If you've been feeling like AI is moving fast and you're not sure where you fit in, this series is for you. Not because you need to become technical. Because you're already capable of leading with this stuff. You just need the right map.
Subscribe so you don't miss the next one. And if you know someone who's been quietly feeling behind on AI, send this their way. They're not behind. They just haven't found the right starting point yet.
It's time to get in the pool.
History has its dividing lines. Moments so significant that everything gets measured against them. Before and after.
For me, that line is Claude Code.
I'm not being dramatic. I'm being precise. My working life now has two eras: BCC (Before Claude Code) and ACC (After Claude Code). And the gap between them is so wide that BCC already feels like a different century.
The World Before
BCC wasn't bad. I was productive. I ran workshops, built strategies, delivered for clients. I used AI tools — ChatGPT, Copilot, the usual suspects. I thought I was ahead of the curve.
I was using AI the way most people still do: as a slightly better search engine. Ask a question, get an answer, copy-paste something useful. Maybe generate a first draft that needed heavy editing. It felt like progress at the time.
But here's what I didn't realise: I was still doing all the heavy lifting. The thinking, the structuring, the connecting of dots, the building — that was all me, with AI occasionally handing me a brick.
The Moment Everything Changed
Claude Code didn't just hand me bricks. It started building alongside me.
The first time I used it properly — not as a chatbot, but as a genuine collaborator — something shifted. I wasn't asking it questions anymore. I was working with it. Planning. Iterating. Building things I wouldn't have attempted alone. Not because I couldn't, but because the time and effort would have made it impractical.
This is the bit that's hard to explain to people who haven't experienced it: Claude Code doesn't just do tasks faster. It changes what you consider possible in a working day.
Projects that would have taken weeks now take days. Ideas that would have stayed in a notebook because "who has the time?" actually get built. The gap between thinking something and shipping it has collapsed.
What ACC Actually Looks Like
Let me be specific, because vague AI hype is everywhere and it helps no one.
My knowledge base is alive. I have an entire repository — my "brain" — that Claude Code reads, writes to, and builds on. It knows my clients, my brand, my voice, my frameworks. Every session picks up where the last one left off. It's not a tool I use. It's a collaborator that knows my work.
I build things I couldn't before. Not because I've suddenly learned to code. Because the barrier between "I want this to exist" and "this exists" has almost disappeared. Want a workshop framework? A content strategy? A client proposal structure? The thinking is mine. The building is ours.
Quality goes up, not just speed. This is the counterintuitive bit. You'd think working faster means cutting corners. The opposite happens. Because the grunt work takes less time, I spend more time on the thinking. More time on whether something is actually good. More time on the bits that matter.
I'm braver. When building something only costs time, you play it safe. When you have a collaborator that can help you prototype in minutes, you try things. Weird ideas. Ambitious ideas. The kind of ideas that die in notebooks when you're working alone.
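For the curious, that "brain" can be as simple as a well-organised repository. Claude Code reads a CLAUDE.md file for standing instructions; the rest of this layout is purely illustrative, not a prescribed structure:

```text
my-brain/
├── CLAUDE.md            # standing instructions: voice, clients, frameworks
├── clients/
│   └── acme.md          # context each new session picks up from
├── frameworks/
│   └── workshops.md
└── outputs/
    └── proposals/       # work built jointly, session by session
```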
Why Anthropic, Though?
I should be clear: this isn't a paid endorsement. Nobody asked me to write this. I'm writing it because I genuinely believe Anthropic is building something different.
It's not just the product — though the product is remarkable. It's the philosophy behind it. The transparency. The thoughtfulness about safety. The way they think about AI as a collaborator rather than a replacement. The way Claude feels to work with — like it actually cares about getting things right, not just getting things done.
Every interaction I have with Claude reinforces something: this company understands that the future of AI isn't about replacing humans. It's about making humans more capable. More creative. More ambitious.
Three recent things that cemented it for me:
1. The Super Bowl ads. Anthropic's first ever TV campaign was bold, funny, and brilliantly crafted — showing what happens when AI conversations get hijacked by ads. Creative that actually says something. Rare in tech.
2. The no-ads pledge. While competitors rush to monetise your conversations with sponsored links, Anthropic publicly committed to keeping Claude ad-free. Your conversations stay yours. That takes conviction.
3. Boris Cherny on Lenny's Podcast. The head of Claude Code sat down to talk about what happens after coding is solved. No hype, no hand-waving — just a builder explaining what they're actually building and why. Refreshing.
That's exactly what Jointly believes about collaboration. The best work doesn't happen when one party does everything. It happens when different capabilities — human and AI — come together.
This Is the Me+AI Era
At Jointly, we talk about three modes of collaboration: Me+AI, Me+Us, and Us+AI. Claude Code is the most powerful expression of Me+AI I've ever experienced.
It's not artificial intelligence pretending to be human. It's artificial intelligence being genuinely, usefully intelligent — in a way that amplifies what I can do rather than replacing what I am.
And here's what excites me most: we're at the very beginning of ACC. This is the earliest this technology will ever be. If it's this transformative now, what does it look like in a year? In five?
What This Means for You
If you're still in the BCC era — still using AI as a fancy search engine, still doing all the building yourself, still keeping ambitious ideas in notebooks — I get it. I was there. The jump feels big.
But here's the thing: the gap between BCC and ACC isn't closing. It's widening. Every day.
You don't have to go all-in overnight. Start small. Pick one project. Work with AI rather than just using it. See what happens when the barrier between thinking and building disappears.
Because once you've experienced ACC, you can't go back. And honestly? You won't want to.
This article was unashamedly written jointly with AI. Obviously. Because that's the whole point.
11 February 2026
Posted By: Spencer Ayres
Last issue we talked about The Most Important Job of the Next Decade. This week we’re showing something we built that is helping hundreds of people get boardroom-quality advice in an instant.
We made a boardroom. A virtual room with five opinionated advisors and a conversation that feels uncomfortably real. None of the advisors exist. But the feedback they give is better than most real meetings we’ve sat through.
Here’s the problem it solves.
The Feedback Gap
You’ve got an idea. A pitch. A strategy. A product concept. And you need someone to tell you what’s wrong with it before you walk into the room that matters.
Your options are limited. You could ask your team, but they’ve been working on it too and they’re too close. You could ask your boss, but that’s also the person you’re trying to impress. You could ask your ‘Mom’, but we know how that ends. You could ask a mentor, but finding 30 minutes in their diary takes longer than the idea stays relevant.
So most people do what most people always do. They go in unprepared and find out what’s wrong with their thinking the hard way. In the room. In front of the people who matter.
We thought there had to be a better way.
How Might We get honest feedback at 11pm on a Sunday?
What The Boardroom Actually Does
You describe what you need advice on - a pitch, a strategy, a product idea, a pricing decision. The Boardroom suggests five advisors from a roster of ten, each with a distinct perspective and personality.
Then you submit your idea and the advisors discuss it. Not one at a time, but actually with each other.
They disagree. They build on each other’s points. They challenge assumptions the others missed.
An Investor who wants to know your unfair advantage.
A CFO who picks apart your unit economics.
A CMO who asks who the hero of your story is.
A Customer who tells you they already have Slack and don’t see why they’d switch.
A Skeptic who asks the question nobody in your real team would dare to: “Why hasn’t someone done this already?”
The result is a structured boardroom discussion - strengths, concerns, tough questions and suggested improvements. The kind of feedback that would take weeks to gather from real people, delivered in five minutes.
Why Multiple Perspectives Matter
Most AI feedback is one voice. You ask ChatGPT what it thinks. It tells you. Politely. Comprehensively. And completely lacking in the creative friction that makes real feedback useful.
Real decisions don’t happen in one head. They happen in the collision between different priorities. The CFO and the CMO don’t see the world the same way. The Investor and the Customer want different things. That tension is where the good thinking lives.
The Boardroom creates that tension deliberately. Five perspectives that don’t agree. Each one responding not just to your idea, but to what the others said about it.
We tested it during our AI Immersion workshop. Teams used The Boardroom to stress-test their pitches before a Dragon’s Den session. The AI advisors surfaced exactly the tough questions the real judges asked 30 minutes later. Teams who’d been through The Boardroom first were visibly more prepared.
One team got hit with “Why hasn’t someone done this already?” by both the AI Skeptic and a real Dragon. They had an answer ready the second time.
When to Use It
The Boardroom works best when you need honest feedback and you can’t get it through normal channels. We’ve been using it for:
Before a pitch — stress-test your argument and prepare for tough questions
Early-stage ideas — find the blind spots before you invest weeks of work
Pricing decisions — hear from the CFO, the Customer and the Investor in the same conversation
Strategy sense-checks — surface assumptions you didn’t know you were making
Solo founders and small teams — get the board meeting you don’t have access to
It’s not a replacement for real human advisors. It’s what you use at 11pm when the real advisors are asleep and your pitch is tomorrow.
What We Learned Building It
Three things surprised us.
1. The advisors need to talk to each other, not just to you. Early versions gave five separate opinions. It was useful but flat. When we made the advisors respond to each other, the quality jumped massively. Real boardrooms have cross-talk. AI ones should too.
“I hear what the Investor is saying, but as the Customer, I don’t care about your unfair advantage, I care about whether this saves me time on Tuesday”
2. People want to be challenged, not validated. We expected users to get defensive when the Skeptic pushed back hard. Instead, the tough feedback is consistently rated as the most useful. It turns out people don’t come to The Boardroom for reassurance. They come because they want someone to find the holes before the real audience does. By the way - Claude totally gets this - did you see their Super Bowl ads? World-class!
3. Five minutes and less than 20p. The whole thing runs on Claude and costs less than 20p per session. A conversation that would take weeks to arrange with five real advisors, delivered in the time it takes to make a cup of tea.
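The cross-talk mechanic is simple to sketch. The Boardroom’s actual implementation isn’t published, so this is a hypothetical outline in which `ask_advisor` stands in for a call to Claude: each advisor sees the idea plus the transcript so far, which is what lets later advisors push back on earlier ones.

```python
# Round-robin advisor discussion. Each advisor responds to the idea AND to
# the transcript so far, which is what produces cross-talk rather than five
# disconnected opinions. `ask_advisor` is a stand-in for a model call.

ADVISORS = ["Investor", "CFO", "CMO", "Customer", "Skeptic"]

def ask_advisor(role, idea, transcript):
    # Stand-in: a real version would prompt an LLM with the role's persona,
    # the idea, and everything the other advisors have said so far.
    context = f" (responding to {transcript[-1][0]})" if transcript else ""
    return f"As the {role}, here is my view on '{idea}'{context}."

def boardroom(idea, rounds=1):
    transcript = []
    for _ in range(rounds):
        for role in ADVISORS:
            reply = ask_advisor(role, idea, transcript)
            transcript.append((role, reply))
    return transcript

session = boardroom("AI note-taking app for sales teams")
# Five entries, one per advisor; every advisor after the first has seen
# what the others said before weighing in.
```

The design choice that mattered was simply passing the growing transcript back in on every turn. That one loop is the difference between a survey and a discussion.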
Try It
The Boardroom is live and free to use. Bring a pitch, a strategy, a product idea - anything you’d want five smart people to argue about. See what comes back.
You might be surprised how much you learn from people who don’t exist.
Try The Boardroom →
3 February 2026
Posted By: Jonny Lang
McKinsey is cutting 10% of its workforce. The firm that tells everyone else how to restructure is restructuring itself.
AI is automating the very work consultants built their model on: gathering data, synthesising research, building slide decks and generating first drafts.
That last one matters most.
The first draft used to be expensive as it took time, training and proper graft to get anything onto the page. Now you can generate ten versions of almost anything before your coffee gets cold.
This doesn’t mean creation got easier. It means the bottleneck moved and the hard work now sits at both ends.
Upstream: deciding what’s actually worth making, who it’s for and why it matters.
Downstream: knowing whether what comes back is any good, fixing what counts and standing behind the result.
The draft in the middle? That’s the easy part now.
AI writes. We edit. That’s the new division of labour.
Mind you, “editing” doesn’t mean what most people think it means.
⸻
Above the Line, Below the Line
In book publishing there’s a distinction between what editors do below the line and above the line.
Below the line is what most people imagine: grammar, clarity, consistency, polish, the application of red pen, tutting and so on.
Above the line is everything else. Should this exist at all? What is it really trying to say? What’s missing? Who is this actually for? When is it done?
Peter Ginna, editor of What Editors Do, describes the role as that of a connector: a conduit between writer and reader, a translator, someone who improves communication in both directions.
That’s not someone fixing commas. It’s someone standing between creation and audience asking one hard question:
Does this work?
Jonathan Karp, now CEO of Simon & Schuster, puts it more bluntly. Editors earn their keep at the acquisitions stage. Choosing what to bet on. “No amount of brilliant editing can turn an unsaleable book into a winner.”
The skill isn’t polish. It’s judgement about what deserves to be polished in the first place.
⸻
What McKinsey Is Really Cutting
When McKinsey talks about its AI strategy, it’s explicit about what stays and what goes.
They’ll keep hiring people who face clients and they’ll shrink the layers that gather, synthesise and present information.
Production work is being automated, while judgement, relationships and accountability are being protected.
In other words, the people who decide what to make and whether it worked.
Editors.
This isn’t about one firm or one industry. As one analysis of the cuts put it, “the premium for future talent will no longer rest on analytical horsepower alone.”
The old moat, being good at processing information, has drained away.
What’s valuable now is knowing what the information means, whether it matters and what to do about it.
⸻
“Editor” Is Not the Grammar Police
When people hear “editor,” they think red pen.
That’s not the job.
The real job is taste, judgement, and accountability. The ability to say this works or this is nonsense and live with the consequences.
These skills were always valuable. They were just harder to see when we were busy typing.
Publishing figured this out years ago. Editors were never content producers. Manuscripts arrived in huge volumes, most of them unusable. The job was selection, shaping and saying no far more often than yes.
That’s now everyone’s job – The lawyer reviewing AI-drafted contracts – The strategist sifting AI-generated scenarios – The marketer choosing between AI-produced campaigns – The leader deciding which insight to back and which to bin.
The cost of production has collapsed and the value of selection has gone through the roof.
⸻
The Skill That Was Hiding in Plain Sight
For decades, consulting and knowledge work ran on a comforting assumption that the hard part was doing the work. Analysis. Research. Synthesis. Presentation.
Clients paid for output.
AI exposes what was always true: the output was never the point. The real value was knowing what question to ask, recognising the right answer when you saw it and having the nerve to act on it.
Clearly those aren’t analytical skills, they’re editorial ones.
⸻
What Editing Looks Like at Work Now
This is the part most people miss.
Modern editing isn’t about fixing text. It’s about shaping thinking.
The editing moves that matter now:
Framing: “What problem are we actually solving here?”
Audience editing:“Who is this really for and what will they care about?”
Insight extraction: “Which trend or data point matters and which is just noise?
Assumption testing: “What would have to be true for this to work?”
Selection: “If we could only keep one idea, which survives?” –
Stopping: “This is good enough. We’re done.”
These show up as prompts too: For example, “What’s the strongest version of this argument and why might it still be wrong?” – “What would a sceptic say in one sentence?” – “What’s missing that would change the decision?” – “If this failed in six months, what would we say we ignored?” – “Which part is trying too hard?”
AI is very good at generating options. It is terrible at choosing.
That’s on us.
⸻
Start Now
At the end of last year we watched a leadership team use AI to generate five versions of a strategy in about ten minutes. Perfectly coherent, nicely structured, all of them very plausible.
Then they sat looking at each other as nobody could say which one was right, or whether any of them were. The AI had done the writing but it couldn’t tell them what they actually believed.
That’s the gap and it’s not going away, so it’s time to train your editorial instinct.
Read more and notice why things work or fall flat. Practice explaining what you’d cut, not just what you’d add. Get comfortable making calls with incomplete information because that’s all you ever have.
Most importantly, get used to being accountable for decisions AI helped you make but won’t help you defend.
The machines can write. But they can’t decide what’s worth writing, or whether it’s good enough. They can’t take responsibility when it matters.
That work has a name. It's called editing. And it's not going anywhere.
Next time: We built a thing. It's a boardroom full of opinionated execs who'll tell you what's wrong with your idea. Except they don't exist, they won’t judge and they're available at 11pm on a Sunday. We'll show you how it works.".
What We’re Reading
Three pieces this week that all circle the same uncomfortable question: in a world where AI can produce anything, who decides what’s actually worth making? The answers point the same direction, toward judgment, discernment and the stubbornly human skill of knowing when to say no:
The Rise of Taste: Why Human Curation Will Define the AI Era — Debris Studio “Taste is a responsibility. It’s not just about what you like. It’s about what you allow in.” A design studio argues that in a world drowning in AI-generated content, the scarcest skill isn’t creation, it’s the wisdom to know what’s worth creating in the first place.
Velocity Is the New Authority. Here’s Why — Om Malik Authority used to be the organising principle of information. You earned attention by being right. That world is gone. Now the algorithm doesn’t care whether something is true, it cares whether it moves. The result: a culture optimised for first takes, not best takes.
AI Is Everywhere. Editors Should Be, Too — Poynter A catalogue of AI-generated disasters from fake books, to fabricated sources, to hallucinated facts, all with one thing in common: no editor in sight.
McKinsey is cutting 10% of its workforce. The firm that tells everyone else how to restructure is restructuring itself.
AI is automating the very work consultants built their model on: gathering data, synthesising research, building slide decks and generating first drafts.
That last one matters most.
The first draft used to be expensive: it took time, training and proper graft to get anything onto the page. Now you can generate ten versions of almost anything before your coffee gets cold.
This doesn’t mean creation got easier. It means the bottleneck moved and the hard work now sits at both ends.
Upstream: deciding what’s actually worth making, who it’s for and why it matters.
Downstream: knowing whether what comes back is any good, fixing what counts and standing behind the result.
The draft in the middle? That’s the easy part now.
AI writes. We edit. That’s the new division of labour.
Mind you, “editing” doesn’t mean what most people think it means.
⸻
Above the Line, Below the Line
In book publishing there’s a distinction between what editors do below the line and above the line.
Below the line is what most people imagine: grammar, clarity, consistency, polish, the red pen, the tutting.
Above the line is everything else: Should this exist at all? What is it really trying to say? What’s missing? Who is this actually for? When is it done?
Peter Ginna, editor of What Editors Do, describes the role as being a connector, a conduit between writer and reader, a translator or someone who improves communication in both directions.
That’s not someone fixing commas, it’s someone standing between creation and audience asking one hard question:
Does this work?
Jonathan Karp, now CEO of Simon & Schuster, puts it more bluntly. Editors earn their keep at the acquisitions stage. Choosing what to bet on. “No amount of brilliant editing can turn an unsaleable book into a winner.”
The skill isn’t polish. It’s judgement about what deserves to be polished in the first place.
⸻
What McKinsey Is Really Cutting
When McKinsey talks about its AI strategy, it’s explicit about what stays and what goes.
They’ll keep hiring people who face clients and they’ll shrink the layers that gather, synthesise and present information.
Production work is being automated while judgement, relationships and accountability are being protected.
In other words, the people who decide what to make and whether it worked.
Editors.
This isn’t about one firm or one industry. As one analysis of the cuts put it, “the premium for future talent will no longer rest on analytical horsepower alone.”
The old moat, being good at processing information, has drained away.
What’s valuable now is knowing what the information means, whether it matters and what to do about it.
⸻
“Editor” Is Not the Grammar Police
When people hear “editor,” they think red pen.
That’s not the job.
The real job is taste, judgement, and accountability. The ability to say this works or this is nonsense and live with the consequences.
These skills were always valuable. They were just harder to see when we were busy typing.
Publishing figured this out years ago. Editors were never content producers. Manuscripts arrived in huge volumes, most of them unusable. The job was selection, shaping and saying no far more often than yes.
That’s now everyone’s job: the lawyer reviewing AI-drafted contracts, the strategist sifting AI-generated scenarios, the marketer choosing between AI-produced campaigns, the leader deciding which insight to back and which to bin.
The cost of production has collapsed and the value of selection has gone through the roof.
⸻
The Skill That Was Hiding in Plain Sight
For decades, consulting and knowledge work ran on a comforting assumption that the hard part was doing the work. Analysis. Research. Synthesis. Presentation.
Clients paid for output.
AI exposes what was always true: the output was never the point. The real value was knowing what question to ask, recognising the right answer when you saw it and having the nerve to act on it.
Clearly those aren’t analytical skills, they’re editorial ones.
⸻
What Editing Looks Like at Work Now
This is the part most people miss.
Modern editing isn’t about fixing text. It’s about shaping thinking.
The editing moves that matter now:
Framing: “What problem are we actually solving here?”
Audience editing: “Who is this really for and what will they care about?”
Insight extraction: “Which trend or data point matters and which is just noise?”
Assumption testing: “What would have to be true for this to work?”
Selection: “If we could only keep one idea, which survives?”
Stopping: “This is good enough. We’re done.”
These show up as prompts too. For example: “What’s the strongest version of this argument and why might it still be wrong?” – “What would a sceptic say in one sentence?” – “What’s missing that would change the decision?” – “If this failed in six months, what would we say we ignored?” – “Which part is trying too hard?”
AI is very good at generating options. It is terrible at choosing.
That’s on us.
⸻
Start Now
At the end of last year we watched a leadership team use AI to generate five versions of a strategy in about ten minutes. Perfectly coherent, nicely structured, all of them very plausible.
Then they sat looking at each other as nobody could say which one was right, or whether any of them were. The AI had done the writing but it couldn’t tell them what they actually believed.
That’s the gap and it’s not going away, so it’s time to train your editorial instinct.
Read more and notice why things work or fall flat. Practice explaining what you’d cut, not just what you’d add. Get comfortable making calls with incomplete information because that’s all you ever have.
Most importantly, get used to being accountable for decisions AI helped you make but won’t help you defend.
The machines can write. But they can’t decide what’s worth writing, or whether it’s good enough. They can’t take responsibility when it matters.
That work has a name. It's called editing. And it's not going anywhere.
Next time: We built a thing. It's a boardroom full of opinionated execs who'll tell you what's wrong with your idea. Except they don't exist, they won’t judge and they're available at 11pm on a Sunday. We'll show you how it works.
What We’re Reading
Three pieces this week that all circle the same uncomfortable question: in a world where AI can produce anything, who decides what’s actually worth making? The answers point the same direction, toward judgment, discernment and the stubbornly human skill of knowing when to say no:
The Rise of Taste: Why Human Curation Will Define the AI Era — Debris Studio “Taste is a responsibility. It’s not just about what you like. It’s about what you allow in.” A design studio argues that in a world drowning in AI-generated content, the scarcest skill isn’t creation, it’s the wisdom to know what’s worth creating in the first place.
Velocity Is the New Authority. Here’s Why — Om Malik Authority used to be the organising principle of information. You earned attention by being right. That world is gone. Now the algorithm doesn’t care whether something is true, it cares whether it moves. The result: a culture optimised for first takes, not best takes.
AI Is Everywhere. Editors Should Be, Too — Poynter A catalogue of AI-generated disasters from fake books, to fabricated sources, to hallucinated facts, all with one thing in common: no editor in sight.
Here's a question that's been nagging at us lately.
What if we've got the whole prompting thing backwards?
The AI conversation has become obsessed with prompt writing and "engineering". How to phrase your request, how to structure your instructions. Basically, how to coax better outputs from the black box. It's undoubtedly a useful and important skill, but one that's over-egged as the answer to being able to say "I'm good at AI".
Unsurprisingly it was in a workshop, with real people working live on real problems, that we began to experiment with something much more interesting.
The magic happens when AI prompts you.
We'd been messing about in our workshops with Miro's new AI capability, specifically what they call "Sidekicks". We made a shift that's subtle but ended up being significant. Instead of having teams ask AI to generate ideas or summarise and document what they'd done, we started configuring Sidekicks to do something different: challenge the team back.
Picture this. A team is mapping out their product strategy. They've been at it for ninety minutes and they're getting comfortable with their assumptions. Then the Sidekick drops a question:
"You've mentioned 'customer experience' twelve times but haven't defined which customers you mean. Who specifically are you designing for and who have you decided to exclude?"
To begin with we get silence and nervous "how did we miss that" laughter. Then, the actual conversation begins and the AI challenge to the group works its magic.
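If you want a feel for what that configuration amounts to, here's a deliberately simple sketch. This is not Miro's actual Sidekick API (the function and message format below are hypothetical, illustrative only); the point is that the whole "challenger" pattern is essentially one system instruction that forbids generating and demands a single pointed question.

```python
# Hypothetical sketch of a "challenger" configuration — NOT Miro's real
# Sidekick API. It illustrates the pattern: instead of asking the model
# to produce content, instruct it to read the board and push back.

def build_challenger_messages(board_text: str) -> list[dict]:
    """Assemble chat-style messages for a challenge-back assistant."""
    system = (
        "You are a workshop challenger, not an assistant. "
        "Read the team's board content. Respond with ONE pointed question "
        "that exposes an undefined term, an unexamined assumption or a "
        "missing perspective. Do not generate ideas, summaries or answers."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Board content:\n{board_text}"},
    ]

# These messages could then be sent to any chat-completion model.
msgs = build_challenger_messages(
    "Goal: improve customer experience. Customer experience is our north star."
)
```

The code is trivial on purpose. The work is in the brief: one instruction flips the tool from answer machine to provocateur.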
This isn't how most people think about AI in collaboration. The default mode is AI-as-assistant. You give it a task, twiddle your thumbs while it does its thing, get an output. All very fast, efficient and pretty scalable at a personal level. But in a room full of people trying to solve a strategic problem, speed isn't the bottleneck, clarity and confidence are. The willingness to say the thing everyone's been dancing around.
And this is where an AI challenger becomes surprisingly useful.
A Miro AI Sidekick doesn't care about hierarchy. It won't soften its question because the boss is in the room. It has no career anxiety. It reads the Miro board as context, gets what's really going on or spots the glaring omission and asks the uncomfortable thing. Maybe it's the thing a junior team member might notice but would never say out loud, or something we miss while facilitating because we're focused on keeping up momentum and energy.
Researchers at Carnegie Mellon have been exploring this exact dynamic, understanding how AI might serve in "partnership or facilitation roles rather than managerial ones." They describe AI as a tool that can provide the user with an alternative perspective. That's exactly what we're seeing. Not AI doing the thinking. AI provoking better thinking.
There's a reason this works particularly well in workshops.
When you're brainstorming alone with ChatGPT, the dynamic is simple. You prompt, it responds, you iterate, you share. But when you're in a room (or on a Miro board, or both) with a dozen other people, the social dynamics get complicated. Who speaks first? Who dominates? Who holds back? Our workshops go a long way to limiting this, but too often the loudest voice still wins, not because their idea is best but because volume is a proxy for confidence.
AI can disrupt this in a useful way. When AI poses a question based on what the group have written on their stickies, not what someone said loudest, it creates a moment of democratic reckoning. Everyone has to engage with the same provocation and it pushes the collective to more and better ideas.
"Being prompted" changes our role too.
Normally, a good facilitator reads the room, notices when things have gone off the boil and thinking is getting stale. They intervene with a question or activity to break the pattern and create progress. That skill still matters. But now you can configure an AI teammate to do some of that pattern-recognition work in real time. We can focus on human dynamics. The AI watches the content.
A piece from The Living Core, a German consultancy puts it nicely. Rather than letting AI do our work, we can create loops where AI prompts deeper exploration of our own ideas. They describe it as "positively disruptive prompting" or AI triggering thoughts we wouldn't have had otherwise.
That's the crux of it all. From AI as answer machine to AI as thinking partner. From prompting it to being prompted by it.
We're still in the early days of figuring this out, even the Miro AI Sidekicks are still in beta and the configurations that work best are still emerging. But we've seen enough to believe this is a meaningful direction.
In a world obsessed with AI outputs, the teams that will thrive are the ones who use AI to improve their inputs, the quality of their questions, the depth of their exploration, the honesty of their conversations.
Stop asking what AI can do for you. Start asking what AI can ask of you.
Next time: We'll look at an old role that's suddenly become essential, the editor. And why the skills it requires matter: they're hard to automate, but harder to define than you'd think.
A new framework for collaboration in the AI era
Here’s a question no one’s asking clearly enough. What actually happens to collaboration when AI shows up?
Not “how do I use ChatGPT better.” Not “will AI take my job.” The harder question: when humans and machines start thinking together, what does good teamwork actually look like anymore?
We’ve spent the past year watching this play out. Working with teams, running workshops, watching what happens when AI gets dropped into existing ways of working. And we’ve come to believe that most organisations are solving the wrong problem. The conversation has been stuck on individual productivity. How do I get better at prompting? How do I save time? But the interesting challenge isn’t at an individual level, it’s what’s happening between people.
Here’s the pattern we keep seeing. AI doesn’t fix broken collaboration. It makes it worse. It amplifies the problems.
The loudest voice in the room used to dominate meetings. Now they dominate meetings and fire off polished looking documents before the rest of us have had time to think at all. Bad assumptions spread quicker. The same team dysfunctions that have always existed are now running at machine speed and with better formatting.
So what do most organisations do? They train people harder. More prompt workshops. More tool tutorials. More people getting clever with AI on their own.
Twenty people who are each good with AI don’t give you a team that’s good with AI. They give you twenty separate experiments, twenty different approaches and confusion about which outputs to trust.
We’re proposing a framework that we’re calling Working Jointly. Not because we have all the answers, but because we need a name for the thing we’re trying to figure out. Three dimensions of joint work that we believe need to develop together. It goes something like this:
Me + AI
How I think, decide, and create alongside AI.
This is where all the attention goes, and fair enough, it’s where everyone has to start. You need individual fluency with AI tools before anything else makes sense. You need to know when the thing is lying to you, when it’s useful and when it’s just making you lazy.
We should be honest here. This dimension has changed how we work. There are only a few of us. AI has let us operate like a company three times our size, creating, researching, prototyping at a pace that wasn’t possible before. We’ve learned a lot about what works, what doesn’t and where the traps are. It’s time we started sharing that.
Me + Us
How we collaborate better as humans.
This is home turf for us. It’s where Jointly started, years before anyone was talking about ChatGPT. We’ve spent a long time helping teams actually think together, using Miro, designing workshops, trying to create the conditions where a room full of smart people produces something smarter than any of them would alone. It’s harder than it looks. Most meetings fail at it.
Here’s what gets overlooked in the AI conversation. The human skills matter more, not less, as AI handles more of the execution. How do we disagree productively? Make decisions under uncertainty? Hold each other accountable? Build trust? Teams with the strongest human collaboration will use AI best. Yet this dimension is often the first casualty in the rush to adopt new tools. We think that’s a mistake.
Us + AI
How teams use AI collectively, not individually.
This is where almost no one is yet. Shared prompts. Shared workflows. Shared practices. Intelligence that compounds across a team over time, not just within individual heads.
Everyone is training individuals. Almost no one is building organisational AI capability. That gap—between individual fluency and collective intelligence—is where we think the real opportunity lives.
AI amplifies existing dynamics. If your collaboration is weak, AI makes it weaker. If it's strong, AI becomes an accelerant.
The argument we're making is this:
The organisations that get this right won’t treat Me, Us, and AI as three separate problems, an AI training initiative here, a culture programme there, some team-building off to the side. They’ll see them as three dimensions of the same thing. A way of working where AI amplifies what teams can do together, and where the human collaboration actually gets better rather than being hollowed out.
What comes next
We're going to work through this, in public. What we’re learning, what we’re getting wrong, what we’re stealing from people smarter than us. We’ll share the practices that seem to help and the experiments that fell flat. There’s no playbook for this yet. We’re writing it as we go, and we’d rather do that out loud than pretend we’ve got it figured out.
How often have you heard this? A big company pays McKinsey millions for some kind of "transformation" strategy. Big words. Big invoice. Three months later it’s in a drawer. Not because it was necessarily wrong, but because it was obvious. Their own people had been saying the same thing for years. They just hadn't been heard. So they'd paid someone in a designer gilet to say it louder.
And you know what? This isn't unusual at all.
Every day, companies pay fortunes for external validation of internal knowledge. They hire strangers to tell them what their own people have been screaming into the void. It's corporate theatre at its most expensive.
Now, it's easy to take a swing at McKinsey (they did tell us all to back the Metaverse, remember?). They make a convenient villain. But the same thing happens with any big consultancy or marketing agency promising to crack your problem at great expense. The pattern is identical: throw the problem over the wall to an outsider, wait for the deck, then wonder why nothing changes.
Here's what nobody wants to say out loud. Outsourcing your thinking is a way of cheating on your team. The signal it sends is brutal. Either you don't trust the answers they've already given you, or worse, you don't believe they have answers worth hearing in the first place. Either way, you've just told your people that a stranger's opinion matters more than theirs.
The expertise problem
Here's what McKinsey won't tell you: your team already knows what needs to be done. They've been living with your problems, watching your customers, fighting your battles every single day. They don't need frameworks. They need permission.
According to our research across 200+ sprints, internal teams identify the right solution 85% of the time. The issue isn't knowledge. It's confidence. It's the political cover to say what everyone's thinking but no one's saying.
The real issue isn't intellectual. It's behavioural. As the saying long attributed to Peter Drucker goes, "Culture eats strategy for breakfast." And your culture is eating your team's best ideas before they even reach the boardroom.
The collaboration fix
So what's the fix?
The solution isn't another consultant. It's actual collaboration. Not "alignment." Not "buy-in." Actual work, together.
Most teams don't need someone to hand them the answer. But they do need help drawing it out of themselves. The knowledge is there, it's just stuck. Buried under hierarchy, habit and the fear of saying the obvious thing out loud.
That's what we do. We run proper collaboration sessions, on Miro, with real structure that brings everyone together and draws those answers out. No months of interviews. No waiting for a massive deck. Just the right people, the right questions and the right space for it all to come together.
Making space for truth
Workshops work because they bypass the hierarchy that kills honesty. They create what psychologists call "psychological safety" - the confidence to speak without career consequences.
Here, the intern can challenge the CEO's assumption. The engineer can question the marketing strategy. The quiet thinker gets the same airtime as the confident speaker.
It's not magic. It's method. And it's exactly what your team needs to beat any consultancy at their own game.
Why it beats McKinsey
It takes a fraction of the time. A fraction of the money. And when it's done, the team owns the outcome. They built it. They believe it. They have skin in the game.
Your people are better than your procurement habits suggest. Every consultancy contract is a vote of no confidence, whether you mean it that way or not.
The brains are already on payroll. The experience is already in the building. What's missing isn't capability; it's the conditions to use it.
So before you brief another agency, ask a harder question. When did you last give your team the space, the tools and the permission to solve this themselves?
You might find they've been ready for a while. They were just waiting to be asked.
Don't bring in outsiders. Bring people together.
In the rush to embrace AI, we've turned it into the ultimate productivity theatre. Reports materialise in minutes, slide decks assemble themselves, emails arrive perfectly phrased with those telltale Oxford commas. Everything looks professional until someone tries to use it and then the facts don't hold up, the logic dissolves, the ideas collapse under the weight of their own polish.
There's a name for this now. Workslop. The growing flood of AI-generated output that looks like work, sounds like work, but adds nothing of value.
According to researchers at Stanford and BetterUp, it already accounts for around 15% of work in most organisations (we think it's much more than that), costing time, money and trust as businesses begin to drown in nonsense.
The real issue isn't technological, it's behavioural. As Cassie Kozyrkov wrote in Harvard Business Review, workslop is "thoughtlessness enabled by AI". When we can skip the hardest part of work, the actual thinking, our instincts tell us to do exactly that. And when everyone's doing it, we get thoughtlessness at scale.
Good friction
AI has quietly stripped away something we didn't realise we needed - friction. The conversations, the disagreements, the questioning. All the messy (and frankly enjoyable) human stuff that forced us to make sense before we spoke.
Without it, we just get cognitive pollution. Why? Because we've treated AI like a vending machine for answers instead of a tool for better thinking.
So what's the fix?
Not another layer of software. An older, simpler idea: proper collaboration. The workshop.
Workshops have always been places where people slow down to think together: to question, debate and connect ideas until they actually make sense. Now, with AI-enabled workshops on platforms like Miro, we can have the best of both worlds.
Quiet correction
Workslop happens when organisations confuse output with outcome. When they chase more instead of better.
But the companies that thrive in the age of AI won't be the ones generating the most words. They'll be the ones generating the most sense.
The workshop is a key component of how we get there. A space where AI makes it easier to start the conversation, not finish it.
That's exactly what we're building at Jointly.
A place where teams use AI not to avoid the hard work of thinking, but to think better, together. More workshop. Less workslop.
Some tools shout. Some tools show off. Some tools think they’re the star. Miro doesn’t.
It doesn't dominate proceedings. It doesn't try to replace you. It just gives your thinking somewhere to go, somewhere it can be seen. By you. By the team.
Not hidden in slides. Not scattered in Slack. Out in the open.
And that’s what matters. Because it’s the difference between working hard. And actually working together.
The browser for work
A browser isn’t the internet. It’s just how you get there. That’s Miro. It’s not the work. It’s the space that makes the work happen.
A universal canvas where ideas from different people, different disciplines, different time zones can exist and develop in the same space at the same time.
Every browser knows its job is to get out of the way and let you reach what matters. Miro understands the same thing.
Together Isn’t a Tab
Modern work is lonely. Everyone’s busy. No one’s present.
But in Miro, presence comes back. Not through chaos, but through structure that invites contribution, not control.
Half-baked ideas? Good. Bring them in. This isn’t about making a mess. It’s about giving thinking room to breathe. And structure to take shape. That’s what real creativity needs.
Miro gives you a stage, not a script. It trusts you to think for yourself.
When a tool gets out of the way, people step up. This is why, for modern teams trying to rebuild connection across offices and time zones, Miro remains peerless. Because it doesn’t fragment. It unites.
Why we picked it
We tried the lot. FigJam, Mural, Zoom Whiteboard, Microsoft Whiteboard—all the usual suspects.
They all sort of worked. Technically.
But Miro felt different. It thinks like we do. We don’t want automation. We want augmentation. Not tools that do the work for us. Tools that give the work a home.
Because when your platform becomes your practice, you need one that was built for working together. Not just working.
Collaborative AI, not solo AI
Most AI is needy. You prompt. It answers. Repeat. You're stuck in a loop of explaining context, trying to extract something useful.
Miro flips this with a simple but profound idea. The canvas is the prompt.
It sees what the team sees. It knows what you’re trying to do.
And it joins in. Not as a robot. As another brain in the room. The kind that nudges. Pushes. Questions. Connects. It’s not about replacing your thinking. It’s about provoking better thinking.
Collaborative AI that makes teams think better, not less.
"AI's biggest opportunity lies in teamwork and accelerating outcomes that teams are driving, not just individual productivity. The canvas is the best surface to bring teams together with AI."
Andrey Khusid, Founder and CEO, Miro
Take Miro's prototyping capabilities. You’ve seen this situation. A really great idea. The team’s excited. Then someone says "great idea, let me take that away and build it" and the collaborative energy dies.
Miro stops that happening. Because the idea never leaves the room. Prototypes get made right there. Live. On the canvas. That’s what the AI’s for. Momentum. Not just answers.
What love looks like
So yes, we love Miro.
Not because it’s flashy. Not because it's clever. But because it’s thoughtful. Because it knows when to be quiet. And when to speak up.
And in a world full of “solutions” that isolate people, Miro brings us together.
It’s not just a tool. It’s a place. A place where thinking lives. Where teams work out loud. Where AI doesn’t take over.
That’s why we use it. That’s why we trust it. That’s why it’s home.