
October 16, 2025

This post explores the growing confusion around AI agents, and why the term has quietly started meaning everything and nothing at the same time. When anything that moves, responds, or automates gets called an “agent”, it becomes harder to tell what we’re actually building. Or trusting. And now that OpenAI Agent Mode is entering the mainstream, well… let’s just say things are about to get interesting. There’s a lesson in all this. Somewhere between a Mars orbiter and a product launch. And it starts with asking what the word agent should really mean.

While the technology behind OpenAI Agent Mode isn’t brand new, its arrival marks something else: the start of a broader shift in expectations. For the first time, millions of users now get to interact with AI systems that look, talk, and act like real digital teammates. The kind you can hand something off to.

An agent is a system that reliably handles complex work, takes on tasks autonomously, and executes at a high level of proficiency. Even when it hasn’t seen that task before. It implicitly uses reasoning to address new challenges. It’s not just a tool. It’s a teammate. That’s the perspective I work from. It’s also consistent with what’s now becoming shared direction across the field, from OpenAI to DeepMind, Anthropic, and Microsoft.

So let’s get specific. This is the definition you can rely on.

An AI Agent is defined by four key traits:

Acts autonomously. Not just when prompted. It initiates and sustains actions on its own.
Operates through a persona. A role, mindset, or identity that shapes how it interprets and responds.
Sets goals and reasons. It can define objectives and create a plan to reach them.
Interfaces with the world. Through APIs, tools, code, systems, or files. It doesn't just talk. It acts.
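The four traits can be sketched in code. This is a toy illustration, not a real implementation: every class and method name here is hypothetical, and the "planning" step is a stub where a real agent would use an LLM to reason.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative sketch of the four traits; all names are hypothetical."""
    persona: str                            # operates through a persona
    goals: list = field(default_factory=list)

    def set_goal(self, goal):               # sets goals...
        self.goals.append(goal)

    def plan(self, goal):                   # ...and reasons (stubbed here;
        # a real agent would derive steps with an LLM, not a template)
        return [f"research {goal}", f"draft {goal}", f"deliver {goal}"]

    def act(self, step):                    # interfaces with the world
        # (a real agent would call APIs or tools; here we just log)
        return f"[{self.persona}] did: {step}"

    def run(self):                          # acts autonomously: drives its
        log = []                            # own loop until goals are done
        while self.goals:
            goal = self.goals.pop(0)
            for step in self.plan(goal):
                log.append(self.act(step))
        return log

agent = Agent(persona="travel assistant")
agent.set_goal("book a flight")
for line in agent.run():
    print(line)
```

The point of the sketch is the loop: once given a goal, the agent keeps going on its own, rather than waiting for the next prompt.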

You may also have come across the term multi-agent systems. And yes, even within a single well-designed agent, there can be internal structure: components that reason, fetch, plan, and execute. In that sense, some agents already behave like small multi-agent systems on the inside. But what makes a true multi-agent system isn’t how many functions live under the hood. It’s the way multiple agents interact with each other toward a shared goal. There’s a shared direction taking shape. Agents that plan, act, adapt, and interface with the world. But what’s often missing is a clear line around what isn’t an agent.
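That distinction, interaction between agents rather than structure inside one, can also be sketched. Again, a hypothetical toy: two agents pass work between each other toward a shared goal.

```python
class SimpleAgent:
    """Minimal stand-in for an agent; names are illustrative."""
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def handle(self, task):
        return f"{self.name} ({self.skill}) handled: {task}"

def multi_agent_run(task, agents):
    """What makes this 'multi-agent' is the hand-off between agents,
    not how many components any single agent has inside."""
    result = task
    for agent in agents:
        result = agent.handle(result)
    return result

team = [SimpleAgent("Researcher", "gather facts"),
        SimpleAgent("Writer", "draft report")]
print(multi_agent_run("quarterly summary", team))
```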

Let’s talk about that too.

Take tools like n8n, Make.com, or tightly scoped wrappers that string together prompts and actions. These aren't agents. They're automations. They don't take ownership. They don't reason across steps. They don't carry intent. You can't hand them a task the way you would hand one to a teammate. They just follow instructions, often enhanced by Gen AI.
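For contrast, here is what that kind of automation looks like stripped to its essence: a hard-coded sequence of steps. The steps themselves are placeholders; the point is the fixed order and the absence of goals, persona, or initiative.

```python
def automation(text):
    """A wrapper that strings steps together in a fixed order.
    It never decides, plans, or initiates; it only runs when called."""
    steps = [str.strip, str.lower, lambda s: s.replace(" ", "-")]
    for step in steps:
        text = step(text)
    return text

print(automation("  File The Report  "))  # → "file-the-report"
```

Swap any step for an LLM call and it's still an automation. The pipeline doesn't become an agent just because one of its steps is generative.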

And that’s okay.

Not everything needs to be agentic. Automation plays an important role. But if we blur the line too much, we lose the ability to design for trust, initiative, and adaptability. And that’s where the real shift is happening: anything a person does through a digital interface can now be done by an agent. From managing calendars and filing reports to writing code and booking travel. And just like we’ve learned to question whether an image is real or generated, we’ll soon be asking the same about actions. Who submitted that? A person, or their agent?

That shift won’t happen overnight. But it’s already begun. And it’s going to reshape how we build systems, structure teams, and define accountability. We can call everything an agent if we want. Maybe that ship has already sailed. But before the next spacecraft crashes into the wrong planet, it’s worth remembering why that happens. Back in 1999, the Mars Climate Orbiter was lost because two teams used different units of measurement. One worked in imperial units, the other in metric. Everything looked fine. Until it wasn’t. The mission failed not because the math was wrong, but because the definitions were never aligned.

Are we talking about autonomous agents? Or automated systems with AI support?

That question won’t matter in every conversation. But in some, it’s the difference between clarity and chaos.

If this helped make the landscape a little clearer, use it. Quote it. Share it. Drop it into your next deck. And if you do, feel free to point people here. That’s how useful thinking travels.

Curated Source Material

This post is based on field work, but if the topic sparks something deeper, here’s a selection of reports that add context to these changes:

The Rise and Limits of Tool-Use in AI Agents, Anthropic, 2025

AI Agents in the Wild, OpenAI Technical Report, 2025

Understanding Autonomous AI Systems, DeepMind Research, 2025

Karl Fridlycke


Lead Gen AI Strategist, Sogeti Sweden
