October 15, 2025
Most companies now report using AI somewhere (McKinsey puts the figure at 78%), yet only a small fraction convert it into measurable value: Boston Consulting Group estimates that roughly 5% realize impact at scale. This piece explains why now is the moment to close that gap, and how to do it without hype or drama.
If you do not yet have a controlled agent pilot underway, you are already behind on the learning curve. Not because the models suddenly became magical. Because your stack did. Because governance did. Because the work you need now stretches across tools and time. The point is not speed for its own sake. The point is to establish guided experimentation, so you learn where agents belong, and where they do not, while risk stays contained and reversibility stays high.
You no longer need a shadow stack to try agentic work.
The enterprise stack has matured. What used to be scratch-built and unsustainable now exists out of the box – with plug-and-deploy solutions available in the platforms you already run. Identity, policy, observability, logging, key management, access control, versioning – ready on day one. At the same time, task horizons have changed. Agents can pursue goals over longer periods, plan, use tools, coordinate. And the guardrails are no longer an afterthought: evals, traceability, small blast radius, human review when it matters. None of this forces you into agents. It lets you choose them when they are the best instrument for a real problem.
Lead with the problem, not the technology.
Do not introduce agents to introduce agents. Start with the problem. You do not buy a drill; you buy a wall put up straight and true. The toolbox matters – automation, assistants, agents – but you hire the job, not the tool. Sometimes the cleanest fix is mechanical. A set screw beats any model when it removes the root cause. When logic is crisp and deterministic, classic automation will usually win on stability and cost. When it is a single bounded answer, an assistant or a search workflow is enough. Use an agent when the work benefits from sustained goal pursuit across tools within clear constraints. The agent is a means, not the goal.
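The triage above can be sketched as a small decision helper. This is an illustrative sketch only: the function name and the three flags are assumptions that mirror the three cases in the text, not a prescribed framework.

```python
def choose_instrument(deterministic_logic: bool,
                      single_bounded_answer: bool,
                      needs_sustained_goal_pursuit: bool) -> str:
    """Illustrative triage: hire the job, not the tool.

    All three flags describe the problem, not the technology.
    """
    if deterministic_logic:
        # Crisp, repeatable rules: classic automation wins on stability and cost.
        return "automation"
    if single_bounded_answer:
        # One contained question: an assistant or search workflow is enough.
        return "assistant"
    if needs_sustained_goal_pursuit:
        # Goal pursuit across tools within clear constraints: agent territory.
        return "agent"
    # Default to the simplest instrument until the problem says otherwise.
    return "automation"
```

Note the ordering: the cheaper, more stable instruments are checked first, so an agent is only chosen when nothing simpler fits.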
Clarity lowers risk. Keep a shared language in the room. Automation is fixed logic with repeatable outcomes. An assistant is conversational help for short tasks with low autonomy. An agent acts toward a goal, plans, calls tools, and keeps state over time. In production, keep a human in the loop where it matters and set an explicit allowance for autopilot inside tight boundaries. Widen or tighten that allowance as evidence accumulates. Start where maturity is real and the surface area is small. Expand when the numbers, and the reviews, say so.
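The "explicit allowance for autopilot inside tight boundaries" can be made concrete as data rather than policy prose. A minimal sketch, assuming a tool whitelist and a per-action cost limit as the boundary; the class name, fields, and limits are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutopilotAllowance:
    """Illustrative sketch of an explicit autopilot boundary.

    Inside the boundary the agent proceeds on autopilot; outside it,
    the action is routed to a human reviewer.
    """
    allowed_tools: frozenset
    max_cost_per_action: float

    def permits(self, tool: str, estimated_cost: float) -> bool:
        # Both conditions must hold for the agent to act without review.
        return (tool in self.allowed_tools
                and estimated_cost <= self.max_cost_per_action)

    def widened(self, extra_tools, new_cost_limit: float) -> "AutopilotAllowance":
        # Widen only as evidence accumulates; the old allowance stays auditable.
        return AutopilotAllowance(self.allowed_tools | frozenset(extra_tools),
                                  new_cost_limit)
```

Usage: an allowance of `AutopilotAllowance(frozenset({"search"}), 1.0)` permits a cheap search call but routes a deploy action, or anything over the cost limit, to a human. Widening returns a new object, so every change of boundary is an explicit, reviewable decision.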
Before anything reaches real users, act like production from the start. Contain risk by design. Keep access on least privilege. Instrument every step so decisions can be reviewed. Define the success signal. Capture a baseline. Keep the feedback loop short enough to steer. Make rollback easy and owned by name. None of that is bureaucracy. It is what makes “now” possible.
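The "act like production from the start" checklist can be enforced mechanically before a pilot reaches real users. A minimal sketch, assuming the pilot's state is a plain dictionary; the keys are hypothetical names mirroring the guardrails in the paragraph above.

```python
def missing_guardrails(pilot: dict) -> list:
    """Return the guardrails still missing before real users see the pilot.

    The keys and wording are illustrative, one per guardrail in the text.
    """
    checks = {
        "least_privilege": "access scoped to least privilege",
        "step_logging": "every step instrumented for review",
        "success_signal": "success signal defined",
        "baseline": "baseline captured before go-live",
        "rollback_owner": "rollback easy and owned by name",
    }
    # A guardrail counts as met only when its key is present and truthy.
    return [reason for key, reason in checks.items() if not pilot.get(key)]
```

A gate like this turns "none of that is bureaucracy" into a one-line precondition: ship only when the returned list is empty.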
Culture will decide whether the shift pays off. As I have argued before: judge outcomes rather than provenance, blind the obvious choke points for bias, and normalize assistance where it clearly raises quality or reduces lead time. Do it in policy and in practice. That is how adoption stays honest – and fast – without another spacecraft crashing because the trajectory was wrong.
What leadership should do now is straightforward and measured. Acknowledge why it is now: platform maturity, longer task horizons, real governance. Then behave like builders. Hire the job, not the tool. Be willing to conclude that a simple assistant or a piece of classic automation is the right answer when it is. Keep human in the loop where it matters and be explicit about the autopilot you allow. Treat reviewability as a design constraint, not a post-mortem wish. Run one carefully scoped pilot where maturity already exists, publish the deltas that matter, and decide – scale, pause, or retire. The point, throughout, is simple: it is “now” because the stack and governance have matured – it works because you lead with the problem. Make way for AI in Action – you can learn more about how some of our clients are already putting these principles into practice.
If this helped make the landscape a little clearer, use it. Quote it. Share it with your leadership team. That’s how useful thinking travels.
Curated Source Material
This piece comes from the field. If you want the research layer, here is a curated set of sources that frame the patterns discussed.
– The Hidden Penalty of Using AI at Work, Harvard Business Review (Aug 1, 2025).
– How People Are Really Using Gen AI in 2025, Harvard Business Review (Apr 9, 2025).
– The state of AI: How organizations are rewiring to capture value, McKinsey (Mar 2025).
– The Widening AI Value Gap: Build for the Future 2025, Boston Consulting Group (Sep 30, 2025).
– State of AI Report 2025, Air Street Capital (2025).
Lead Gen AI Strategist, Sogeti Sweden