
Beyond generative AI
March 19, 2025
Generative AI has dominated headlines for two years, yet the results have underwhelmed. Nearly eight out of ten companies report using generative AI, but just as many admit they’re not seeing meaningful bottom‑line impact[1]. This is the “Gen AI paradox”: investment grows, productivity doesn’t. The obvious conclusion is that content generation and chatbots aren’t enough. If you want systems that can actually carry out work – not just answer questions – you need agents that plan, reason and act on your behalf.
Gartner calls agentic AI one of the top technology trends of 2025 and predicts that by 2028 these agents could automate 15 % of routine decision‑making and be embedded in a third of enterprise applications[2]. Yet hype masks how immature the field is. The majority of organisations remain stuck in pilot mode[3]; only 11 % have deployed agents at scale[3]. This article separates signal from noise. We define agentic AI, compare it with robotic process automation (RPA), lay out a blueprint for adoption, dissect the governance and security challenges and provide a brutally honest view on return on investment. Prepare to have assumptions challenged.
What exactly is agentic AI?
Most definitions of agentic AI bury the lead. In plain English, agentic AI refers to autonomous systems that sense their environment, plan multi‑step actions, make decisions and then execute tasks without requiring constant human prompting. According to Microsoft’s definition, an AI agent “plans, reasons and acts to complete tasks with minimal human oversight”[4]. The University of Cincinnati’s research distils this into five core capabilities: autonomy, contextual reasoning, adaptable planning, natural language understanding and the ability to take action[5].
At a systems level, most agentic frameworks follow a four‑step loop – sense, plan, act and learn[6]. Agents first assess a task and gather data, then break it into steps and decide what to do, execute the steps using tools and APIs, and finally learn from the outcome to improve future performance.
Crucially, agentic AI is not just a more sophisticated chatbot; it couples large language models (LLMs) with memory and planning modules so that it can maintain context, retrieve knowledge, chain reasoning steps and call external tools[7]. That ability to integrate with systems and take actions is what moves AI from suggestion to execution[8]. Some call this the sense‑plan‑act‑reflect cycle – a cycle humans follow instinctively but which is only now being replicated in software[9].
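To make that loop concrete, here is a minimal sketch in Python. Everything in it is illustrative: the `llm` callable, the plan dictionary and the tool registry stand in for whatever framework you use; they are not a real API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)   # working memory: keeps context across steps
    tools: dict = field(default_factory=dict)    # name -> callable (APIs, scripts, RPA bots)

    def sense(self) -> dict:
        # Gather context; a real agent would also pull from databases, APIs, documents
        return {"recent_steps": self.memory[-3:]}

    def run(self, llm, max_steps: int = 10):
        for _ in range(max_steps):
            observation = self.sense()                       # 1. sense
            plan = llm(self.goal, observation)               # 2. plan: LLM + planner pick the next step
            if plan.get("done"):
                return plan.get("result")
            outcome = self.tools[plan["tool"]](**plan["args"])       # 3. act via an external tool
            self.memory.append({"plan": plan, "outcome": outcome})   # 4. learn: feed the outcome back
        raise TimeoutError("goal not reached within step budget")
```

The point of the structure is the feedback edge: step 4 writes back into the memory that step 1 reads, which is exactly what a static script or a stateless chatbot lacks.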
Comparing agentic AI to RPA and generative AI
It’s tempting to lump agents into the same bucket as robotic process automation. Don’t. RPA tools are deterministic scripts designed for structured, repetitive tasks; they execute predefined actions, rarely store context, and break when processes change[10]. Agentic AI is goal‑driven and outcome‑oriented: it uses LLMs and planning algorithms to decide what to do based on the desired result rather than following hard‑coded rules[8]. Agents can reason about ambiguous situations, adapt plans when conditions shift and work across unstructured environments such as email inboxes or supply chains. RPA is cheap and quick to deploy but brittle. Agentic AI is more flexible but also resource‑intensive and requires strong guardrails. The table below summarises the differences.
| Capability | RPA | Generative AI | Agentic AI |
| --- | --- | --- | --- |
| Primary function | Automate repetitive, rule‑based tasks | Produce text, images or code from prompts | Plan, reason and execute multi‑step tasks |
| Core mechanism | Predefined scripts and if/then rules | Large language models generating content | LLMs + memory + planning + tool use |
| Adaptability | Low – brittle if the process changes | Limited – responds to prompts but doesn’t act autonomously | High – adapts plans based on context and feedback[10][11] |
| Operational scope | Structured environments (e.g., invoicing, data entry) | Content generation and summarisation | Unstructured environments (e.g., supply chains, IT troubleshooting) |
| Agency | Executes predefined actions | None – waits for prompts | Yes – acts on goals, makes decisions, performs external actions |
| Governance needs | Basic access control, process monitoring | Content safety and hallucination mitigation | Comprehensive identity and lifecycle management, continuous monitoring, ethical and legal oversight[12] |
Bottom line: RPA remains useful for repeatable processes and generative AI for creating content, but agentic AI supplies the missing ingredients – autonomy and goal‑orientation.
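The difference is easiest to see side by side. The snippet below contrasts a brittle RPA‑style script with a goal handed to a hypothetical agent; `post_to_erp` and the commented‑out `agent.run` are stand‑ins, not real integrations.

```python
def post_to_erp(amount: float, vendor: str) -> None:
    print(f"posted {amount} for {vendor}")       # stand-in for a real ERP call

# RPA: deterministic if/then rules -- breaks the moment the invoice format changes
def rpa_process_invoice(invoice: dict) -> None:
    if invoice.get("format") != "v2":            # hard-coded structural assumption
        raise ValueError("unknown format")       # brittle: a human must now intervene
    post_to_erp(invoice["amount"], invoice["vendor"])

# Agentic: state the outcome and let the agent plan around anomalies
goal = ("Post this invoice to the ERP. If the format is unrecognised, "
        "extract amount and vendor from the document and flag it for review.")
# agent.run(goal)  # the agent decides which tools to call and when to escalate
```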
Inside the agentic AI architecture
An effective agent is more than an LLM plugged into a prompt. Leading architectures – whether open frameworks like LangGraph or enterprise platforms like Okta and SAP – share several common layers:
- Perception and data ingestion. Agents collect data from multiple sources (internal databases, APIs, documents, sensors) and maintain a working memory of the conversation or workflow[7].
- Reasoning and planning. LLMs provide natural language understanding and generation, while graph‑based planners decompose goals into sub‑tasks and decide which tools to call[13]. Memory modules enable multi‑step reasoning and reflection so that agents can adjust their plans as new information arrives[14].
- Tool execution and integration. Agents call external tools (APIs, scripts, RPA bots, enterprise applications) to take actions. Frameworks like SAP’s orchestration agents show how agents can trigger workflows across production, logistics and finance systems[15].
- Learning and feedback. Reflection modules analyse outcomes, update internal models and feed lessons back into the planning module[16]. This continuous improvement loop is what differentiates agentic AI from static automation.
In enterprise settings, these components must be wrapped in a mesh of governance services – identity management, access control, observability, and risk management – to ensure that autonomous agents remain trustworthy and compliant[17]. Without that mesh, agents become a security and liability nightmare.
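Here is a simplified sketch of how that governance mesh might wrap the tool‑execution layer: every call passes an identity check and lands in an audit log. The class names and policy model are our own illustration, not any specific vendor’s design.

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceMesh:
    allowed: dict = field(default_factory=dict)   # agent_id -> set of permitted tool names

    def authorize(self, agent_id: str, tool: str) -> bool:
        return tool in self.allowed.get(agent_id, set())   # least-privilege check

@dataclass
class ToolExecutor:
    tools: dict                                   # tool name -> callable
    mesh: GovernanceMesh
    audit_log: list = field(default_factory=list)

    def call(self, agent_id: str, tool: str, **args):
        if not self.mesh.authorize(agent_id, tool):
            raise PermissionError(f"{agent_id} is not permitted to call {tool}")
        result = self.tools[tool](**args)
        self.audit_log.append({"agent": agent_id, "tool": tool, "args": args})  # traceability
        return result
```

Putting the check inside the executor, rather than inside each agent, means a compromised or misbehaving agent still cannot reach beyond its granted scopes.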
Where the market is today
Despite the hype cycle, adoption remains uneven. KPMG’s Q1 2025 AI Pulse Survey found that organisations are rapidly experimenting with AI agents: pilot adoption jumped from 37 % to 65 % in one quarter, yet production deployments remain stuck at 11 %[3]. Executives cited risk management (82 %), poor data quality (64 %) and trust in generative AI (35 %) as the top challenges to deployment[18]. Complexity and skills gaps are also obstacles – 66 % of leaders pointed to system complexity and 51 % to workforce skills[19].
The PagerDuty Agentic AI Survey (March 2025) paints a more bullish picture: 51 % of companies already have AI agents in production and another 35 % plan to deploy them within two years[20]. Two‑thirds of UK companies and 60 % of Australian firms are ahead of the curve[21]. Why the optimism? Agentic AI has delivered tangible returns: 62 % of executives expect more than 100 % ROI, with an average expected return of 171 %; U.S. respondents expect nearly a 192 % return[22]. Those expectations are fuelled by generative AI successes – 62 % of companies had already realised triple‑digit ROI on generative AI[23]. However, leaders also warned that rushing into agentic AI without planning leads to regrets: 36 % admitted that failing to set ROI expectations was a mistake they hope not to repeat[24].
What can we conclude from these numbers? Organisations see enormous potential in agents, but they underestimate the cultural, technological and governance hurdles. Without a clear operating model, the ROI figures will remain aspirational. For executives who want to get ahead, the next section lays out a practical blueprint.
A four‑phase blueprint for agentic AI
James Cullum’s “Agentic AI Operating‑Model Blueprint” is the most concrete guide published to date. The model emphasises that adoption is not a linear project but a continuous loop of discovery, design, deployment and stewardship[25]. Below is a summary adapted for a 90‑day sprint model that CIOs can start implementing today.
Phase 1 – Value discovery (weeks 1–2)
Before writing a single line of code, identify high‑leverage problems where agency – human or machine – would make a measurable difference. Cullum uses a Value Discovery Canvas to map pain points, decision bottlenecks and data sources[26]. Interview frontline staff to understand where people act as “human routers” passing data between systems and where judgment is required[27]. Quantify the cost of indecision, not just inefficiency. The goal is to select one or two pilot use cases that have clear success metrics.
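As a working artefact, the canvas can be as simple as a structured record per candidate use case. The fields below follow the canvas items listed in the implementation checklist later in this article; the values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ValueDiscoveryCanvas:
    problem_statement: str
    decision_points: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    roles: list[str] = field(default_factory=list)
    success_metrics: dict = field(default_factory=dict)

# Hypothetical logistics pilot captured on the canvas
pilot = ValueDiscoveryCanvas(
    problem_statement="Exception triage in inbound logistics takes 4h per incident",
    decision_points=["reroute vs. wait", "escalate to carrier"],
    data_sources=["TMS events", "carrier APIs", "weather feed"],
    roles=["dispatcher (approver)", "triage agent"],
    success_metrics={"time_to_resolution_hours": 1.0, "csat_delta": "+5%"},
)
```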
Phase 2 – Design (weeks 3–4)
Design isn’t just about architecture; it’s about clarifying roles and building feedback loops. Define a clear mandate for each agent (what decision it can make and when it must hand off to humans), choose a modular architecture (start with a single agent and expand later) and ensure explainability[28]. Involve end‑users early; co‑design interfaces and discuss how they will override or approve agent actions. Use the Sense–Plan–Act–Reflect pattern as your baseline architecture[9]. For example, a logistics pilot might assign one agent to triage exceptions, another to cross‑check external data and a third to handle escalations[29].
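A mandate can be written down as plain configuration before any code exists. The example below is hypothetical, based on the logistics pilot just described; the field names are ours, not a standard schema.

```python
# What the triage agent may decide alone, and where it must hand off to a human
triage_mandate = {
    "agent": "exception-triage",
    "may_decide": ["classify the exception", "request carrier status updates"],
    "must_hand_off": ["reroutes above $10k", "any customer-facing commitment"],
    "explainability": "log the rationale for every classification",
    "escalation_target": "on-call dispatcher",
}
```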
Phase 3 – Deployment (weeks 5–8)
Most AI projects die during the pilot phase. Treat deployment as an engineering exercise: integrate the agent with existing systems via APIs, use cloud‑native CI/CD pipelines for version control, and build observability dashboards to track not only uptime but impact[30]. Start with a 30‑day pilot and define success metrics upfront: time‑to‑resolution, customer satisfaction, error rates, etc. Provide a feedback channel for human users; store both quantitative data and qualitative feedback for the learning loop[31].
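Observability should answer “is this pilot beating the baseline?”, not just “is the agent up?”. A minimal sketch, with hypothetical numbers:

```python
from statistics import mean

baseline = {"time_to_resolution_h": 4.2, "error_rate": 0.08}   # pre-pilot measurements
pilot_runs = [                                                  # one record per handled case
    {"time_to_resolution_h": 1.9, "error_rate": 0.05, "human_override": False},
    {"time_to_resolution_h": 2.4, "error_rate": 0.06, "human_override": True},
]

ttr = mean(r["time_to_resolution_h"] for r in pilot_runs)
improvement = 1 - ttr / baseline["time_to_resolution_h"]
override_rate = mean(r["human_override"] for r in pilot_runs)   # rough proxy for user trust
print(f"Time-to-resolution improved {improvement:.0%}; humans overrode {override_rate:.0%} of actions")
```

Tracking the override rate alongside the hard metrics matters: a pilot that hits its speed targets but gets overruled constantly has not earned trust and will not survive scaling.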
Phase 4 – Stewardship (weeks 9–12 and beyond)
The biggest gap in most agentic AI projects is governance. Cullum argues that stewardship is a moral and technical responsibility requiring cross‑functional oversight, transparent reporting and mechanisms for redress[32]. Establish a stewardship board comprising technologists, compliance officers, risk managers and end‑users. Audit agent decisions regularly using both quantitative metrics (accuracy, bias, drift) and qualitative reviews. Publish transparency reports and invite feedback[33]. Use the NIST AI Risk Management Framework as a baseline for risk assessments; it encourages organisations to incorporate trustworthiness into the design, development and use of AI systems[34]. The final deliverables of this phase include a post‑pilot impact report and a recommendation for scaling or killing the pilot.
Implementation checklist
Cullum’s checklist distils the blueprint into actionable tasks[35]:
- Map value opportunities using the Value Discovery Canvas (problem statement, decision points, data sources, roles, success metrics).
- Define clear agent roles and feedback loops.
- Pilot with modular, explainable architectures and integrate with existing systems using cloud‑native tools.
- Build observability and monitoring from day one.
- Establish stewardship and governance structures early.
- Measure impact continuously and adapt.
Following this cycle doesn’t guarantee success, but it dramatically increases your odds. It forces you to treat agentic AI as a business transformation, not a side project.
Governance, identity and risk: the unglamorous essentials
If there is one lesson from early adopters, it is that identity and access management for non‑human agents must come first. Okta’s July 2025 whitepaper warns that AI agents present unique identity challenges: they are software instances rather than individuals, they spin up and down frequently, authenticate programmatically via tokens or certificates and often need granular, time‑limited permissions[36]. Without proper controls, agents become “super admins” in your environment – a compromised agent could execute high‑value transactions or expose sensitive data[37].
The numbers are sobering. According to Okta, 23 % of IT professionals have experienced credential exposure via AI agents and 80 % have seen unintended agent behaviour[38]. Only 44 % of organisations have policies governing agents[39]. Okta recommends treating AI agents as first‑class identities by integrating them into a unified identity fabric, enforcing least‑privilege access, standardising authentication across applications and adopting cross‑application access protocols that apply policies in real time[40]. This approach aligns with the NIST AI Risk Management Framework, which emphasises trustworthiness, transparency and continuous risk assessment[34].
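In practice, “time‑limited, least‑privilege” means the agent never holds a standing credential. The sketch below hand‑rolls a signed, short‑lived token purely for illustration; a real deployment would use OAuth2 client credentials, mutual TLS or your identity provider’s native mechanism rather than anything home‑made.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me"  # in practice, a managed secret from your identity provider

def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 900) -> str:
    """Mint a short-lived, narrowly scoped credential (illustrative only)."""
    claims = {"sub": agent_id, "scp": scopes, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

# Time-limited and least-privilege: this agent can read inventory but not move money
token = issue_agent_token("agent-7f3a", scopes=["inventory:read"], ttl_seconds=900)
```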
Security isn’t the only risk. Wavestone’s RiskInsight report notes that LLM‑based agents are prone to hallucination and adversarial attacks[41]. Multi‑agent systems amplify that attack surface: a single compromised agent can cascade errors across a network of interdependent agents[42]. To mitigate these risks, adopt rigorous testing, red‑team exercises, and human‑in‑the‑loop approvals for critical actions. In regulated industries such as healthcare or finance, expect legal frameworks to evolve quickly – your governance team must monitor and adapt to new compliance requirements.
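Human‑in‑the‑loop approval for critical actions can start as a simple risk gate, sketched below; the threshold and the risk score are placeholders you would calibrate per use case.

```python
def execute_with_oversight(action, risk_score: float, threshold: float = 0.7):
    """Route high-risk actions to a human before execution (illustrative only)."""
    if risk_score >= threshold:                    # e.g. high-value transfer, PII access
        if input(f"Approve '{action.__name__}'? [y/N] ").strip().lower() != "y":
            return None                            # blocked; log and escalate upstream
    return action()                                # low-risk actions run autonomously
```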
Use cases and early wins
Real‑world deployments illustrate both the potential and limitations of agentic AI. SAP reports that agents are beginning to turn reactive supply chains into intelligent, continuously improving networks[43]. They highlight early use cases:
- Demand forecasting and inventory optimisation. Walmart uses AI agents to forecast demand and adjust inventory across its network of stores, pulling historical sales data and external signals such as weather or local events[44]. Amazon’s fulfilment centres deploy agents to manage inventory, optimise shelf space and automate order picking[45].
- Real‑time logistics. DHL employs agents to monitor shipments, identify potential disruptions and suggest alternative routes to minimise delays[46].
- Manufacturing scheduling and self‑healing supply chains. AI agents analyse supplier data, customer changes and delivery targets to adjust production schedules and reduce idle time[47]. Sree Mangalampalli of FourKites estimates that only 25 % of the 33 types of supply‑chain AI agents identified are operational today[48], underscoring how early the market is.
Beyond supply chain, agents are being tested in fields as varied as customer service, travel, predictive maintenance, data analytics and even healthcare. The University of Cincinnati lists examples such as autonomous travel booking, AI‑driven customer support that diagnoses issues and escalates only when necessary, and continuous monitoring of equipment for predictive maintenance[49]. In customer experience, McKinsey notes that moving from generative chatbots to goal‑driven agents requires redesigning entire workflows and creating compliance layers to guarantee accuracy and safety[50]. The results can be impressive – a multi‑agent fraud‑detection system reduced false positives by 30 % at a financial services firm while increasing analyst trust due to transparent reasoning[51].
Calculating the business case
Financial projections around agentic AI are soaring. The PagerDuty survey indicates that companies expect an average 171 % ROI from agentic AI[22] and that 52 % of organisations expect agents to automate or expedite 26–50 % of workloads[52]. But such averages hide huge variance. ROI depends on selecting the right use case, investing in data quality and integration, and establishing governance. The KPMG survey reminds us that 82 % of leaders see risk management as their biggest challenge[18]; ignoring those risks erodes ROI.
When building a business case, focus on hard and soft benefits:
- Hard savings: labour reduction on repetitive tasks, reduced error rates, improved uptime, lower inventory costs, fewer false positives in fraud detection.
- Revenue uplift: faster response times drive customer retention, personalised offers increase conversion, predictive maintenance reduces downtime and increases capacity.
- Strategic flexibility: agents allow you to react to market changes faster than competitors – an option value that may be priceless in volatile markets.
Use Cullum’s success metrics framework – time‑to‑pilot, time‑to‑scale, agent uptime, user trust scores and business impact[53] – to evaluate pilots and decide whether to scale.
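A first‑pass business case fits in a few lines. The formula is standard first‑year ROI; the figures below are hypothetical, chosen only to show how a 171 % outcome arises, and are not drawn from the surveys cited above.

```python
def roi(hard_savings: float, revenue_uplift: float, total_cost: float) -> float:
    """First-year ROI = (total benefit - total cost) / total cost."""
    return (hard_savings + revenue_uplift - total_cost) / total_cost

# Hypothetical pilot: $400k hard savings, $250k revenue uplift, $240k build + run cost
print(f"{roi(400_000, 250_000, 240_000):.0%}")  # -> 171%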
Change management and cultural shifts
Technology is only half the battle. Early adopters agree that training and organisational design are critical. KPMG’s survey found that training employees to work with AI agents is difficult because of system complexity (66 % of respondents), rapidly evolving technology (56 %) and skills gaps (51 %)[19]. The PagerDuty survey similarly shows that lacking a clear training plan is a top regret among generative AI adopters[54]. Here are pragmatic recommendations:
- Establish cross‑functional teams that include IT, business process owners, security, compliance and end‑users. Don’t let AI initiatives be IT‑only projects.
- Invest in upskilling. Provide targeted training on prompt engineering, human‑agent interaction, and risk management. Encourage employees to co‑design workflows with agents rather than merely “use” them.
- Communicate roles and expectations. Fear of job loss can torpedo adoption. Emphasise that agents augment rather than replace people, freeing them to focus on higher‑value work.
- Iterate on organisational policies. Define decision‑rights for agents and humans; update reward structures to recognise collaborative outcomes.
Governance checklist and templates
Implementing agents without a governance framework is irresponsible. Use this checklist, derived from Okta’s recommendations[55] and NIST guidelines[34], to get started:
- Identity management. Register every agent as a unique identity with a lifecycle policy. Use automated provisioning and de‑provisioning to handle the dynamic nature of agents[56]; a minimal provisioning sketch follows this checklist.
- Authentication and authorisation. Standardise on API‑driven authentication (JWTs, mutual TLS) and enforce least‑privilege access with time‑limited permissions[57]. Adopt cross‑application access protocols to apply policies consistently across systems[58].
- Auditability. Maintain detailed logs of agent actions and decisions. Ensure that every action can be traced back to a specific agent and, where appropriate, to the human who authorised it[59].
- Risk assessment. Evaluate agents against the NIST AI RMF to ensure trustworthiness. Conduct adversarial tests to detect hallucinations and vulnerabilities[41].
- Stewardship board. Create a cross‑functional team responsible for regular audits, transparency reports and redress mechanisms[60].
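Here is what the identity‑management item can look like in miniature: each agent is provisioned with an accountable human owner, a set of scopes and an expiry, and expired identities are swept automatically. The registry and field names are illustrative, not a specific product’s API.

```python
import secrets
from datetime import datetime, timedelta, timezone

registry = {}  # agent_id -> metadata; stand-in for a real identity provider

def provision_agent(owner: str, scopes: list[str], ttl_hours: int = 24) -> str:
    """Register an agent as a first-class identity with an expiry (illustrative only)."""
    agent_id = f"agent-{secrets.token_hex(4)}"
    registry[agent_id] = {
        "owner": owner,                                   # the human accountable for it
        "scopes": scopes,                                 # least-privilege permissions
        "expires": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
    }
    return agent_id

def deprovision_expired() -> None:
    now = datetime.now(timezone.utc)
    for agent_id in [a for a, m in registry.items() if m["expires"] < now]:
        del registry[agent_id]                            # automated de-provisioning
```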
Conclusion: act now, but act wisely
Agentic AI is not a fad; it represents a shift from machines that merely answer questions to systems that carry out work. Gartner’s prediction that one‑third of enterprise applications will embed agents by 2028 underscores the urgency[2]. Yet the road to adoption is littered with pilot projects that never scale. The executives who succeed will be those who balance ambition with discipline: they’ll pick high‑impact use cases, design modular architectures, invest in data and identity governance, and build human‑centric organisations. Those who chase hype or skip governance will see costs spiral and trust evaporate.
If you’re ready to move from experiments to outcomes, our team can help. We offer strategic advisory services, proof‑of‑concept implementations and comprehensive governance frameworks. Contact us to start designing your own agentic journey. Don’t wait until competitors deploy agents that reshape your industry – start building the capabilities now.