On Using AI Better: Workflow Automations vs Agents

This is Part 1 of a series called On Using AI Better, where I talk about things I’ve learnt using AI to build workflows in my daily life. This article discusses the two big trends in AI - automation workflows and agentic workflows - how they work, when to use which, and how to use them effectively.

TL;DR

  • Workflow automation is effective for simple, predictable tasks but grows brittle and expensive to maintain as business logic and exceptions multiply.
  • An agent-oriented approach is more resilient. It mimics how an expert uses a flexible "toolbelt" of capabilities to solve dynamic, complex problems.
  • The focus shifts from building rigid, step-by-step workflows to creating a library of reliable, reusable tools that an agent can orchestrate.
  • This model requires a higher initial investment in tool design but delivers a more scalable and adaptable system for automating high-value, complex work.

The adoption of Large Language Models (LLMs) has surged since the release of ChatGPT. But in practical applications, LLMs are still rarely able to handle the entire job-to-be-done — addressing and resolving a customer ticket, finding and booking a reasonably-priced airline ticket, or managing inventory based on sales confirmations.

ChatGPT weekly active users - Chatterji and colleagues (September 2025) – How People Use ChatGPT
ChatGPT usage types - Chatterji and colleagues (September 2025) – How People Use ChatGPT

Automation workflows

Many have turned to workflow automation to handle multi-step processes. Workflows are directed graphs that chain actions and decision points to automate more complex tasks. One of the poster children for this approach is the workflow automation tool n8n.

n8n is a low-code solution that lets you build impressive multi-step automation flows in a short period of time. It enjoys massive adoption across AI consultancies and developers. For instance, companies use n8n to enrich their lead sheets by automating a process that scrapes the social profiles of their leads, summarises them with an LLM, and adds the enriched data to the database.

A complex n8n automation workflow
n8n is popular among YouTube creators and AI agencies alike
GitHub stars indicate developer adoption; the n8n repository is growing rapidly

Such automation workflows excel when their environmental factors can be well controlled or anticipated. Their physical counterpart is the assembly line: known input shape, known output shape.

Say you want to automate your LinkedIn posts with daily LLM slop. The corresponding workflow might look like this:

Simple LinkedIn content generation workflow

If you prioritize consistency over quality, that’s a fairly suitable use case: The Google and OpenAI APIs are reliable. Inputs and outputs can be well predicted and the workflow after some tuning – like removing those suspicious emojis 💡🔍🧠🔧📊🤖🎯🎉 – is likely to produce the desired result.
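
Such a linear workflow can be sketched as a fixed chain of functions. This is a minimal illustration, not a real integration: `generate_post` stands in for an LLM API call and `publish` for the LinkedIn API, and all function names here are hypothetical.

```python
# A minimal sketch of the linear content workflow: each step feeds the
# next, with no branching. The step bodies are stubs standing in for
# real API calls.

def fetch_topic() -> str:
    # Fixed input; in practice this might come from an RSS feed or a sheet.
    return "why agents beat rigid pipelines"

def generate_post(topic: str) -> str:
    # Stand-in for an LLM call that drafts the post.
    return f"Hot take: {topic}. Here's what I learned \U0001F4A1"

def strip_suspicious_emojis(text: str) -> str:
    # The 'tuning' step from the article: remove the telltale emojis.
    for emoji in ("\U0001F4A1", "\U0001F50D", "\U0001F9E0"):
        text = text.replace(emoji, "")
    return text.strip()

def publish(text: str) -> dict:
    # Stand-in for the posting step; returns a fake API response.
    return {"status": "posted", "body": text}

# The whole workflow is one straight pipe.
result = publish(strip_suspicious_emojis(generate_post(fetch_topic())))
```

The appeal is obvious: the chain is trivial to read and debug. The limitation is equally visible, since any new condition has to be wedged into this fixed sequence.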

Where workflows break

The issue with such automation pipelines is that they become brittle as complexity grows. Conditions and exceptions are handled as new branches of the workflow logic. For simple use cases this is not a problem, but as more edge cases need to be handled, the number of branches grows.

The same applies across individuals and teams: more workflows mean more maintenance to keep them up to date, more effort to avoid logical redundancies across them, and more documentation overhead.

In low-complexity environments, operation-focused workflows can deliver quick wins. But in higher-complexity scenarios, their benefits easily plateau.

Heat shootout
Heat (1995) – Highly skilled individuals sublimate their personalities into a precise, timed plan. When the plan collapses it becomes apparent how reliant the characters were on its structure. Any deviation is catastrophic.

Agentic reasoning: Tool-centric, not pipeline-centric

An emerging approach to lift the complexity ceiling for automation is to mimic human reasoning capabilities.

What does that mean? Imagine you're given a new task today: increase your team's sales by 200%. How would you approach it?

First, you would ensure that you understand a) the request itself, as well as b) any references that are part of the instruction. In this case, it might help to clarify things like "Does sales refer to gross margin, number of new customers acquired, or total contract value?" and "Are we comparing against last year's performance or last week's?".

Since 200% is an ambitious target, you might ask further questions to understand where this expected growth should come from: "Are we launching new products or services that are expected to drive this growth? Are we expanding into new markets?"

Assuming that's now clear, you would then make a plan for reaching the objective. For example, you might break the year into quarters: the first quarter cross-selling to existing clients, the second quarter building out referral incentives, the third quarter and so on.

In complex settings such planning is commonly explicit, done on multiple levels (from high-level to low-level plans) or iteratively, as in agile project management. The goal is pursued by interdisciplinary teams that combine diverse skills.

Project management: Waterfall and agile management are used to plan and manage complex projects

Now you – or rather the team – would begin executing step by step as planned. But reality is often messy: the world market could see a sudden tariff bombshell, referral incentives might prove less effective than you imagined, and so on. In some scenarios, continuing as planned isn't an option. Depending on the obstacle, adjusted parameters (like larger referral incentives), an entirely new approach, consulting with supervisors, or stopping altogether might be necessary.

This fairly intuitive approach is exactly what agentic reasoning imitates.

The Agentic Workflow as the Orchestrator

In an agentic model, workflows are repurposed. Instead of being a rigid, step-by-step checklist, a workflow is now a high-level orchestrator for a smart AI agent. The "agentic workflow" gives the AI agent freedom to dynamically figure out how to complete tasks on its own, but provides a safe and structured framework to guide its process. The workflow orchestrates the AI's thinking by guiding the overall plan, enforcing safety limits, and tracking progress.

In short, the workflow orchestrates the agent's reasoning process. It provides the structure that separates the 'what to do' (planning and reflection) from the 'how to do it' (tool execution), ensuring the entire process is safe and reliable.

Orchestrator workflow responsibilities

  • Reasoning loop guidance: Structures the agent's process through distinct phases—Triage, Planning, Execution, and Reflection—to ensure a consistent and robust problem-solving approach for every task.
  • Human‑in‑the‑loop: Gates for clarification and approvals where risk lives. E.g., requiring user confirmation before sending a high-impact email.
  • Autonomy caps: Limits on retries, spend, and total steps per execution to prevent runaway processes. E.g., allowing a maximum of 3 API call retries or a $5 cost limit.
  • Validation gates: Pre-step checks before executing sensitive or irreversible actions. E.g., verifying user permissions before deleting a database record.
  • Audit trails & communication: Tracks plans, tool calls, and decisions to communicate relevant events to the user, providing transparency that aids in both user experience and debugging.
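
These responsibilities can be sketched as a thin loop wrapped around the agent. This is an illustrative skeleton under stated assumptions: the caps, phase comments, and the `agent_step`/`tools` interfaces are all hypothetical, and the agent itself is a stub that proposes tool calls.

```python
# A minimal orchestrator sketch: it enforces autonomy caps (step budget,
# retry limit), keeps an audit trail, and gates high-impact actions.
# Values and interfaces are illustrative, not from any real framework.

MAX_STEPS = 10    # cap on total steps per execution
MAX_RETRIES = 3   # cap on retries per tool call

def run_orchestrated(agent_step, tools, audit_log):
    steps = 0
    while steps < MAX_STEPS:
        action = agent_step()            # Planning: the agent proposes an action
        audit_log.append(action)         # Audit trail: record every decision
        if action["tool"] == "finish":
            return "done"
        if action.get("high_impact"):
            # Human-in-the-loop gate: a real system would pause here
            # and wait for user approval before proceeding.
            pass
        for _attempt in range(MAX_RETRIES):  # autonomy cap on retries
            try:
                tools[action["tool"]](**action.get("args", {}))
                break
            except RuntimeError:
                continue
        steps += 1
    return "step_budget_exhausted"       # runaway-process guard

# Usage: a stub agent whose 'plan' is just two scripted actions.
plan = iter([{"tool": "noop"}, {"tool": "finish"}])
log = []
status = run_orchestrated(lambda: next(plan), {"noop": lambda: None}, log)
```

The point of the sketch is the separation of concerns: the loop owns limits, logging, and gates, while the agent only proposes the next action.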

In this model, tools are the core unit of capability and the orchestrator can only be as effective as the tools it has access to. Just as a project manager assembles a team of experts – a designer, an engineer, a data analyst – the agent’s reasoning loop orchestrates tools to achieve a goal. Each tool has a clear contract to perform a specific function, like retrieve_document or send_email. The agent, acting as the project manager, doesn't need to know how each tool works, only what it does. By decomposing a larger problem into smaller subtasks, the agentic model can use its versatile 'toolbelt' of composable, reusable capabilities to iteratively problem solve.
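
The "clear contract" idea can be made concrete with a small tool type: each tool exposes what it does (a name and description the agent plans with) separately from how it does it (the implementation). The tool names mirror the article's examples; the bodies are stubs, and this structure is one possible design, not a prescribed API.

```python
# A sketch of a tool contract: the agent sees only name and description;
# the run callable is an implementation detail. Bodies are stubs.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Tool:
    name: str                    # identifier the agent plans with
    description: str             # what the tool does: all the agent needs
    run: Callable[..., object]   # how it does it: hidden from the agent

retrieve_document = Tool(
    name="retrieve_document",
    description="Fetch a document by id from the document store.",
    run=lambda doc_id: {"id": doc_id, "body": "..."},
)
send_email = Tool(
    name="send_email",
    description="Send an email to a recipient.",
    run=lambda to, body: {"sent_to": to},
)

# The agent orchestrates by description, never by implementation detail.
toolbelt = {t.name: t for t in (retrieve_document, send_email)}
```

Because tools are looked up by name, the same toolbelt can serve many different tasks, which is exactly the reuse argument made above.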

Case Study: Intelligent Support Ticket Routing

The Problem: Death by a Thousand Edge Cases

A SaaS company's support team implements an automation workflow to classify and route incoming tickets. Initially, it's a success, handling the most common issues and saving hours. But soon, the team hits a wall. The workflow, designed for the "happy path," begins to break as it faces real-world complexity.

  • The Patches Begin. A ticket arrives: "I can't log in." The workflow correctly classifies it as `technical_support` but has no context. So, the team adds a new rule: if a ticket is vague, flag it for human review. One hole is patched.
  • The Brittleness Grows. A user writes, "see attached," but forgets the attachment. The workflow forwards an incomplete ticket. The team adds another rule: parse the text for "attachment" and verify one exists. Another patch is added, making the workflow more complex.

Each patch for an edge case made the system more fragile and harder to maintain. The underlying issue was the rigid model itself, which was incapable of adapting to unexpected user inputs.
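
The patch-accretion pattern is easy to see in code. This is a hypothetical reconstruction of the rigid router: each patch from the story becomes another hard-coded branch, and the thresholds, rule order, and queue names are all invented for illustration.

```python
# A sketch of the patched routing workflow: every edge case becomes
# another branch, and the order of checks itself becomes load-bearing.

def route_ticket(text: str, has_attachment: bool) -> str:
    intent = ("technical_support"
              if "log in" in text or "login" in text else "general")
    # Patch 1: vague tickets get flagged for human review.
    if intent == "technical_support" and len(text.split()) < 5:
        return "human_review"
    # Patch 2: ticket mentions an attachment that isn't actually there.
    if "attach" in text and not has_attachment:
        return "human_review"
    # ...every new edge case adds another branch here...
    if intent == "technical_support":
        return "tier_1_support"
    return "general_queue"
```

Each branch is locally reasonable, but the function as a whole encodes a growing, order-sensitive tangle of business rules that no one wants to touch.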

The Agentic Solution: A Resilient "Toolbelt"

Instead of patching the workflow, the team implements an agentic system. The core logic is broken down into a "toolbelt" of discrete functions:

fetch_ticket
classify_intent
validate_attachments
analyze_image
search_knowledge_base
summarize
route_to
request_clarification

The agent selects tools dynamically based on the context. Let's trace how it handles the "broken login" ticket.

  1. Triage

    A new ticket arrives: "my login is broken again, see attached." The agent begins by assessing the request:

    • The agent uses classify_intent and correctly identifies the intent as 'technical_support'.
    • However, it immediately calls validate_attachments, which reports that the referenced attachment is missing.
    • Initial Plan: Recognizing the ticket is incomplete, the agent's initial plan is to use the request_clarification tool. It pauses and sends a reply: "It looks like you mentioned an attachment, but I don't see one. Could you please provide it?"
  2. Re-planning

    The user replies with the missing screenshot. With complete input, the agent scraps its initial plan and generates a new strategy:

    • Step 1: analyze_image(attachment_id)
    • Step 2: search_knowledge_base(query=output_from_step_1)
    • Step 3: summarize(search_results=output_from_step_2)
    • Step 4: route_to(queue='Tier_2_Support', summary=output_from_step_3)
  3. Execution

    The agent executes the new plan:

    • analyze_image extracts "Error 502: Bad Gateway" from the screenshot.
    • This text is passed to search_knowledge_base, which returns an unexpected result: a high-priority P1 alert titled "Live Site Issue: Intermittent 502 Errors on Login Service".
  4. Reflection & Final Plan

    The agent now reflects on the execution. The plan was to find help docs for a single user, but the result was a P1 incident alert. A rigid workflow would likely have routed the ticket incorrectly. The agent, however, identifies the mismatch and re-plans.

    Agent's Internal Monologue:

    "Initial goal: Find a help article for a single user. Reality: A P1 site-wide incident. Continuing the current plan is not just wrong, it’s counterproductive—it would add noise to an active fire. I must pivot from individual support to incident response. New plan: link this ticket to the master incident and notify the user that we're aware of the problem and working on it."

    It generates a final plan based on this new understanding:

    • Step 1: link_ticket_to_master_issue(issue_id='P1-502-LOGIN')
    • Step 2: send_automated_reply(template='known_issue_response')

    The agent executes the plan, informing the user about the outage and linking their ticket to the master incident.
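
The pivot in steps 3 and 4 boils down to comparing an expectation against an observation. The sketch below mirrors the trace above with stubbed tools: the knowledge-base result, issue id, and plan-step strings are taken from the case study, while the function shapes are hypothetical.

```python
# A sketch of the reflection step: execute, compare the observation to
# the plan's assumption, and swap in a new plan on a mismatch.

def search_knowledge_base(query: str) -> dict:
    # Stub: returns the unexpected P1 incident instead of a help article.
    return {"type": "p1_incident", "issue_id": "P1-502-LOGIN"}

def reflect(expected_type: str, result: dict) -> bool:
    # Reflection: does reality match what the plan assumed?
    return result["type"] == expected_type

def run_ticket() -> list:
    result = search_knowledge_base("Error 502: Bad Gateway")
    if reflect("help_article", result):
        # Original plan: individual support.
        return ["summarize", "route_to:Tier_2_Support"]
    # Mismatch: pivot from individual support to incident response.
    return [f"link_ticket_to_master_issue:{result['issue_id']}",
            "send_automated_reply:known_issue_response"]

final_plan = run_ticket()
```

A rigid workflow has no equivalent of `reflect`; the comparison between expected and observed outcomes is precisely what lets the agent discard a plan mid-flight.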

The Outcome: From Automation to Intelligence

The agentic system delivered tangible improvements over a traditional workflow:

  • For the Customer: Received a fast, relevant response that acknowledged the wider issue, which builds trust.
  • For the Support Team: The ticket was correctly linked to the master incident without manual triage, shielding engineers from duplicate reports.
  • For the Business: A single user report became a valuable, real-time signal, contributing to a faster incident response.

By adapting to new information, the system demonstrated how to move beyond rigid automation to achieve more effective results.

Conclusion

Choosing the right automation strategy requires balancing immediate needs with long-term goals. While workflow automation can quickly automate predictable processes or validate automation potential, its inherent rigidity creates a ceiling on the complexity it can handle.

Agent-oriented design offers a path forward by shifting the focus from rigid, process-centric pipelines to a flexible, tool-centric model. By treating tools as reliable, composable capabilities and the workflow as the orchestrator for an agent's reasoning loop, companies can automate a new class of complex, dynamic work.

Below is a high-level comparison of the two approaches and their trade-offs:

|               | Workflow Automation | Agentic Reasoning |
|---------------|---------------------|-------------------|
| Best For      | Predictable, linear processes with few exceptions. | Complex, dynamic problems requiring judgment and adaptation. |
| Time to Value | Fast (days to weeks) for a single, defined process. | Slower (months) to build a robust, multi-capable system. |
| Scalability   | Scales per-process. New workflows must be built and maintained individually. | Scales per-capability. Tools are reused across many tasks. |
| Maintenance   | High. Becomes brittle and expensive as new edge cases are added. | Lower. Tools are maintained centrally, improving system-wide resilience. |
| Key Risk      | Brittleness. The system breaks when the underlying process changes. | Upfront investment. Requires more initial design and development effort. |
| Ownership     | Ops-owned. Focus is on tuning individual workflows for specific tasks. | Platform-owned. Focus is on building a shared, reliable "toolbelt". |

A comparison of workflow automation and agentic reasoning.