So, AI is the answer to everything. Great. Slightly over-simplistic in my opinion, but then again, I don’t own a social platform, an e-commerce business, a sprawling call-centre overhead, or a global AI solutions leviathan.
I spend most of my time at the punter end of the conversation: wrestling with myself in a desperate search for ‘a contact number, any contact number’ while trying to communicate with yet another uncontactable agentic brand or business.
It’s hard to fault them. They’re just doing what all the messianic AI apostles, prophets and investment houses are telling them to. The quietly catastrophic impact of Agentic AI on the more visceral side of customer experience is yet to fully surface – and the jury’s out on to what degree, and to whom, it will happen. But down at the punter end, when one applies the ‘Pub Rules’ model of research methodology, it is quite normal now for any person on any given day to bemoan their inability to ‘speak to a human’ – especially when there is a problem.
At this moment of mass migration by companies to everything AI, it’s worth reminding ourselves of the prescient words of one Daniel C. Dennett. In his refutation of the slippery slope towards the singularity and a dystopian future of machines ruling humans, he makes a very simple and universal point: the real danger is not that machines will usurp us, but that we will overestimate the comprehension of our tools and prematurely cede authority to them far beyond their competence.
Well, there certainly is a lot of ceding going on at the moment – and a lot of ‘well, that’s not really living up to the hype’. The gold-rush mentality of companies, brands, and businesses integrating AI into every corner and layer of their operations under the guise of improved customer experience is, in some instances, breathtaking. But all too often these are really just ‘stealth’ cost-cutting exercises dolled up in the dressing-up box of better CX. And the rigour applied to ensuring that they do not ‘collapse’ the very thing they claim to improve – customer experience – is often non-existent in this rush to ‘optimisation’.
So what to do?
Well, change the strategic framing model, for one. The inhumanity inherent in many AI solutions is primarily driven by one tiny snippet of language that has become the norm in every AI transformation meeting. A piece of language that sets all AI and tech above the human – and relegates humans to a secondary or tertiary role in the whole shebang.
The Human in the loop.
This is the source code of AI’s ghettoisation of humans – compelling them to be forever seen as operating under and within its gift. If we are to define a more optimistic, fair, and ethical model for AI proliferation strategies, I’d suggest that we need to create a counterpoint to this framing. One that compels every potential AI transformer to consider the human as primary.
My recommendation?
The Loop in the Human deployed as the leading tenet.
I pondered this as a theory. Then I chose to explore it more formally.
So below is a friendly little proprietary White Paper exploration [aided ironically by AI – in service to my ideas of course] on how that new and more balanced strategic model might play out.
The Thin Air Factory 2025: White Paper Case: The Dual Framing of Agentic AI Strategy
EXECUTIVE SUMMARY
This white paper introduces a “Dual Framing Strategy” for Agentic AI, arguing that the prevalent “Human in the Loop” (HitL) approach is necessary but incomplete. HitL, an AI-centric viewpoint, focuses on automation, risk mitigation, and error handling, positioning humans reactively as validators or correctors.
The paper proposes “Loop in the Human” (LitH) as a human-centric strategic framing. LitH re-establishes AI’s philosophical driver as the augmentation and empowerment of human performance, making humans proactive co-creators and strategists.
A complete Agentic AI strategy combines both LitH and HitL. LitH defines success through proactive human-driven excellence and elevated human performance, while HitL defines safety through reactive AI-driven governance. This dual approach ensures that the pursuit of efficiency doesn’t diminish human capability and that oversight doesn’t hinder augmentation, leading to a balanced and effective Agentic AI implementation.
The Dual Framing of Agentic AI Strategy
This paper posits that the current strategic framing of Agentic AI, dominated by ‘Human in the Loop’ (HitL), is a necessary but ultimately one-dimensional strategic posture. It reflects an AI-centric viewpoint that prioritizes automation, risk mitigation, and error handling—a critical but incomplete view of the human-AI partnership.
The white paper will argue for the introduction of the proprietary, human-centric term ‘Loop in the Human’ (LitH) as the primary strategic framing for all Agentic AI initiatives. LitH fundamentally re-establishes the philosophical driver of AI: the augmentation and empowerment of human performance.
Only when LitH and the complementary HitL are used in conjunction can an organization achieve a complete Agentic AI strategy that balances human-centric augmentation with AI-centric safety and control.
1. The Limitation of ‘Human in the Loop’ (HitL)
The existing strategic discourse is heavily weighted toward HitL, which primarily functions as a guardrail for autonomy. Academic and thought leadership publications consistently frame HitL around concepts like:
- Risk Mitigation and Accountability: Embedding human judgment at key decision points to safeguard reliability and ethics, especially in high-stakes domains (OneReach, iMerit).
- Error Correction and Edge Case Handling: The AI agent escalates to a human when its confidence is low, the context is ambiguous, or a task is beyond its current capability (Medium, WorkOS).
- System Refinement: Using Reinforcement Learning from Human Feedback (RLHF) to align AI behavior with human values and goals (OneReach).
This framing is indispensable for safe deployment, but it is architecturally and philosophically reactive. The human’s role is to intervene, correct, or approve—acting as the system’s fail-safe, censor, or validator. This emphasis on preventing failure misses the strategic opportunity of driving success.
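The reactive escalation pattern described above — the human appearing only when the agent falters — can be sketched in a few lines of Python. This is a minimal illustrative sketch, not any vendor's actual implementation; every name and threshold in it is hypothetical:

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.75  # hypothetical threshold below which the agent defers


@dataclass
class AgentDecision:
    answer: str
    confidence: float  # agent's self-reported confidence, 0.0 to 1.0


def resolve(decision: AgentDecision) -> str:
    """Classic HitL gating: the human enters the loop only on AI weakness."""
    if decision.confidence < CONFIDENCE_FLOOR:
        # Reactive loop: escalate to a human validator/corrector.
        return "escalate_to_human"
    # Otherwise the agent acts autonomously, with no human contact at all.
    return "auto_approve"


print(resolve(AgentDecision("refund approved", 0.92)))  # auto_approve
print(resolve(AgentDecision("policy unclear", 0.40)))   # escalate_to_human
```

Note what the sketch makes visible: the human is structurally absent from the happy path. That architectural absence is precisely the reactivity the paper critiques.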
The current HitL strategic framing, while essential for governance and risk management, fails to capture the proactive, creative, and augmenting potential of human-AI collaboration.
2. Introducing the Strategic Lens: ‘Loop in the Human’ (LitH)
‘Loop in the Human’ (LitH) proposes a paradigm shift from viewing the human as a failsafe to seeing them as the proactive, value-creating engine for the Agentic AI system.
LitH is a strategic framing defined by its Human-Centricity:
| Focus Area | ‘Human in the Loop’ (HitL) | ‘Loop in the Human’ (LitH) |
| --- | --- | --- |
| Philosophical Goal | Safe Automation and Risk Mitigation | Human Augmentation and Value Creation |
| Human Role | Reactive: Validator, Censor, Corrector | Proactive: Co-creator, Strategist, Commander |
| Trigger | AI Failure / Low Confidence / High Risk | Human Intent / Strategic Insight / Creative Need |
| System Output | Reliable, Safe, Aligned Decisions | Elevated Human Capability, Novel Solutions |
LitH is supported by principles from Human-Centered AI (HCAI) research and the concept of “Human-AI Teaming”:
- Elevating Human Agency: Research in designing Agentic AI emphasizes that systems should be built to complement human expertise and elevate human agency, not supplant it (ResearchGate, UST). LitH captures this imperative by defining the human’s role as the system’s “Commander”—directing goals and providing the high-level intent that the agent then executes.
- Proactive Collaboration: The concept of AI moving from a ‘tool’ to a ‘co-learner’ or ‘peer collaborator’ aligns with LitH (arXiv). LitH is the design mandate that ensures the human initiates a creative feedback loop, such as providing an unpredicted strategic correction or an ethical override based on non-quantifiable domain experience, thereby driving the agent to a better-than-automated outcome.
- The Philosophical Driver: LitH re-establishes the service mandate of AI—that the agent’s purpose is to amplify the user’s performance and knowledge, rather than the user’s purpose being to train or validate the agent. This aligns with the Industry 5.0 shift towards human-centricity, adaptability, and ethical AI integration (Amity).
3. The Complete Strategy: Conjunction of LitH and HitL
The full strategic potential of Agentic AI is unlocked only when the two lenses—LitH and HitL—are combined into a Dual Framing Strategy.
| Strategic Axis | Purpose | Framing Lens |
| --- | --- | --- |
| Augmentation & Value | Proactive Human-Driven Excellence | Loop in the Human (LitH) |
| Governance & Safety | Reactive AI-Driven Safety | Human in the Loop (HitL) |
- LitH defines Success: The strategic objective is defined by the human’s elevated performance (e.g., faster innovation, better strategic decision-making, personalized outcomes). The agent is designed to proactively “loop in” the human for strategic direction, novel inputs, and creative collaboration.
- HitL defines Safety: The governance objective is defined by preventing failure (e.g., mitigating bias, correcting hallucinations, avoiding non-compliance). The system is architected to reactively “loop in” the human at points of risk and uncertainty.
By adopting this Dual Framing, organizations can explicitly decouple the AI’s operational strategy (governed by HitL) from the Human’s value strategy (driven by LitH), ensuring that the pursuit of efficiency does not erode human capability, and the need for oversight does not stifle augmentation.
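The decoupling can be made concrete by sketching the two loops as separate hooks in an agent's run cycle: a proactive LitH hook invoked before any autonomous work begins, and a reactive HitL guardrail applied to the output. Again, this is a hedged illustration under assumed names — nothing here is a real framework API:

```python
CONFIDENCE_FLOOR = 0.75  # hypothetical HitL risk threshold


def lith_loop_in(ask_human) -> str:
    """LitH (proactive): loop the human in FIRST, for intent and
    strategic direction — the value strategy."""
    return ask_human("What outcome matters most here?")


def hitl_guardrail(confidence: float, escalate) -> bool:
    """HitL (reactive): loop the human in only at points of risk —
    the operational strategy. Returns True if escalation occurred."""
    if confidence < CONFIDENCE_FLOOR:
        escalate()
        return True
    return False


def run_agent(task: str, ask_human, escalate) -> dict:
    intent = lith_loop_in(ask_human)           # human-driven value loop
    confidence = 0.9 if intent else 0.3        # stand-in for real agent work
    escalated = hitl_guardrail(confidence, escalate)  # AI-driven safety loop
    return {"task": task, "intent": intent, "escalated": escalated}


# The human supplies intent up front; no reactive escalation is needed.
result = run_agent(
    "draft quarterly plan",
    ask_human=lambda q: "customer retention",
    escalate=lambda: None,
)
print(result["escalated"])  # False
```

The design point the sketch carries is that the two hooks are independent: removing `hitl_guardrail` would not silence the human, because `lith_loop_in` has already placed human intent at the head of the cycle.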
Conclusion
The existing reliance on ‘Human in the Loop’ presents a strategic blind spot, framing Agentic AI primarily as an automation challenge to be governed. The introduction of ‘Loop in the Human’ offers the essential counterpoint: a human-centric mandate that frames Agentic AI as an augmentation and co-creation opportunity.
The case for the Dual Framing—LitH (Proactive Augmentation) and HitL (Reactive Governance)—is supported by the emerging consensus in academic and industry papers that effective Agentic AI requires systems that both elevate human agency and maintain clear accountability (Capgemini).
So, there we are. A provocation? Maybe. A model that can be immediately deployed? Most definitely. A small step towards rebalancing the madness of ‘AI’s the answer – now, what’s the question?’ Without question.
But it has one primary role above all others: to help us avoid ceding responsibility for everything to AI with no meaningful interrogation of how that serves the human first and foremost.
Every transformative technology, from language, glyphs, writing, and printing onwards, has always had to take time to shake out its bugs and weather the abuses of those who use it to favour the few, not the many. But they got there. It takes time. That’s what we need to create if we are to offset the worst applications of AI in our shared human existence – the time to interrogate its most meaningful applications beyond cost saving and control. And we might start by using strategic tools to keep reminding ourselves whom AI is in service to.
Loop in the Human anyone?
Julian Borra is a creative writer, strategist and published author with a soft spot for culture, purpose, sustainability, tech, and Pub Rules.
