
Language, Loops & the disarming human truth of everything AI.

Wednesday 05 Nov 2025

Posted by Thin Air Factory in Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, machine-learning, technology

So, AI is the answer to everything. Great. Slightly over-simplistic in my opinion, but then again, I don’t own a social platform, an e-commerce business, a sprawling call-centre overhead, or a global AI solutions leviathan.

I spend most of my time at the punter end of the conversation: wrestling with myself in a desperate search for ‘contact number, any contact number’ while trying to communicate with yet another incommunicable agentic brand or business.

It’s hard to fault them. They’re just doing what all the messianic AI apostles, prophets and investment houses are telling them to. The quietly catastrophic impact of Agentic AI on the more visceral side of customer experience is yet to fully surface – and the jury’s out on to what degree and to whom it will happen. But down at the punter end, when one applies the ‘Pub Rules’ model of research methodology, it is quite normal now for any person on any given day to bemoan their inability to ‘speak to a human’ – especially when there is a problem.

At this moment of mass migration by companies to everything AI, it’s worth reminding ourselves of the prescient words of one Daniel C Dennett. In his refutation of the slippery slope towards the singularity and a dystopian future of machines ruling humans, he makes a very simple and universal point.

‘It’s not that AI will take over. It’s that we will cede responsibility to it before it is capable of doing what we think it can do.’

Well, there certainly is a lot of ceding going on at the moment – and a lot of ‘well, that’s not really living up to the hype’. The gold rush mentality of companies, brands, and businesses integrating AI into every corner and layer of their operation under the guise of improved customer experience is, in some instances, breathtaking. But all too often these are really just ‘stealth’ cost-cutting exercises dolled up in the dressing-up box of better CX. And the rigour applied to ensuring that they do not ‘collapse’ the very thing they claim to improve – the customer experience – is often non-existent in this rush to ‘optimisation.’

So what to do?

Well, change the strategic framing model for one. The inhumanity inherent in many AI solutions is primarily driven by one tiny snippet of language that has become the norm in every AI transformation meeting. A piece of language that sets all AI and tech above humans – and subjugates them to a secondary or tertiary role in the whole shebang.

The Human in the loop.

This is the source code of AI’s ghettoisation of humans – compelling them to be forever seen as operating under and within its gift. If we are to define a more optimistic, fair, and ethical model for the proliferation strategies of AI, I’d suggest that we need to create a counterpoint to this framing. One that compels every potential AI transformer to consider the human as primary.

My recommendation?

The Loop in the Human deployed as the leading tenet.

I pondered this as a theory. Then I chose to explore it more formally.

So below is a friendly little proprietary White Paper exploration [aided ironically by AI – in service to my ideas of course] on how that new and more balanced strategic model might play out.


The Thin Air Factory 2025: White Paper Case: The Dual Framing of Agentic AI Strategy

EXECUTIVE SUMMARY

This white paper introduces a “Dual Framing Strategy” for Agentic AI, arguing that the prevalent “Human in the Loop” (HitL) approach is necessary but incomplete. HitL, an AI-centric viewpoint, focuses on automation, risk mitigation, and error handling, positioning humans reactively as validators or correctors.

The paper proposes “Loop in the Human” (LitH) as a human-centric strategic framing. LitH re-establishes AI’s philosophical driver as the augmentation and empowerment of human performance, making humans proactive co-creators and strategists.

A complete Agentic AI strategy combines both LitH and HitL. LitH defines success through proactive human-driven excellence and elevated human performance, while HitL defines safety through reactive AI-driven governance. This dual approach ensures that the pursuit of efficiency doesn’t diminish human capability and that oversight doesn’t hinder augmentation, leading to a balanced and effective Agentic AI implementation.


 

The Dual Framing of Agentic AI Strategy

This paper posits that the current strategic framing of Agentic AI, dominated by ‘Human in the Loop’ (HitL), is a necessary but ultimately one-dimensional strategic posture. It reflects an AI-centric viewpoint that prioritizes automation, risk mitigation, and error handling—a critical but incomplete view of the human-AI partnership.

The white paper will argue for the introduction of the proprietary, human-centric term ‘Loop in the Human’ (LitH) as the primary strategic framing for all Agentic AI initiatives. LitH fundamentally re-establishes the philosophical driver of AI: the augmentation and empowerment of human performance.

Only when LitH and the complementary HitL are used in conjunction can an organization achieve a complete Agentic AI strategy that balances human-centric augmentation with AI-centric safety and control.


1. The Limitation of ‘Human in the Loop’ (HitL)

The existing strategic discourse is heavily weighted toward HitL, which primarily functions as a guardrail for autonomy. Academic and thought leadership publications consistently frame HitL around concepts like:

  • Risk Mitigation and Accountability: Embedding human judgment at key decision points to safeguard reliability and ethics, especially in high-stakes domains (OneReach, iMerit).
  • Error Correction and Edge Case Handling: The AI agent escalates to a human when its confidence is low, the context is ambiguous, or a task is beyond its current capability (Medium, WorkOS).
  • System Refinement: Using Reinforcement Learning from Human Feedback (RLHF) to align AI behavior with human values and goals (OneReach).

This framing is indispensable for safe deployment, but it is architecturally and philosophically reactive. The human’s role is to intervene, correct, or approve—acting as the system’s fail-safe, censor, or validator. This emphasis on preventing failure misses the strategic opportunity of driving success.

The current HitL strategic framing, while essential for governance and risk management, fails to capture the proactive, creative, and augmenting potential of human-AI collaboration.
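To make that reactive posture concrete, here is a minimal sketch, in Python, of the escalation pattern described in the bullets above: the human only enters the frame when the agent falters. Everything in it – the confidence threshold, the AgentResult structure, the stubbed agent and reviewer – is an illustrative assumption rather than a reference to any particular framework.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off; a real system would tune this per task and domain

@dataclass
class AgentResult:
    answer: str
    confidence: float  # the agent's self-reported certainty, from 0.0 to 1.0

def hitl_handle(task, agent, human_review):
    """Human in the Loop: the human is consulted only when the agent falters."""
    result = agent(task)

    # Reactive trigger: low confidence, ambiguity, or a task beyond current capability
    if result.confidence < CONFIDENCE_THRESHOLD:
        # Escalate: the human acts as validator / corrector of the AI's output
        return human_review(task, result.answer)

    # Otherwise the AI's decision stands, with no human involvement at all
    return result.answer

# Toy usage with stubbed agent and reviewer
agent = lambda task: AgentResult(answer="Refund approved", confidence=0.55)
human_review = lambda task, draft: "[human-corrected] " + draft
print(hitl_handle("Customer refund request", agent, human_review))

Note the shape: unless something goes wrong, the human never appears at all – which is precisely the limitation the next section addresses.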


2. Introducing the Strategic Lens: ‘Loop in the Human’ (LitH)

‘Loop in the Human’ (LitH) proposes a paradigm shift from viewing the human as a failsafe to seeing them as the proactive, value-creating engine for the Agentic AI system.

LitH is a strategic framing defined by its Human-Centricity:

Focus Area | ‘Human in the Loop’ (HitL) | ‘Loop in the Human’ (LitH)
Philosophical Goal | Safe Automation and Risk Mitigation | Human Augmentation and Value Creation
Human Role | Reactive: Validator, Censor, Corrector | Proactive: Co-creator, Strategist, Commander
Trigger | AI Failure / Low Confidence / High Risk | Human Intent / Strategic Insight / Creative Need
System Output | Reliable, Safe, Aligned Decisions | Elevated Human Capability, Novel Solutions

LitH is supported by principles from Human-Centered AI (HCAI) research and the concept of “Human-AI Teaming”:

  • Elevating Human Agency: Research in designing Agentic AI emphasizes that systems should be built to complement human expertise and elevate human agency, not supplant it (ResearchGate, UST). LitH captures this imperative by defining the human’s role as the system’s “Commander”—directing goals and providing the high-level intent that the agent then executes.
  • Proactive Collaboration: The concept of AI moving from a ‘tool’ to a ‘co-learner’ or ‘peer collaborator’ aligns with LitH (arXiv). LitH is the design mandate that ensures the human initiates a creative feedback loop, such as providing an unpredicted strategic correction or an ethical override based on non-quantifiable domain experience, thereby driving the agent to a better-than-automated outcome.
  • The Philosophical Driver: LitH re-establishes the service mandate of AI—that the agent’s purpose is to amplify the user’s performance and knowledge, rather than the user’s purpose being to train or validate the agent. This aligns with the Industry 5.0 shift towards human-centricity, adaptability, and ethical AI integration (Amity).

3. The Complete Strategy: Conjunction of LitH and HitL

The full strategic potential of Agentic AI is unlocked only when the two lenses—LitH and HitL—are combined into a Dual Framing Strategy.

Strategic Axis | Purpose | Framing Lens
Augmentation & Value | Proactive Human-Driven Excellence | Loop in the Human (LitH)
Governance & Safety | Reactive AI-Driven Safety | Human in the Loop (HitL)

  • LitH defines Success: The strategic objective is defined by the human’s elevated performance (e.g., faster innovation, better strategic decision-making, personalized outcomes). The agent is designed to proactively “loop in” the human for strategic direction, novel inputs, and creative collaboration.
  • HitL defines Safety: The governance objective is defined by preventing failure (e.g., mitigating bias, correcting hallucinations, avoiding non-compliance). The system is architected to reactively “loop in” the human at points of risk and uncertainty.

By adopting this Dual Framing, organizations can explicitly decouple the AI’s operational strategy (governed by HitL) from the Human’s value strategy (driven by LitH), ensuring that the pursuit of efficiency does not erode human capability, and the need for oversight does not stifle augmentation.
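As a companion to the sketch in Section 1, here is one hypothetical way the Dual Framing might be expressed in code, again in Python, with every name, signal and threshold assumed purely for illustration. The point is the ordering: the human is looped in proactively for direction (LitH) before being looped in reactively for risk (HitL).

from dataclasses import dataclass

@dataclass
class StepOutcome:
    draft: str
    confidence: float      # reactive signal: how certain the agent is (HitL territory)
    needs_direction: bool  # proactive signal: a point where human intent should lead (LitH territory)

def dual_framing_step(task, agent, human_direct, human_review, risk_threshold=0.7):
    """One step of an agentic workflow under the Dual Framing (illustrative only)."""
    outcome = agent(task)

    # LitH first: proactively loop the human in wherever strategic intent,
    # creative input or domain judgment should shape the outcome.
    if outcome.needs_direction:
        directed_task = human_direct(task, outcome.draft)  # human as Commander / co-creator
        outcome = agent(directed_task)                     # the agent executes the elevated intent

    # HitL second: reactively loop the human in at points of risk or uncertainty.
    if outcome.confidence < risk_threshold:
        return human_review(task, outcome.draft)           # human as validator / corrector

    return outcome.draft

Keeping the two triggers separate – needs_direction and confidence – is the code-level expression of decoupling the human’s value strategy from the AI’s operational strategy.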


Conclusion

The existing reliance on ‘Human in the Loop’ presents a strategic blind spot, framing Agentic AI primarily as an automation challenge to be governed. The introduction of ‘Loop in the Human’ offers the essential counterpoint: a human-centric mandate that frames Agentic AI as an augmentation and co-creation opportunity.

The case for the Dual Framing—LitH (Proactive Augmentation) and HitL (Reactive Governance)—is supported by the emerging consensus in academic and industry papers that effective Agentic AI requires systems that both elevate human agency and maintain clear accountability (Capgemini).


So, there we are. A provocation? Maybe. A model that can be immediately deployed? Most definitely. A small step towards rebalancing the madness of ‘AI’s the answer, now what’s the question?’ Without question.

But it has one primary role above all others: to help us avoid ceding responsibility for everything to AI with no meaningful interrogation of how that serves the human first and foremost.

Every transformative technology, from language, glyphs, writing, and printing onwards, has always had to take time to shake out its bugs and weather the abuses of those who use it to favour the few, not the many. But they all got there. It takes time. That’s what we need to create if we are to offset the worst applications of AI in our shared human existence – the time to interrogate its most meaningful application beyond cost saving and control. And we might start by using strategic tools to keep reminding ourselves whom AI is in service to.

Loop in the Human anyone?


Julian Borra is a creative writer, strategist and published author with a soft spot for culture, purpose, sustainability, tech, and Pub Rules.
