
thinairfactoryblog


Tag Archives: AI

AI Winter or Creative Summer? You Choose. A DBS&C view on where Creativity goes next.

02 Thursday Apr 2026

Posted by Thin Air Factory in Uncategorized


Tags

AI, Creative Industries, Creativity, prediction, thinking-tools

For someone who has mostly worked in what some of the business grown-ups perceive to be the realm of Crayons and Colouring-in [AKA Creative Comms and Strategy, Advertising, Brand, etc], some of the more meta strategic and trend questions I’m asked in any given workday can seem a little abstract or grand:

So, what’s your take on an AI Winter?

Is AI the new creative alpha? [and Yes; someone actually used those words!]

Is anyone paying top dollar for creativity anymore?

When these kinds of questions arise, I tend to revert to my Dreamers Believers Soldiers and Cynics [DBS&C] model to both interrogate the questions and explore any potential answers. Not just because it’s convenient, but because it works.

One quick note on the language of ‘AI Winter.’ There are some who seem to interpret this as a professional deep freeze created by the onslaught of AI hoovering up jobs: a sort of freezing mass retreat from employment due to finding one’s skills suddenly redundant. Though it offers a rather dramatic Napoleonic visualisation, this is in fact incorrect.

For the purposes of clarity, this trough of despondency they imagine currently has many names. ‘The Great Decoupling’ – where productivity and profit cease to be dependent on human labour; ‘The White Collar Bloodbath’ – more of a ‘does what it says on the tin’ encapsulation of the problem; ‘The Precariat Age’ – illustrating how human workers will exist in an increasingly precarious employment landscape due to AI; rounded off with the odd jauntier, more positive, meme-minded summary, such as the ‘Superworker Era,’ intent on bigging up humans and their superior, though now augmented, protein computers.

Just not AI Winter. On that, AI happily and correctly describes its wintery fall from grace thus:

An AI winter is a period of reduced funding, interest, and research activity in the field of artificial intelligence. These periods occur when the high expectations—or “hype”—surrounding AI advancements fail to live up to reality, leading to disappointment and a “freezing” of investment.

So, with what kind of winter and for whom clarified, and to rummage around in these topics a little more, let’s start by overlaying the Seasonal model that sits behind statements like ‘AI Winter’ onto the four personas that make up the Dreamer, Believer, Soldier & Cynic model.

First off, I need to assert the bleeding obvious: we’re seemingly in the sunny highlands of an AI Summer right now. The continued tsunami of investment churning towards AI is staggering, the hyperbole undiminished. And the call to AI arms by almost every organisation shows no sign of slowing. AI Joy is all about us. Every organisation wants an AI glow-up!

With these two coordinates clear, I can start to map all things AI onto a DBS&C framework. When I do, it reveals a very simple and aligned model.  

The Dreamer persona is AI Spring personified. Moon-shot minded, everything was possible for the Dreamer in this period of ‘What if’. Transformative. Life changing. World changing. The future. The binary mind on steroids was unstoppable.

The Believer is the perfect persona for our current AI Summer. We’ve got AI. We’ve got a team. We’re on a mission. We’ve got more cash than we know what to do with. Build. Build. Build.

Which brings us to the matter of Autumn. This is the domain of the Soldier persona. Directional. Applied. Practical. This season exists at the crunchier end of the spectrum of proof, reaching far beyond the proof of concept of the Dreamer. You may have an audacious AI strategy, but ‘Does it Work?’ Really? To find evidence of that, you need to apply it; in every quarter on every front. Test and learn. Fail fast. Keep moving forwards. What could possibly go wrong? This is the season where we might start to hear and feel the voice of the Winter Cynic at work. It is in this tipping point of Autumn that whisperings begin. On which point:
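Purely as an illustration, the seasonal cycle described above can be jotted down as a few lines of code. Everything here – the names, the structure, the one-line ‘prediction’ function – is invented for the sketch; only the persona/season pairings and their traits come from the model itself:

```python
# Illustrative sketch of the DBS&C seasonal cycle described above.
# The persona/season pairings come from the text; the structure and
# names (SEASONS, PERSONAS, next_season) are invented for this example.

SEASONS = ["Spring", "Summer", "Autumn", "Winter"]

PERSONAS = {
    "Spring": ("Dreamer", "Moon-shot minded: 'what if' and proof of concept"),
    "Summer": ("Believer", "On a mission: build, build, build"),
    "Autumn": ("Soldier", "Directional, applied, practical: 'does it work?'"),
    "Winter": ("Cynic", "Reduced funding, interest, and belief: the freeze"),
}

def next_season(season: str) -> str:
    """Predictive use of the model: where the cycle goes next."""
    i = SEASONS.index(season)
    return SEASONS[(i + 1) % len(SEASONS)]

persona, trait = PERSONAS["Summer"]
print(persona)                # Believer
print(next_season("Summer"))  # Autumn
```

The point of the toy: once you know which season you are in, the model tells you which persona should be leading, and which one to prepare for next.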

Some experts have made murmurings about us having already stepped into our AI Autumn, with an AI Winter being closer than we might like to think.

This isn’t just about cultural and ethical pushback. The astronomical costs of training models ($100M+) and the diminishing returns of simply ‘adding more data’ have led some to wonder in their quieter moments whether the current AI bubble might eventually pop or, at the very least, deflate. If it does, hello Winter. If AI’s hyperbole and overpromise start to bite, and the shortfalls, disappointments, increasing tech debt and plain snake-oil slipperiness of it all take hold and multiply, the retreat from AI will be significant.

Which brings me to the contradictory inflection point we seem to find ourselves at now.

Why is that of interest to a ‘creative jonny’?

Well, the answer to that question lies in the word creative.

The current trend in the creative industries in which I work is decidedly downward, and not in a life-affirming yoga-dog-like way. A closer look reveals that the slash-and-burn strategy being undertaken by the big global holding companies will hit mainly administrative and research roles – but that doesn’t deflect from the fact that large numbers of previously premium-value creative resources are being tipped into a marketplace where roles are evaporating before our very eyes.

The big multinational companies like Omnicom, Dentsu, WPP and Publicis have stopped just leaking talent. The word Purge is trending. And all those smaller agency brands that once flourished in their sunlit lowlands, bringing energy and innovation and differentiation, are being absorbed into the great blancmange of Consolidation, Simplification, Cost Cutting, Transformation, and the almost satirically named Horizontality.

To some this feels like a Managed Decline strategy hiding inside a Relevance and Growth strategy.

It’s the inflection point between AI and Creativity that I want to interrogate a little more.

If we apply the Seasonal DBS&C model to Creativity in Communications land, given the overarching strategic trajectory of the big global holding companies, the Cynical Winter appears to be upon us.

And therein lies the contradiction – the current Cynical wintery Knives Out approach to the creative industries seems to fall counter to what the big consultancy commentators are advocating.   

If you listen to the Deloittes, Forbes, Harvards, and PwCs of this world, Creativity is about to enter a Dreamer-like spring, reborn, with a rejuvenated sense of ‘what if.’  The story goes that the proliferation of AI, in the creative process at least, will lead to homogenised thinking, content ‘slop’, and the lowering of our cognitive creative ability – our killer app – in the process. Given this, they predict that, as AI drives the cost of the horizontal ‘good’ to zero, the value of the exceptional ‘great’ (human-driven creativity) skyrockets. Curiously, from that viewpoint at least, it seems that the very consultancies that spent decades trying to ‘rationalize’ marketing into a maths problem are now the ones waving the flag for ‘Creative Empathy.’

But that’s good for us Creatively minded folks surely?

Well, yes, but only if we’ve evolved. If we’ve climbed out of our Mad Men fever dream and learned some lessons and skills along the way, the new dawn looks exciting and desirable. Creative people armed with relevant and appropriate AI tools will be the secret sauce in sustaining and elevating unique differentiated business and brand propositions, communications, and identities in an increasingly vanilla world.

The only potential flaw in that glorious plan is that the propaganda machine fuelling C-Suite decision-making stays stuck on ‘Horizontality’ for a while longer yet. Buyers will need a case-proof argument to realise where the value lies before we spring into a new Summer of Creativity. Until they do, blancmange it is.

But. But. But. The upside is that there is definitely a place for smaller, more agile consultancies of like-minded, AI powered Creative minds to fight the good fight for originality and differentiation. It’s just going to be a bumpy Winter exit.

The further upside for me is that this simple exercise has revealed to me that my DBS&C model has a role to play as a predictive tool, not just a diagnostic thinking one.

I’ve discovered that when applied to questions such as these, DBS&C can use its four personas to paint a simple picture of the psychological cycle of Innovation in which a category, sector, business or brand exists.

In helping them identify which ‘season’ they’re in, DBS&C can then help them to assess whether they have the right ‘persona’ people in place to understand, lead and optimise that phase in their seasonal cycle. Once DBS&C has helped to ground them in their seasonal cycle, it is also possible to predict with a reasonable degree of certainty where they might be going next – and build for it, defining immediate practical and applicable actions to make it happen.

Which brings us back to AI Winters and Creative Summers. The reality, as always, most probably lies somewhere between the polarities and possibilities of both, with an option to smudge the two into some uncomfortable yet exhilarating parallel existence.

So, Snow shoes or Bikini: You Choose

PS. & FYI: If you want to explore how the Dreamers Believers Soldiers & Cynics tool might help your brand or business, email me at julian@thinairfactory.com

Anti-social memory & the rise of digital dementia.

18 Wednesday Mar 2026

Posted by Thin Air Factory in Uncategorized


Tags

AI, resilience, Social Memory, the-human-blockchain

I spy with my little AI…

There – did you see it? That look. Watching. Not judging. Just vigilant. Less looking at the task the young woman is undertaking; more sensing the rhythm and space around her as she moves through it; the interactions that flow outwards from her actions and how she is receiving cues and clues from the environment in which she works; measuring her place within the ecosystem of people around her. This vigilance is fuelled by an innate understanding of the consequences that radiate outwards into the world from the small, banal task she is undertaking. It’s a look that brings with it the wisdom of how things need to be done: what has been, what was learned from past undertakings of it, what needs to be improved, and what needs to be passed on or down. It is a look that holds within it the minutiae of what it means to exist successfully, at an evolutionary creature level, within a wider group and context. An innate understanding of the unseen and unspoken things that every individual part of the collective must carry with them for the whole to function effectively and efficiently.

Right now, my little AI cannot spy like this: this kind of observation is the most human of traits.

Under the veneer of our everyday decisions and actions – amongst our nearest and dearest, amongst our work colleagues, and amongst our community and society in general – a staggeringly sophisticated, genetically founded system is at work; a system that has enabled us to evolve and develop and progress as a species, a system that has helped secure our survival and improve our existence and experiences; and it may just be facing the greatest threat to its primacy.

Social memory plays a powerful and pivotal role in our human existence and culture, and as part of our increasing resilience as a species.

But in recent years our increasingly digital existence means that we are abdicating the recording, storage and archiving of our own immediate memory of any experience to digital devices and platforms. This is potentially creating ‘blind spots’ in both our individual and our wider social memory.

The Digital Graveyard.

We are in effect passing over the management of this human faculty to devices, platforms and algorithms over which we ultimately have no ownership or control. In time we will merely ‘lease’ our own individual and social memory from these platforms and devices, and, if we do not pay the access fee, these will become closed to us.

The potential issues here do not concern matters of efficiency – they concern the matter of human agency.

This separation between the human experiencing the memory and the receptacle of the memory and its accompanying evidential proof – recollection of action, environment, and all the sensorial data that comes with it, framed through conscious experience, both good and bad, and the consequence of its occurrence – is creating a Cloud-based Model of Social Memory. And if you ain’t got a ticket, you can’t come in.

Tactile Memory

This is not just about the memory we store in our heads. It’s about the merchandise of those memories; the sensorial and evidential materials of their occurrence. Anyone who’s binned that box full of photos under the bed in favour of ever-increasing storage in the cloud can attest to the emotional dislocation and loss of tactile engagement with our lived experiences. Previously, our physical engagement with memorial items – pictures, tokens, and keepsakes – triggered a deeper, richer and more profound degree of recall than scrolling through an abandoned cloud-cached ticker-tape of photos.

The Google Effect, and what is commonly called Digital Amnesia, have been identified as potentially severely impacting the shared social, experiential, intellectual, and cultural economies we rely on to ground and guide us.

Amnesia v Dementia

Now, just a small point on language: I would suggest that Digital Dementia would be a more apt phrase to highlight the potential degradation of memory brought on by increased digital living, because it captures the degenerative nature of the current digital trajectory – in direct relation to neuroplasticity and collective atrophy; if the brain stops memorizing and synthesizing experiences, it loses the physical infrastructure to do so (Spitzer, 2012).

Collective to Connective.

This erasure of social memory is, by its very nature, not confined to the individuals within society. It affects every kind of collective: familial, communal, cultural, societal and, as a logical extension of those, organisational.

This phenomenon has been spoken to and commented on by various individuals and institutions. The trending viewpoint simply tells us that we are moving from a Collective to a Connective memory model. This meme-minded summary of the shift gives no indication of the deeper, more nuanced issues and flaws in the act of moving from one to the other; it just neatly describes the act of doing so. In doing this, it is in itself a proof point of the problem, as it ignores the loss of ambient intelligence that comes with the shift.

Inhumanity Inc.

Social memory within organisations is more than just a descriptor for an assemblage of recorded tasks and action input/output data points organised in a linear and modal manner distributed across the employee base of an organisation over time.

Social memory takes account of the more peripheral secondary and tertiary dimensions involved around the undertaking of those actions and tasks: the manner in which the individuals encode any learning or experience from doing them; the distribution of any knowledge associated with the undertaking of those tasks; the random effect of the intersection between different skill sets; and all forms of serendipitous interaction and any thinking or doing generated from the friction of unexpected combinations that might occur.

Cognitive Unloading Only

In this way Social Memory is an important link in securing the integrity and stability of Value Chains in organisations. Social memory makes organisations anti-fragile by fostering trust, identity, and shared norms among stakeholders, which helps to manage risks and promote responsible, sustainable practices. An algorithm focused on task efficiency would view serving these as an inefficient use of its time. It is also beyond its remit and capability. AI can replicate the documented dimensions and layers, but it cannot replicate human insight. This absence of tacit knowledge and real ‘lived’ experience in its ‘knowledge’ base creates a vacuum at the heart of the organisation, and we all know what nature abhors.

So, to remind ourselves: what does AI do well? Knowledge preservation? Yes [though not of the tacit kind]. Rapid retrieval? Yes. Rapid onboarding? Yes. Contextualising data? Yes. On a no-sleep 24/7 clock? Yes. But in the nuances, the subtle cues, the contextual ‘cat’s cradle’ of human interaction, it has absolutely no idea. This is problematic, especially in the organisational space.

Social memory within organisations is both a form of energy – a source of momentum through time and space – and an anchor, keeping the organisation attached to its founding principles and the nature of how it has evolved. This evolution often occurs in an erratic, sometimes confounding and often non-linear manner. Social memory is one of the primary evolutionary mechanisms by which organisations capture that flawed and volatile journey; how they learn and develop; how they progress.

In organisations and enterprises, social memory is directly involved in Building Trust and Cohesion; Knowledge Transfer and Learning; Enhancing Social Responsibility and Reputation; Facilitating Coordination and Adaptability; and Guiding Ethical Behaviour.

Delegating social memory to devices, platforms and algorithms – not only in regard to the interaction between machinery and their systems (for example, the Internet of Things) but also as agents – can lead to Institutional Atrophy, where the Enterprise loses its “collective hippocampus.”

The “Google Effect” is not just making us forgetful; it is reconfiguring our brains to be less capable of independent thought. It follows that anything that disrupts the role of Social Memory in an organisation also runs the danger of disrupting the Value Chain of that organisation.  

The Missing Link

In much the same way that the Google Effect and digital dementia are impacting the social memory of our communities and societies, the increasing digitisation of an organisation and its operations through the scale application of AI – replacing vast swathes of human tasks, and absorbing all associated learning, experience, and evolution back into the algorithm [a closed environment] – represents a potential threat to the future integrity and security of the organisation. The key word here is ‘replacing.’ To replace humans with AI assumes an over-simplified ‘swap in, swap out’ application of AI across all tasks and roles. Beyond certain grinding, repetitive tasks, this is a risky business. What we gain in short-term cost efficiencies and productivity, especially at the most binary level of organisational tasks – the grind and churn of its operation – has a mid-to-long-term cost on the culture of that organisation.

Blockchain Bodies.

The construct, resilience and purpose of the hive mind at work in an organisation resides with Social Memory. Social memory in an organisation becomes an almost metaphysical entity; a source of universal and particular guidance and higher purpose. It is the thing that clarifies the role of every individual within the organisation; giving them place and value. It is also the great leveller, allowing every tier in the hierarchy, from the CEO to the Cleaner, value as a co-ordinate. In that way every individual in the organisation secures the integrity of the whole; a living, breathing, feeling, evolving blockchain.

This is what defines the fundamental and critical role of Social Memory in building resilience in organisations, and in harvesting the serendipitous value Social Memory brings to the everyday undertakings of an organisation. But it seems this well-documented truth will become increasingly at odds with the automation gold rush currently being touted.

Digital tech and AI strategies are focused on isolating and hyper-accelerating the specific task to exponentially improve performance. But this is to miss a wider value the human offers: a wider effect and impact created by a human undertaking that task as part of the fabric of an organisation.

Monkey Don’t See

A focus on cost reduction – parsing the task through ever-tighter guidelines and instructive rails undertaken by the algorithm and agentic workforce – does not recognise the agency and primary value of the human beyond the task. Therefore, the AI models do not build for these potentially invisible sources of resilience carried inside a workforce culture. They do not allow for the serendipitous value and enrichment to be found around the task.

Press the Remote

The subtle nuances of ‘the role around the task’ cannot be ‘learned’ in a remote module framework. Such frameworks might present a model for the behaviours and traits that are put to work in the role around the task, but they do not communicate and embed them in the same manner as person-to-person or team-to-person learning.

When the enterprise environment is viewed through the lens of Digital Dementia, this isn’t just a loss of data (amnesia); it is a loss of the intellectual vitality (dementia) required to innovate and survive. This shift is critical.

Human Tortoise, Digital Hare

When social memory—the shared collective knowledge, experiential understanding, culture, and intuition of a workforce—is outsourced to external platforms, the organization risks a permanent degradation of its intellectual capital. Replacing deep, collective learning with instant, superficial retrieval can lead to what is often called the “memory fade effect.” This can create a cognitive dependency and erode the very resilience it promises to enhance. 

The not-so-magnificent 7

When we put this potential dependency and degradation under the microscope, we reveal seven potential dimensions of degradation where a loss of social memory can hinder or diminish the resilience of an organisation:

Tacit Vs Explicit: Organizational wisdom often lives in the gaps and the spaces between the linear model tasks. It is “tacit” knowledge – unspoken, experience-based insights shared between colleagues, such as through mentoring or in-person problem-solving. AI, particularly Generative AI, excels at managing explicit knowledge (data, reports) but cannot capture the nuanced context of human experience, leading to a loss of organizational “know-how”.

Organizational Amnesia: As AI agents provide instant answers and summaries, employees may stop creating the formal documentation (meeting minutes, detailed reports, decision logs) that previously formed the backbone of corporate memory. When AI synthesizes answers, the “why” behind decisions is often lost, making it difficult for future teams to understand past decisions.

Uncritical thinking: Heavy reliance on AI for decision-making can lead to “cognitive offloading” where individuals lose the ability to independently evaluate complex problems or understand the underlying, nuanced context of decisions. This weakens the shared mental models crucial for resilience. 

Knowledge Fragmentation: AI copilots often deliver fragmented insights, and these insights may be scattered across chat tools (e.g., Slack, Teams) rather than stored in a central, accessible, and searchable “social repository.” This leads to fragmented knowledge, making it harder to track the evolution of a project or decision.

History repeating: Without a strong, shared memory of past failures and successes, organizations can become stuck in a loop of “reinventing the wheel,” where new teams repeat mistakes made years prior because the context of those experiences was not preserved.

Mentorship RIP: The speed of AI can reduce the necessity for face-to-face mentorship and collaborative work, hindering the transfer of tacit knowledge, organizational culture, and deep expertise from senior employees to newer ones.

Echo Chambers: AI systems can perpetuate or magnify historical biases present in their training data. This can lead to a narrowing of perspectives within the organization, creating “echo chambers” that inhibit critical evaluation and reduce the diversity of thought necessary for long-term sustainability.

Resilient Fabrics Rule

So the next time you’re in a room where ‘AI is the answer – now, what’s the question?’, I’d suggest an enlightened evaluation of the impact of the AI, not just on operational performance and the integration of ‘humans in the loop’, but also on the potential impact that degrading social memory might have on the very fabric of the organisation and its hard-won, hard-built culture over the longer term. I would also suggest that when constructing your AI approach, you consider a dual strategy that asserts human primacy in the construction of a best-in-class organisational fabric augmented and elevated by AI. [See my Loop in the Human/Human in the Loop article for more on this.]

No manner of automations and algorithms can currently replace the intangible strengths that lived experience, loyalty, belonging, and a collective wisdom built over time can bring.

This is not a matter of cost: this is a matter of survival and resilience, and an organisation’s ability to sustain itself through troubled and volatile times.

Go figure.

Language, Loops & the disarming human truth of everything AI.

05 Wednesday Nov 2025

Posted by Thin Air Factory in Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, machine-learning, technology

So, AI is the answer to everything. Great. Slightly over-simplistic in my opinion, but then again, I don’t own a social platform, an e-comm business, a sprawling call-centre overhead, or a global AI solutions leviathan.

I spend most of my time at the punter end of the conversation: wrestling with myself in a desperate search for ‘contact number, any contact number’ while trying to communicate with yet another incommunicable agentic brand or business.

It’s hard to fault them. They’re just doing what all the messianic AI apostles, prophets and investment houses are telling them to. The quietly catastrophic impact of Agentic AI on the more visceral side of customer experience is yet to fully surface – and the jury’s out on to what degree, and to whom, it will happen. But down at the punter end, when one applies the ‘Pub Rules’ model of research methodology, it is quite normal now for any person on any given day to bemoan their inability to ‘speak to a human’ – especially when there is a problem.

At this moment of mass migration by companies to everything AI, it’s worth reminding ourselves of the prescient words of one Daniel C. Dennett. In his refutation of the slippery slope towards the singularity and a dystopian future of machines ruling humans, he makes a very simple and universal point:

‘It’s not that AI will take over. It’s that we will cede responsibility to it before it is capable of doing what we think it can do.’

Well, there certainly is a lot of ceding going on at the moment – and a lot of ‘well, that’s not really living up to the hype’. The gold-rush mentality of companies, brands, and businesses integrating AI into every corner and layer of their operation under the guise of improved customer experience is, in some instances, breathtaking. But all too often these are really just ‘stealth’ cost-cutting dolled up in the dressing-up box of better CX. And the rigour applied to ensuring that they do not ‘collapse’ the very thing they claim to improve – customer experience – is often non-existent in this rush to ‘optimisation.’

So what to do?

Well, change the strategic framing model for one. The inhumanity inherent in many AI solutions is primarily driven by one tiny snippet of language that has become the norm in every AI transformation meeting. A piece of language that sets all AI and tech above the human – and subjugates humans to a secondary or tertiary role in the whole shebang.

The Human in the Loop.

This is the source code of AI’s ghettoisation of humans – compelling them to be forever seen as operating under and within its gift. If we are to define a more optimistic, fair, and ethical model for the proliferation strategies of AI, I’d suggest that we need to create a counterpoint to this framing. One that compels every potential AI transformer to consider the human as primary.

My recommendation?

The Loop in the Human, deployed as the leading tenet.

I pondered this as a theory. Then I chose to explore it more formally.

So below is a friendly little proprietary White Paper exploration [aided, ironically, by AI – in service to my ideas, of course] on how that new and more balanced strategic model might play out.


The Thin Air Factory 2025: White Paper Case: The Dual Framing of Agentic AI Strategy

EXECUTIVE SUMMARY

This white paper introduces a “Dual Framing Strategy” for Agentic AI, arguing that the prevalent “Human in the Loop” (HitL) approach is necessary but incomplete. HitL, an AI-centric viewpoint, focuses on automation, risk mitigation, and error handling, positioning humans reactively as validators or correctors.

The paper proposes “Loop in the Human” (LitH) as a human-centric strategic framing. LitH re-establishes AI’s philosophical driver as the augmentation and empowerment of human performance, making humans proactive co-creators and strategists.

A complete Agentic AI strategy combines both LitH and HitL. LitH defines success through proactive human-driven excellence and elevated human performance, while HitL defines safety through reactive AI-driven governance. This dual approach ensures that the pursuit of efficiency doesn’t diminish human capability and that oversight doesn’t hinder augmentation, leading to a balanced and effective Agentic AI implementation.


     

The Dual Framing of Agentic AI Strategy

This paper seeks to posit that the current strategic framing of Agentic AI, dominated by ‘Human in the Loop’ (HitL), is a necessary but ultimately one-dimensional strategic posture. It reflects an AI-centric viewpoint that prioritizes automation, risk mitigation, and error handling – a critical but incomplete view of the human-AI partnership.

The white paper will argue for the introduction of the proprietary, human-centric term ‘Loop in the Human’ (LitH) as the primary strategic framing for all Agentic AI initiatives. LitH fundamentally re-establishes the philosophical driver of AI: the augmentation and empowerment of human performance.

Only when LitH and the complementary HitL are used in conjunction can an organization achieve a complete Agentic AI strategy that balances human-centric augmentation with AI-centric safety and control.


1. The Limitation of ‘Human in the Loop’ (HitL)

The existing strategic discourse is heavily weighted toward HitL, which primarily functions as a guardrail for autonomy. Academic and thought-leadership publications consistently frame HitL around concepts like:

• Risk Mitigation and Accountability: Embedding human judgment at key decision points to safeguard reliability and ethics, especially in high-stakes domains (OneReach, iMerit).
• Error Correction and Edge Case Handling: The AI agent escalates to a human when its confidence is low, the context is ambiguous, or a task is beyond its current capability (Medium, WorkOS).
• System Refinement: Using Reinforcement Learning from Human Feedback (RLHF) to align AI behavior with human values and goals (OneReach).

    This framing is indispensable for safe deployment, but it is architecturally and philosophically reactive. The human’s role is to intervene, correct, or approve—acting as the system’s fail-safe, censor, or validator. This emphasis on preventing failure misses the strategic opportunity of driving success.
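    To make the reactive posture concrete, here is a purely illustrative sketch of HitL as an escalation gate around the agent’s actions. All names and thresholds are hypothetical, not drawn from any specific framework:

    ```python
    # Illustrative sketch only: the reactive HitL pattern as an escalation gate.
    # The agent acts autonomously, but defers to a human whenever its confidence
    # is low or the task is flagged as high-risk.

    CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for autonomous execution

    def hitl_gate(action: str, confidence: float, high_risk: bool) -> dict:
        """Execute autonomously, or escalate to a human fail-safe."""
        if high_risk or confidence < CONFIDENCE_THRESHOLD:
            # The human intervenes reactively: validate, correct, or veto.
            return {"status": "escalated", "action": action}
        return {"status": "executed", "action": action}
    ```

    In this sketch a routine, high-confidence action runs without intervention, while anything ambiguous or risky is handed back to the human—exactly the fail-safe role described above, and nothing more.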

    The current HitL strategic framing, while essential for governance and risk management, fails to capture the proactive, creative, and augmenting potential of human-AI collaboration.


    2. Introducing the Strategic Lens: ‘Loop in the Human’ (LitH)

    ‘Loop in the Human’ (LitH) proposes a paradigm shift from viewing the human as a failsafe to seeing them as the proactive, value-creating engine for the Agentic AI system.

    LitH is a strategic framing defined by its Human-Centricity:

    Focus Area | ‘Human in the Loop’ (HitL) | ‘Loop in the Human’ (LitH)
    Philosophical Goal | Safe Automation and Risk Mitigation | Human Augmentation and Value Creation
    Human Role | Reactive: Validator, Censor, Corrector | Proactive: Co-creator, Strategist, Commander
    Trigger | AI Failure / Low Confidence / High Risk | Human Intent / Strategic Insight / Creative Need
    System Output | Reliable, Safe, Aligned Decisions | Elevated Human Capability, Novel Solutions

    LitH is supported by principles from Human-Centered AI (HCAI) research and the concept of “Human-AI Teaming”:

    • Elevating Human Agency: Research in designing Agentic AI emphasizes that systems should be built to complement human expertise and elevate human agency, not supplant it (ResearchGate, UST). LitH captures this imperative by defining the human’s role as the system’s “Commander”—directing goals and providing the high-level intent that the agent then executes.
    • Proactive Collaboration: The concept of AI moving from a ‘tool’ to a ‘co-learner’ or ‘peer collaborator’ aligns with LitH (arXiv). LitH is the design mandate that ensures the human initiates a creative feedback loop, such as providing an unpredicted strategic correction or an ethical override based on non-quantifiable domain experience, thereby driving the agent to a better-than-automated outcome.
    • The Philosophical Driver: LitH re-establishes the service mandate of AI—that the agent’s purpose is to amplify the user’s performance and knowledge, rather than the user’s purpose being to train or validate the agent. This aligns with the Industry 5.0 shift towards human-centricity, adaptability, and ethical AI integration (Amity).

    3. The Complete Strategy: Conjunction of LitH and HitL

    The full strategic potential of Agentic AI is unlocked only when the two lenses—LitH and HitL—are combined into a Dual Framing Strategy.

    Strategic Axis | Purpose | Framing Lens
    Augmentation & Value | Proactive Human-Driven Excellence | Loop in the Human (LitH)
    Governance & Safety | Reactive AI-Driven Safety | Human in the Loop (HitL)
    • LitH defines Success: The strategic objective is defined by the human’s elevated performance (e.g., faster innovation, better strategic decision-making, personalized outcomes). The agent is designed to proactively “loop in” the human for strategic direction, novel inputs, and creative collaboration.
    • HitL defines Safety: The governance objective is defined by preventing failure (e.g., mitigating bias, correcting hallucinations, avoiding non-compliance). The system is architected to reactively “loop in” the human at points of risk and uncertainty.

    By adopting this Dual Framing, organizations can explicitly decouple the AI’s operational strategy (governed by HitL) from the Human’s value strategy (driven by LitH), ensuring that the pursuit of efficiency does not erode human capability, and the need for oversight does not stifle augmentation.
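    As a minimal, hypothetical sketch of the Dual Framing (none of these names or thresholds come from the paper), the human proactively supplies intent and strategic input (LitH) while the HitL gate remains the reactive safety check:

    ```python
    # Hypothetical sketch of the Dual Framing: LitH makes the human the
    # proactive source of intent; HitL remains the reactive safety gate.

    def run_agent(human_intent: str, steps: list) -> dict:
        """Walk a plan, looping in the human both proactively and reactively."""
        log = []
        for step in steps:
            if step["risk"] > 0.7:                  # HitL: reactive safety gate
                log.append(("escalate_to_human", step["task"]))
            elif step.get("needs_strategy"):        # LitH: proactive loop-in
                log.append(("loop_in_human", step["task"]))
            else:
                log.append(("execute", step["task"]))
        return {"intent": human_intent, "log": log}

    plan = [
        {"task": "draft_brief", "risk": 0.1, "needs_strategy": True},
        {"task": "send_contract", "risk": 0.9},
        {"task": "collate_research", "risk": 0.2},
    ]
    out = run_agent("launch Q3 campaign", plan)
    ```

    The design point is that the two triggers are independent: the creative loop-in fires on human need, the safety escalation fires on risk, and neither substitutes for the other—which is the conjunction the Dual Framing argues for.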


    Conclusion

    The existing reliance on ‘Human in the Loop’ presents a strategic blind spot, framing Agentic AI primarily as an automation challenge to be governed. The introduction of ‘Loop in the Human’ offers the essential counterpoint: a human-centric mandate that frames Agentic AI as an augmentation and co-creation opportunity.

    The case for the Dual Framing—LitH (Proactive Augmentation) and HitL (Reactive Governance)—is supported by the emerging consensus in academic and industry papers that effective Agentic AI requires systems that both elevate human agency and maintain clear accountability (Capgemini).


    So, there we are. A provocation? Maybe. A model that can be immediately deployed? Most definitely. A small step towards rebalancing the madness of ‘AI’s the answer; now, what’s the question?’ Without question.

    But it has one primary role above all others: to help us avoid ceding responsibility for everything to AI with no meaningful interrogation of how that serves the human first and foremost.

    Every transformative technology, from language, glyphs, writing, and printing onwards, has always had to take time to shake out its bugs and weather the abuses of those who use it to favour the few, not the many. But they got there. It takes time. That’s what we need to create if we are to offset the worst applications of AI in our shared human existence – the time to interrogate its most meaningful application beyond cost saving and control. And we might start by using strategic tools to keep reminding ourselves whom AI is in service to.

    Loop in the Human anyone?


    Julian Borra is a creative writer, strategist and published author with a soft spot for culture, purpose, sustainability, tech, and Pub Rules.

    Of Gods, Software & Human Disappointment

    16 Tuesday Jan 2018

    Posted by Thin Air Factory in Uncategorized

    ≈ Leave a comment

    Tags

    Abrahamic Faiths, AI, algorithms, Bacteria To Bach and Back, Dennet, Driverless cars, Evolution, From Bacteria To Bach and Back, Gaultier, gods, Greek Theatre, Hamlet, Hawking, i-phone X, Junior Gaultier, Mahabharata, Metaphysics, Omnipotence, Perseus, Physics, Shakespeare, Singularity, Software, supernatural, Zeus


    There is an air of disappointment curling around the head of our new god.

    Our all-consuming belief in Technology and the algorithmic inevitability of its ascent into one-ness with us renders it a form of deity to many. In its wake we see theological and philosophical texts bursting forth from every quarter, trying to both project its arc through our existence and predict its inevitable impact upon it.

    But there is an increasingly vociferous movement rising up alongside it. One that sees fundamental flaws in its omnipotent possibilities and bumpy times ahead rooted in our blind allegiance to it.

    Some of these voices come from mildly surprising places. Stephen Hawking, once a believer in a universal singularity – a theory of everything – has shifted the axis of his belief of what we will ultimately know:

    “Some people will be very disappointed if there is not an ultimate theory, that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind”

    And in turn, he sees bad times ahead for a world where A.I. exists unfettered and beyond regulation. In the great Singularity lies something against nature for humankind that troubles him.

    Even Daniel C. Dennett, in his book From Bacteria to Bach and Back, posits signs of cracks and flaws in the godhead:

    ‘There are some unsettling signs that we are becoming over-civilised. And are entering the age of post intelligent design. Using our brains to understand our brains.’

    He goes on to venture that our willingness to subsume and subjugate ourselves to technology and the escalating potency of Artificial Intelligences, in advance of their ability to actually fulfil our wildest expectations and aspirations, is misguided.

    In the untrammelled and exponentially-increasing expectations of technological revolution and artificial, algorithmically-induced intelligence lies the possibility of ever-increasing disappointment.

    There is an inevitability about this that is unsurprising and yet quietly reassuring.

    For a god awe is critical. As is adulation. And fear. But no god is complete without disappointment. So the whiff of it at the edges of the newly-accelerating godhead of Artificial Intelligence and a creeping hybrid humanity is actually appropriate. For some perhaps it will be proof of its god-like status.

    As with all of the gods we’ve conjured or revealed to ourselves, A.I. and its role in the Singularity is perhaps simply a reflection of our nature, need and desire.

    Perhaps we design them that way. For a reason.  We need to be disappointed by gods.

    Creating them in our image requires disappointment as the proof of our need for fallibility or flaw in any creature, organism or being regardless of whether they are of an abstract celestial, actual mortal or organic technological kind. There’s no such thing as perfection.

    Disappointment seems not only to fulfil a functional role in regards to the nature of the entity. It also creates a signpost to the divine obsolescence in the model – the milestone of inevitable descent, dilapidation, degradation and decline that will lead to the next in the cycle.

    Disappointment teaches us that we can fiercely believe, up to a point – but that we must prepare for the downside. It compels us to scenario-plan for the possibility that the deity isn’t all it’s cracked up to be.

    But that is also part of the package. The scale of reverence, adulation and awe creates a blinding spotlight to throw on the smallest flaw.

    Technology is a powerful and omnipotent thing. It has created a new skin of human consciousness – an algorithmic shellac around our previous model of consciousness. Everything is elevated. Everything is illuminated. Everything is accelerated. But in becoming more through it, we become more vulnerable, more fragile because of it.

    In knowing more and experiencing more, in reaching further, we expose ourselves. And our flaws are amplified. (Surely the model applied to Zeus – for all of his divine greatness and powers, the fornicating, fractious, scheming, self-interested, betraying, vain, capricious, petulant Zeus was simply an extrapolation of our flawed humanity to divine proportion.)

    For the Greeks, in their gods much like their theatre, we find a learning module for humanity – where theatre taught us empathy and the potential of feeling – gods taught us humility and the danger of hubris.

    Great lessons in life and the universe can be better observed and learned when set apart from our everyday realities. A masterplan. It only falls apart when we confuse ourselves with the gods we create – and choose to ordain ourselves as such.

    Pick a culture, any culture: Persians, Romans, Egyptians, Franks, Stuarts – we can never quite allow the gods we create to exist wholly apart from us. And those that seized the divine mantle could never help but eventually rain down on those beneath them in some delusional purge of divinity and dreadful ire – a self-fulfilling prophecy repeated countless times throughout human history.

    Nonetheless, for all of this – for all their flaws and our flawed misuse and mimicry of them, gods have taught us to reach beyond the normal: beyond what is. They have raised us up towards them.

    Simply to envisage them we had to ‘place’ them – and that required a feat of imagination. They are an exercise in imagination as much as they are an exercise in reverence and humility. You have to ‘place’ a god in a world apart from the one in which we exist – a different plane or celestial firmament. You also need to design some form of context and divine order for them. So our imagination, one of the most powerful things at work in us versus any other species on the planet, went to work. And its productivity in that order was staggering. Simply put, seeking divine revelation has powered our multiple ages of renaissance and enlightenment.

    Through gaining a greater vantage and framing of the gods we shape, we can seek to understand them and perhaps become a little closer to them – to being in their image – like them.

    And the most powerful part of all of that reaching? We evolve. Transgressing the given, the immediate and the fixed is how we evolve. And in doing so we explore the flaws in ourselves at a distance.

    One of the most powerful things about reaching beyond ourselves, to a place so exposed, so raw, is that by transgressing where we are in the known universe, we step into the unknown. And the unknown is dangerous; it involves risk. And in a state of risk or threat we evolve.

    Gods are an evolutionary mechanism in us – forcing us to exercise our intellect, imagination, intuition and connectivity in search of their existence and their seeming capabilities and gifts. And subsequently, in managing their presence and mitigating their excesses in relation to us, we expand our consciousness of our own existence, and the methods by which to improve it.

    Through this mechanism we manage the tension between what we do and don’t know.

    In writing a manuscript for a book recently I alluded to us being at a tipping point: where the new-future believers see us merging with machines in some orgy of singularity. We will become dispossessed of our mortal bindings – free to skip the light fantastic. We will have become the ultimate software. Ultimately we will be able to upload ourselves into any and every compatible device, receptacle or host. We can copy ourselves quadrillions of times over.

    Surely this is a step into the divine? In becoming a wholly transferable entity capable of occupying millions of receptacles or hosts simultaneously, we become no different to the God of the Abrahamic faiths or the multiple gods of Grecian Olympus or the pantheon of the Mahabharata of Indian myth. We can become the thing that acts within everything if we so choose.

    In the draft I also point to the possibility of a more balanced relationship between the science and spirituality of us as being the source of our greatest trajectory – a state of being I refer to as the Human Hammock. The Human Hammock provides us with the ability to sling ourselves between the boughs of science and spirituality – to offer a more immediate ability to exist profitably between both the known and the unknown at one and the same time: mentally, materially, physically and metaphysically.

    In the draft I point to the possibility that we need to keep both aspects firmly engaged in us, calibrating the degree to which they feature according to need and desire.

    I believe there is a benefit to us of keeping a clear hand and cold eye on the Unknown, as it is those things beyond our comprehension, and our hunger to understand and know them better that compels our evolution as a species.

    To be clear, when I say ‘unknown’ I do not mean it within the ladder of human consciousness. I am referring to what exists beyond human comprehension, not beyond current scientific knowledge (which exists solely inside human comprehension and consciousness).

    We can ensure that we fix the Human Hammock theory clearly and as absolutely as possible by priming the forthcoming Singularity to abide by biological evolutionary rules.

    Though Singularity might lead us towards a more divine state of elevated and liberated consciousness and ubiquity, we should ensure that it remains rooted in the ladder of our pre-existing evolutionary logic until such a time as a new logic supersedes it.

    Eventually, in multiplying ourselves to that degree and with that expansiveness, we would indeed become gods in our own image of them.

    The circle will have been squared, shifting us through the millennia from Man shaped in the image of gods to gods shaped in the image of Man.

    When talking of gods, it’s worth being clear on what we mean by that and the slide ruler of how they represent and improve us and their relationship to us.

    Gods or deities are supernatural beings that exist in a place above or outside of that of a normal being. They are divine – revered as sacred – and invocation is an inextricable part of our relationship with them. We invoke them – call upon, summon up, reference, or seek them out as part of the reciprocal contract of their and our existence.

    They are supposed to raise our consciousness above the banal and that which exists in our everyday being – to improve us. We can invoke them outside of any chronological or spatial context in the pursuit of something.

    There are different bridges that exist between us and them – prayer is the easiest example. But extreme physical duress or testing is also a much-used way to elevate us into a higher consciousness and bring us closer to our gods and one-ness with the universe. (Shamanism is a great exponent of this.) Extreme physicality is powerful in god world. Add some purpose or cause to that physicality and you are getting even closer.

    There is a direct line to the gods through heroic action, where humans show superhuman willing, guile, leadership, courage, spirit or strength in pursuit of a good or ‘heroic’ cause. As the old saying goes, when someone is ‘touched by the gods’ it means the reflections or shadows of the greater faculties of the gods reside within them.

    In referencing the relationship between us and them in this way we bring them closer to us. Greater proximity to gods is part of the self-defence mechanism innate in the god model and its culture.

    Some classical and ancient texts imbued their god tales with Demi-gods – half human half god – whose heroic undertakings created a picture of greatness that was more accessible to the everyday human being.

    This is the default zone between us and the distant realm of gods as we’ve created them. Demi-gods are very very important to keep people engaged and evolving.

    Why? Because human nature predicts that if something is wholly out of reach – fully blown bells and whistles gods for instance – we don’t rise to the occasion. In the case of lofty, dislocated gods we just sublimate ourselves to them. We don’t desire to be more like them – we just cower, and we give up and go do something else. Because it is beyond us. Out of sight is out of mind unless they might choose to come down and walk amongst us.

    But Demi-gods, now they are far closer to home. If the gods are Gaultier, Demi-gods are Junior Gaultier: the access point for us mere mortals.

    The universal love for Wonder Woman (a Demi Demi, given that she is the daughter of the Demi-god queen, Hippolyta, daughter of Ares, the Greek god of War) is proof of our need for our god-like creations to walk amongst us sometimes. It makes their greatness accessible and mimicry of it possible.

    I can’t be Zeus but I might take a run at being Perseus or even Wonder Woman – ish.

    So gods do not need to always be the pure, super-duper theological or mythological gods of classicism or faith far beyond our ken.  We have the Demi-god to help us move things along. There is little question that we have believed for a long time that there is indeed a ladder to god-like greatness for us.

    What a piece of work is man, How noble in reason, how infinite in faculty, In form and moving how express and admirable, In action how like an Angel, In apprehension how like a god, The beauty of the world,

    Shakespeare’s Hamlet

    So when I speak of gods I refer to anyone perceived as god-like and heroic to us. Someone revered beyond simple explanation, and someone whose words or deeds are invoked by us as succour and guidance.

    In that framework, gods with a small ‘g’ come in many shapes and forms.

    Starting with our parents.

    Our most adored friends can also achieve god-like status for a while.

    Then the broader adulations of our youth: Sports people. Celebrities. Music stars. Movie stars. Writers. Artists. Scientists.

    We even have the passing phase of god-like stature in the first flushes of human love. The phase in which we are fiercely revered, adored and invoked.

    Each of these gods, as with every other, are destined to go on a journey through Awe. Adulation. Reverence. Fear. But each is also destined to disappoint in some way eventually.

    As disappointment is an inextricable part of the human journey, so it has also become an innate aspect of the gods we shape. In some ways being disappointed by gods perhaps prepares us for disappointment with ourselves. If the gods can be disappointing – flawed, capricious, found wanting – then so can we be. And that is alright.

    Disappointment in our gods lessens or softens the disappointment in ourselves.

    In that way, gods that disappoint are an evolutionary mechanism that stop us giving up and turning away – defeated by what we aren’t or cannot do. We learn that though disappointment may strike, that’s alright. It was always thus. You can’t get it right all the time and no one is perfect – not even our gods. So keep carrying on.

    As for Artificial Intelligence, well, perhaps it has to have a Zeus moment. It has to go and sleep with someone inappropriate, sire a child, create a technological Demi-god (and in the absence of any others I would like to venture R2D2 as that Demi-god) who will eventually challenge the god that helped sire it and lay it low.

    Then we can all relax. Go back to ogling iPhone Xs and googling driverless cars, with a quiet knowledge that when they come off the rails, everything is alright. It’s not the end of the world.

    Well, not this one anyway.

     

    Disappointed By Gods FOOTNOTE: This topic will one day become a book – of what length I do not know. But somehow somewhere it will. So if anyone’s got any ideas on a publisher – shout!

     
