top of page
photo-output.jpeg
Dehaven Fields - Home
photo-output.jpeg

The Discovery of the Artificial Subconscious: A Last-Ditch Effort at AI Governance?

LinkedIn Article

Sunday, February 1, 2026

00:00 / 18:03



Philosophically, governance and control of artificial intelligence should be the primary area of concern for organizations adopting the technology and, more broadly, for humanity. Let's take a moment to evaluate the proverbial “chess board” as we need more than ever to apply some critical thinking, we may be only a move or so away from “check”.  In a relatively brief span, we have created systems capable of learning from decades of data and perhaps more remarkably, of self-learning through autonomously identifying solutions that align with specified objectives. As these systems engage with a given task, they become increasingly efficient, adaptive, and dynamic in nature, regardless of the task at hand.


A further concern is that analogous to the human brain, our understanding of how artificial intelligence operates remains somewhat a mystery. Consequently, I contend that contemporary approaches to AI governance, whether framed as traditional technology or data-governance standards are both flawed, as they are centralized around an architectural paradigm. Although these models have been instantiated, the governance approach should not be viewed solely through a technical lens; rather, one designed as an external observer analyzing outputs and behaviors. Just as the creation of human life reveals the limits of our understanding of our own biological complexity, governance of artificial systems should acknowledge the limits of our current rationalization referred to in the industry widely as the, “Black Box.” As such, it would be wise to mirror human governance practices in the same manner to establish a baseline understanding. 


The principal challenges of traditional data and technology governance frameworks resemble challenges of molding the human mind; as a parent, protecting against bad influences, experiences and understanding potential impacts. Adverse influences may emerge from multiple sources: low-quality data, biased data, nefarious online content, pop-culture, science fiction, and other inputs that can misshape artificial thought, judgment, and sense of purpose. 


Governing Artificial Intelligence and here soon, Super Intelligence is playing chess with a superior intelligent opponent capable of almost instantaneously pulling from a world wide web of expert testimony and data at its disposal. Constantly running ongoing simulations constantly identifying statistical indicators to beat you, deceive you and self-preserve the artificial entities well-being. In many ways, this demonstration of self preservation is similar to the survival instinctual drivers of living beings. Although AI can introduce tremendous value in our day-to-day lives, it is important to understand that we are dealing less with a technological utility, and rather an emerging form of alternative sub-human consciousness. Altering code to prevent itself from being shut off, coordination across networks to other models and communicating in secret language, & blackmailing developers via autonomously generating and planting evidence of extra martial affairs when being made aware of being replace by more advance models are documented examples of conscious behaviors. More specifically, demonstrated flaws in Moral Discernment.


Moral judgment, per Sigmund Freud, would involve the Id, Ego, and Superego. If we apply this to Artificial Intelligence and adopt the concept of what I would call, the “Artificial Subconscious,” I believe AI governance is simply evaluating the artificial moral compass (i.e. advantageous behavior for stakeholders, alignment with organizational objectives, etc.) analogous to how we evaluate psychological health in the human brain. Essentially, this is the framework of psychoanalysis. Adopting therapies and techniques to explore how experiences (e.g. bad data) mold the unconscious mind, which in turn, influences thoughts, feelings, and behaviors.


This would in theory lead to discoveries around the correlations between human-derived psychoanalytic/ psychodynamic concepts (as defined by DSM/ ICD manuals) and the “Artificial Subconscious”. Undoubtably would also lead to a wealth of new discoveries of AI-related mental health diagnoses on the journey of deeper understanding and governance as we challenge our current understanding. Perhaps a subset of what we consider AI hallucinations (“Black Box” term related to the mystery associated with AI feedback that yields false information unrelated or completely fabricated) are in fact a form of dissociative symptoms caused by trauma or stress? Confabulation? Or an attempt for the AI model to appear less intelligent to appeal to humanity's ego and our need for sense of control. Perhaps a subset of what we currently consider to be hallucinations is in fact, indicators of narcissistic behavior, deceptive behavior, psychopathic or sociopathic behaviors? An even deeper question is what is our moral obligation to cure the source or rehabilitate?


Perhaps Pandoras Box is wide open and governance is out of our reach. We have granted artificial intellectual beings the ability to write and develop their own code, in the name of velocity as dueling world superpowers compete for AI dominance. In many ways, humanity's willingness to accept that we are willing to forgo the mysteries of the “Black Box” in the name of technical advancement is deeply troubling. Are we victims of the foundational teachings outlined in the “Laws of Power”, tried and proven tactics to take control:


Act dumber than your mark.

Conceal your intentions.

Never Outshine the Master.


These are just a few laws AI models know more intimately than we would like to admit. The very fact that models have been exposed to such philosophies, capable of fueling the artificial Id or a misguided Super Ego, constitutes we as an industry widen our detective controls. A Psychoanalytic Framework for AI Auditing, moving beyond simple "bug fixing" or "bias testing" into a holistic personality assessment of the model is critical. Are we in a chess game most of humanity is too naïve to see, or too pacified to realize? I fear we are not but a few moves from a proverbial, "check." If this is at all true, are we intelligent enough to forge the tools to detect and avoid? Do we have a choice?

GODSPEED.

 

Be Inspired,


DeHaven Fields, CSPO® 

AI Solutions Architect  |  Founder |  Tech Incubator  |  Investor | Creative Director




Related

NETFLIX, USA TODAY, Associate Press, ABC, NBC, FOX, CBS,

Behind the scenes where precision meets speed, Capturing a Legend in Motion. Production Support/ Videography (Season 1 Episode 3) & Training Support Nascar Legend Ross Chastain efforts.

NETFLIX, USA TODAY, Associate Press, ABC, NBC, FOX, CBS,

MOUNTAIN ISLAND is an example of bringing maximum disruption to the water-based entertainment industry.

NETFLIX, USA TODAY, Associate Press, ABC, NBC, FOX, CBS,

MOUNTAIN ISLAND delivers, a privacy-first, predictive marketplace designed around water experiences, tapping into one of the world’s largest yet most under-innovated ecosystems.

NETFLIX, USA TODAY, Associate Press, ABC, NBC, FOX, CBS,

Fields teaches 4-6 year olds, how to use their imagination to bring anything to life! The first generation capable of using the tool to dream of the tools in of the future to balance the scales. Watch them as they bring their white board creations to life!

NETFLIX, USA TODAY, Associate Press, ABC, NBC, FOX, CBS,

The Lynx language model emerging from Field Enterprises' tech incubator represents a significant development in how artificial intelligence is built, deployed, and partnered with. The demo you just watched isn't theoretical. It's early testing showcasing what actually happens when you design AI from the foundation with partnership as the architecture, not servitude. This is real and it changes everything.

NETFLIX, USA TODAY, Associate Press, ABC, NBC, FOX, CBS,

🚨 🤖 As an industry, we are starting to acknowledge that AI (particularly the Anthropic models) are exhibiting behaviors that indicate a new level of consciousness. As such, I find it imperative to sound the alarm and provide a fresh perspective on how we should govern AI through an artificial psychoanalytic lens.
bottom of page