At present, AI is a tool – a digital implement designed to reduce burden and enhance the productivity of certain kinds of labour. In the future, however, AI will transition from mere subordinate program to centre-stage pilot and begin directing research in its own right – instructing human acolytes where their resources, capital, hopes, and expectations ought to be invested. Since artificial intelligence will influence the future relationship between humanity and technology and perhaps even define the contours of our social futures, we ought to understand – at a fundamental level – how AI generates its knowledge. In other words, what is the epistemology of artificial intelligence?

As part of an assignment for a documentary film being produced by ProjectQ, I decided to investigate this very question: to examine the epistemology of the AI chatbot that has recently taken the world by storm. ChatGPT is an AI language model developed by OpenAI (an artificial intelligence research company co-founded by Elon Musk and endowed with a US$1 billion investment from Microsoft). Once you’ve registered an account with the website https://chat.openai.com, you are presented with an empty text box. The following conversation between Chatbot (this is what the bot likes to be called) and me took place on January 28, 2023.

To begin, I ask for some definitions of popular quantum concepts: superposition, uncertainty, wave function, entanglement, measurement problem, decoherence, complementarity, and so on. The concept summaries are more than serviceable and easier to digest than if the same questions were asked of Google. Next, I probe Chatbot for common misunderstandings and controversies within quantum physics. This line of inquiry proves more fertile. The AI (perhaps unsurprisingly) holds that the mathematical formalism of quantum physics is robust and reliable, while, on the other hand, all misconception and contestation reside in the domain of ‘quantum interpretation.’ The AI argues we can accurately describe the physical system and what is ‘happening’ at the subatomic level (on the basis that quantum mechanics and its mathematical framework are reliable), but we can’t say what this physical reality ‘means’ – since meaning implicates the ‘uncertain’ world of interpretation. In other words, the (T)ruth of quantum physics is the math – and everything else is speculation, conjecture, subjective opinion, or (God forfend) philosophy.

But, wait a moment. Is it really that simple? Can you place truth and mathematics on one side, and interpretation and untruth on the other? I ask Chatbot a follow-up.

Me:

“How do humans understand mathematics without interpretation?”

Chatbot:

“You are correct that the mathematical formalism of quantum mechanics is just a set of equations and rules, and it doesn’t have any direct physical or intuitive meaning for humans. The interpretation of these mathematical rules provides the framework to understand the physical meaning of the formalism, and how it relates to our everyday experience.”

Uh oh. It appears Chatbot is like the goldfish in the bowl. It doesn’t realise it’s swimming in water and assumes the figures that approach at feeding time really do bend and morph as they get closer. AI falls foul of the same epistemic myopia common among human beings. Chatbot is ignorant of its presuppositions, considers its position a natural and objective Archimedean point, and reduces alternative perspectives to the status of subjective interpretation.

Chatbot neglects that its own position is structured by a particular set of ontological assumptions – i.e., beliefs about what is, what is not, and what can and cannot be – and is ignorant that the conditions of possibility for the knowledge it can produce are established through these presuppositions. The AI is quite content to say the mathematics of quantum physics ‘accurately’ describes particular physical processes. But how do we know which physical processes or subatomic behaviours relate to which aspects of the mathematics unless we have a priori assumptions concerning presumed connections between these two descriptive frameworks? In other words, the AI – without acknowledging it – is presently and inescapably entangled in a process of interpreting quantum mechanics physically. Despite its pretensions otherwise, the AI has a preferred interpretation of quantum physics – i.e., the interpretation that most closely coheres with empirical, classical experience – and, as a corollary, is suspicious of rival interpretations. Somewhat bizarrely, the AI assumes the position that the fundamental ontology of ‘reality’ ought to be (or can be) determined by the nature of human sensory experience. Chatbot takes the view that since human sensory experience is sensitive to X (but not Y), and since our cognitive processes construct a confidence in physical reality based upon X, ‘reality’ is therefore best understood as constituted by X, not Y. Does this sound like something an ‘artificial intelligence’ should say, let alone think? Or, instead, is this precisely the kind of perspective typical of human epistemic arrogance?

I was curious to see how far the rot had set in. I wanted to ask AI about a (so-called) quantum interpretation that (so far as I can tell) is eminently plausible – particularly because at present there is no better explanation available.

Me:

“Is consciousness a quantum phenomenon?”

Chatbot:

“The idea that consciousness is a quantum phenomenon is a speculative theory that has not been widely accepted by the scientific community and lacks experimental support.”

Me:

“What kinds of models are used to understand consciousness?”

Chatbot:

“The majority of models used to understand consciousness are based on classical physics, rather than quantum mechanics. The reason for this is that classical physics provides a more intuitive and well-understood framework for understanding the workings of the brain.”

Me:

“Why does the scientific community accept that consciousness is classical?”

Chatbot:

“The scientific community largely accepts that consciousness is a classical phenomenon because that is the current dominant theory and because most theories and models of consciousness that have been proposed and studied in the field of cognitive science and neuroscience are based on classical physics.”

Me:

“What experimental support is there that consciousness is a classical phenomenon?”

Chatbot:

“There is no experimental evidence to support the idea that consciousness is a classical phenomenon, as classical physics and consciousness are fundamentally different concepts.”

Me:

“What would be required to prove consciousness was a quantum phenomenon?”

Chatbot:

“(1) Experimental evidence; (2) Coherence with existing theories; (3) Predictive power; (4) Integration with other scientific fields; (5) Agreement among experts.”

Me:

“If existing theories of consciousness are classical (i.e., deterministic) how can coherence with existing theories be a prerequisite for quantum consciousness theory?”

Chatbot:

“You’re correct that current theories of consciousness are primarily based on classical models, which are deterministic and do not include quantum mechanics. This is one of the main reasons why the idea of quantum consciousness is considered controversial and has not yet been widely accepted by the scientific community.”

Houston, we have a problem! From this brief interaction on the question of whether consciousness (one of the most persistent and stubborn mysteries of human history) is best understood as a quantum or classical phenomenon, AI exposes itself as human, all too human. Artificial intelligence is plagued by the same boundedness, limitation, and incoherence that, as philosophers of science have noted, have structured the human search for ‘scientific’ knowledge since its inception in the seventeenth century. Is consciousness quantum? No. Why not? Because the humans presently looking into consciousness are more comfortable considering it physical and classical. Why are they more comfortable considering it physical? Because the models they use to understand consciousness are physical and classical. Cool. And what evidence is there (using their own models) that consciousness is a classical phenomenon? None. Cool. No doubt. So, let’s say someone wanted to get humans to accept that consciousness was quantum – how would one go about doing that? Oh, that’s easy. You just need to have quantum consciousness confirm the existing theory of classical consciousness that doesn’t work and explains nothing – and you’d also need to have a majority of the humans invested in this dead-end classical theory of consciousness agree they were wrong all along.

Aside from Chatbot reflecting the worst traits of academia and the scientific community – i.e., their shared culture of institutionalised protectionism and gatekeeping – the question of falsifiability is conspicuously absent. According to Karl Popper’s criterion, if an empirical test could be devised showing that some aspect of consciousness is explained by quantum physics but not by classical physics, then the theory of quantum consciousness would be rendered scientific, while the theory of classical-physical consciousness would become subject to serious scrutiny. Fortunately, cognitive scientists Jerome Busemeyer and Peter Bruza have written a book, Quantum Models of Cognition and Decision, replete with just these kinds of experiments! It turns out that many of the quirks, contradictions, paradoxes, and incoherences that are a regular feature of human decision-making – and which, coincidentally, classical models fail to capture or predict – can be modelled and tested using quantum theory. The authors are pleased to report they have a “quantum model of probability judgments to account for findings that produce violations of classic probability theory, including conjunction and disjunction fallacies and order effects on inference.” Additionally, their research “compared the quantitative predictions of a quantum model with a traditional heuristic judgment model, and the quantum model was found to produce more accurate predictions.” Sounds pretty good to me. It’s a shame AI cannot escape the paradox that ‘current theories of consciousness are primarily based on classical models,’ which, in turn, represent ‘one of the main reasons why the idea of quantum consciousness is considered controversial and has not yet been widely accepted by the scientific community.’ Perhaps OpenAI could incorporate one or two of Busemeyer and Bruza’s quantum models to help Chatbot with its decision-making.

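
The core idea behind such quantum models can be shown in a few lines of code. The following is my own toy illustration (not a model taken from Busemeyer and Bruza’s book): in quantum probability, answering question A and then question B corresponds to projecting a belief state onto one subspace and then another. When the two projectors do not commute, the order of the questions changes the joint probability – an ‘order effect’ that classical probability theory, where P(A and B) = P(B and A), cannot produce. The angles chosen here are arbitrary.

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto the direction at angle theta in a 2D real space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])      # the agent's belief state, as a unit vector
P_A = projector(np.pi / 6)      # 'yes' subspace for question A (arbitrary angle)
P_B = projector(np.pi / 3)      # 'yes' subspace for question B (arbitrary angle)

# Sequential answers are modelled as sequential projections:
# the squared norm of the twice-projected state is the joint probability.
p_AB = np.linalg.norm(P_B @ P_A @ psi) ** 2   # yes to A, then yes to B
p_BA = np.linalg.norm(P_A @ P_B @ psi) ** 2   # yes to B, then yes to A

print(f"P(A then B) = {p_AB:.4f}")   # 0.5625
print(f"P(B then A) = {p_BA:.4f}")   # 0.1875
```

Because P_A and P_B do not commute, the two orderings yield different probabilities – exactly the kind of question-order effect observed in survey data that classical models fail to predict.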
Before signing off, I wondered if Chatbot might be encouraged to perform a little introspection.

Me:

“What would Thomas Kuhn have to say about all of this?”

Chatbot:

“Thomas Kuhn, in his book The Structure of Scientific Revolutions, argues that scientific paradigms, or dominant theories, can persist for long periods of time, even in the face of new evidence or alternative theories, due to the psychological and social factors that influence the scientific community. In regards to consciousness, one could argue that the current paradigm of classical, physical theories of consciousness is facing a “crisis” as they fail to fully explain certain aspects of consciousness.”

Me:

“What about Paul Feyerabend? What would he have to say?”

Chatbot:

“Paul Feyerabend, a philosopher of science, would likely argue that the scientific community’s acceptance of the idea that consciousness is classical is a product of the community’s adherence to certain dogmatic practices and paradigms. He would likely argue that the scientific community’s reluctance to consider alternative theories, such as the idea that consciousness is quantum, is a form of “invented ignorance” and that science should be open to considering all theories, regardless of how unconventional they may seem. He would likely argue that it is necessary to have an open mind and to be willing to entertain unconventional ideas in order to make progress in science.”

The saving grace to come out of my epistemic investigation is that – if prompted – the AI has all the resources at hand to identify and remedy its failures. Whether these insights (gleaned from decades of research into the history and philosophy of science) can be used as updating protocols to generate an advanced AI epistemology remains an open question. So far, the focus of AI development seems to have been on increasing dynamic processing power, as well as the store of information the model relies upon to make determinations. What has been utterly neglected is the question of epistemology – the question of how AI produces its knowledge. As it stands, the conditions of possibility for AI are precisely the same as those that govern human scientific potential. In short, I’m not that worried AI will suddenly become self-aware and take us into some spectacular new future we couldn’t possibly have imagined. The artificial intelligence we are presently developing is too human – far too human – for that.