In response to another popular permutation of reductionism, it is clear that advances in AI, however popular a topic in contemporary technological research, cannot provide us with machine consciousness: computers lack all self-awareness, emotion, and conception of beauty, being bereft of joy, awe, or delight, loveless and wholly insensate. And this is even before we consider the proposed limits of Artificial Intelligence as presented by Hubert L. Dreyfus in his classic What Computers Can’t Do, where he states:
In discussing CS [cognitive simulation] we found that in playing games such as Chess, in solving complex problems, in recognising similarities and family resemblances, and in using language metaphorically and in ways we feel to be odd or ungrammatical, human beings do not seem to themselves or to observers to be following strict rules. On the contrary, they seem to be using global perceptual organization, making paradigmatic distinctions between essential and inessential operations, appealing to paradigm cases, and using a shared sense of the situation to get their meanings across.
It is noticeable that, from his very first publications up until now, the AI community has been conspicuous in burying its head in the sand over the writings of Dreyfus, offering the occasional haughty dismissal (expectable, if unsubstantiated) but never a solid academic rebuttal capable of counteracting his compelling blend of Heideggerian philosophy, phenomenology, and hard science. The most anyone managed in one instance was to claim that humans do indeed follow hard-set rules of which we are so far unaware, echoing Alan Turing’s 1950 reply to the ‘argument from the informality of behaviour’, though from a practical standpoint this admittedly parallels the unfalsifiable ‘just over the horizon’ gene-hunting manias of bio-psychiatric genetics researchers. When is enough considered enough, and a scientific moratorium imposed on what has so far proven a considerable waste of time and money?
It is accurate, as of 2026, to state that computers, despite wild publicity and hype (and, again, an atmosphere of unquenchable, quasi-religious enthusiasm for AI research’s purported ‘success’, a level of belief common also to devoted psychiatrists, which may be no coincidence given the model of human cognition adhered to by both disciplines and the bio-reductionistic overlaps between these fields of consciousness research), are still unable to perform tasks requiring deep context and meaning. Lacking the nuance of emotional intelligence, they cannot respond effectively to human emotions, context-heavy cultural references, or subtle human interactions, or indeed interpret them at all.
There is no empathy, and no amount of personalised feedback or interactive gamification by humans can instil a legitimate phenomenological drive to compassion or interpersonal understanding. They cannot perceive us; there is no theory of mind.
Nor do the computing machines of today possess any ability to make the intuitive leaps humans rely on in their decision-making.
Besides this, they have no genuine creativity or sensible artistic impulse, and no agency, relying instead, rather obviously, on human input.
Even artificial general intelligence (‘AGI’), a theoretical form of AI that would match or surpass human cognitive ability in all areas, cannot handle causal problems dependent on a model of reality, as argued by Ragnar Fjelland in his 2020 paper “Why general artificial intelligence will not be realized”, Humanities and Social Sciences Communications 7(1): 1–9.
This paper states that proponents of AGI and the strong AI model, reliant as they are on the work of Yuval Noah Harari and Francis Crick, have made the glaring error that the mathematician and philosopher Edmund Husserl famously recognised in Galileo’s Platonic thought (an idea also present in the deterministic mathematics of Pierre-Simon Laplace): in objectivist fashion, they presume the world is ‘nothing but’ one of bio-chemical algorithms running in vast assemblies of nerve cells. In truth, as elaborated by Theodore Roszak with his thought-experiment example of a Buchenwald psychiatrist’s ignorant incomprehension as to why his patients presented to him as very upset (!), this does nothing to help us understand other people by putting ourselves in their shoes, as context is forever missing; it seriously oversimplifies humanity and social phenomena, abstracting reality into something idealised and metaphysical, governed by mathematical functions rather than the causal relationships evidenced by empirical science.
Computers not being in our world (i.e. there being no genuine connection with us beyond what we ourselves contrive; a gap forever in place), the claims of Big Data advocates that the data ‘speak for themselves’ are hollow. Not all neural networks require a programmer in the old sense: the deep reinforcement learning behind DeepMind/Alphabet’s Go-playing network AlphaGo could train on earlier versions of itself rather than on competent human players, and can indeed handle tacit knowledge, albeit of an unrealistic kind. Yet the data models utilised must still be selected by humans, and they consist of numbers.
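To make the self-play idea concrete, here is a deliberately trivial sketch (my own toy, emphatically not DeepMind’s code; the stand-in ‘game’ and the hill-climbing update are illustrative assumptions): the learner improves by playing a frozen earlier version of itself, and yet a human has still chosen the game’s encoding, the policy’s parameterisation, and the reward, all of it numbers.

```python
import random

def play_game(p_a: float, p_b: float) -> int:
    """Trivial stand-in 'game': each side plays action 1 with its own
    probability; action 1 beats action 0, identical actions draw.
    Returns +1 if side A wins, -1 if side B wins, 0 for a draw."""
    a = random.random() < p_a
    b = random.random() < p_b
    return (a > b) - (a < b)

p = 0.1  # the whole 'policy' is one number: P(play action 1)
for generation in range(50):
    # Mutate the policy, then pit it against a frozen earlier self.
    candidate = min(1.0, max(0.0, p + random.uniform(-0.1, 0.1)))
    score = sum(play_game(candidate, p) for _ in range(500))
    if score > 0:
        p = candidate  # keep the version that beat its predecessor
print(f"learned P(action 1) = {p:.2f}")  # climbs towards 1.0
```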
As it stands, and despite the expectably stiff expenses of research and development, DeepMind continues to run at a loss on account of deep reinforcement learning’s disconnect from real-world problems in a changing world, having lost over one billion dollars over the three years from 2016 to 2018; like the rest of the AI industry, it is a victim of the ‘fallacy of initial success’.
It may be a shame for some to burst this bubble, if long overdue, but these vital human abilities will not, and cannot, ever be achieved by computing technology; this follows from the fundamental nature of such machines, and the pronouncement stands even when we account for the immense, irresponsible time-sink and energy-heavy resource drain of quantum computing research, whose high-tech machines function far beyond the capabilities of the best classical supercomputers.
Even quantum entanglement and superposition cannot determine quantum phases of matter, susceptible as these computers are to decoherence. As the mathematician Gil Kalai observed in 2025, the phenomenon of noise (i.e. random fluctuations and errors) seriously affects the outcome of the process, with the potential to corrupt many qubits at once, and the machines lack quantum error correction; since the correction effort increases exponentially with the number of qubits, it becomes impossible to achieve an error level low enough to implement quantum circuits. Solving some difficult problems (such as detecting the mass of the black hole binary GW231123) would take a (so far theoretical) 20-million-qubit quantum computer an estimated many billions of trillions of years, and current machines are nowhere near that number of qubits, in fact operating barely past the 1,000-qubit mark.
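A toy calculation makes the scaling problem vivid. The figures below are mine, under an idealised assumption of independent per-qubit errors (far more charitable than the correlated noise Kalai describes): even then, the probability of an uncorrected register completing a single step error-free collapses exponentially with qubit count.

```python
# Toy illustration (my own, simpler than Kalai's correlated-noise
# argument): assume each qubit independently suffers an error with
# probability p per step. Without error correction, the chance that an
# n-qubit register completes one step entirely error-free is (1 - p)**n.
def survival_probability(n_qubits: int, p_error: float) -> float:
    """P(no error at all in one step), under independent errors."""
    return (1.0 - p_error) ** n_qubits

for n in (50, 1_000, 20_000_000):
    print(f"{n:>10,} qubits at p = 0.001: "
          f"P(no error) = {survival_probability(n, 1e-3):.3e}")
# Roughly 0.951 for 50 qubits, 0.368 for 1,000, and numerically
# indistinguishable from zero for 20 million: hence the need for,
# and the crushing overhead of, error correction.
```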
Quantum computers remain far less complex than the human brain, lacking the intricacy of its neural networks, which comprise around 86 billion neurons interconnected by trillions of synapses; the brain excels at parallel processing, pattern recognition, and learning, even before emotional and social intelligence is considered.
So even if they could—which they can’t—reach that level of qubits, would it be worth it?
Also, reliant as quantum computers are on the generation of random numbers, is it even correct to claim they are modelled on the reality of human thought?
Indeed, in Shadows of the Mind, in the conclusions of chapter 3, “The Case for Non-Computability in Mathematical Thought,” the physicist Roger Penrose likewise acknowledges a clearly non-axiomatic quality to the process of thinking, saying, “we appear to be driven to the firm conclusion that there is something essential in human understanding that is not possible to simulate by any computational means”, having speculated in the preceding lines, asking of us, “is it conceivable that there is an essentially non-random nature to the detailed behaviour of some chaotic systems, and that this ‘edge of chaos’ contains the key to the effectively non-computable behaviour of the mind?”
Furthermore, edge-of-chaos dynamics are discussed at length in the first chapter of a fascinating Advances in Consciousness Research book titled Fractals of Brain, Fractals of Mind, edited by Earl Mac Cormac and Maxim I. Stamenov (and which may render my other writings to some degree obsolete). The book reminds us, on its very first page, that when various scales of complexity in the (nonlinear, dynamical) brain are considered, the brain can be observed to take on a fractal-like structure in which neural structures at many different spatial scales are embedded recursively, with reference to the many scales of supra-neural structure in Gerald Edelman’s ‘Neural Darwinism’ model of 1987 (among many other researchers and theorists discussed). It goes on to suggest, following Chris G. Langton (“Computation at the edge of chaos: Phase transitions and emergent computation”, Physica D: Nonlinear Phenomena, Volume 42, Issues 1–3, June 1990, pages 12–37), that complex systems may be positioned on a continuum between the highly ordered and the highly chaotic.
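Langton’s continuum can be made concrete with his λ parameter: the fraction of rule-table entries mapping to a non-quiescent state, with very low values giving frozen, ordered dynamics, values near the maximally mixed table giving chaos, and complex ‘class four’ behaviour reported at intermediate, critical values. The sketch below assumes the standard definition; the Rule 110 example is my choice of illustration, not the book’s.

```python
from itertools import product

# A sketch of Langton's lambda parameter (Langton 1990): the fraction
# of rule-table entries that map to a non-quiescent state.
def langton_lambda(rule_table: dict, quiescent_state: int = 0) -> float:
    non_quiescent = sum(1 for out in rule_table.values()
                        if out != quiescent_state)
    return non_quiescent / len(rule_table)

# Example: elementary CA Rule 110 (two states, three-cell neighbourhood).
RULE = 110
rule_table = {
    nbhd: (RULE >> (nbhd[0] * 4 + nbhd[1] * 2 + nbhd[2])) & 1
    for nbhd in product((0, 1), repeat=3)
}
print(f"lambda(Rule 110) = {langton_lambda(rule_table):.3f}")  # 5/8 = 0.625
```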
In the specific example of a brain system, movement to a more ordered state constitutes a recognition-based, engaged, unreceptive mode of interaction, whereas movement to a more chaotic state requires an alert, ready, receptive mode of interaction (according to the article “How brains make chaos in order to make sense of the world”, by Christine Skarda and Walter J. Freeman, Behavioral and Brain Sciences (1987) 10: 161–195).
One way to explore this is to examine cellular automata, i.e. simple computational devices which switch from one discrete state to another depending on neighbour-states at the previous discrete time step; as Stephanie Forrest notes in her paper “Emergent computation: Self-organizing, collective, and cooperative phenomena in natural and artificial computing networks” (Physica D: Nonlinear Phenomena, Volume 42, Issues 1–3, June 1990, pages 1–11), large systems of identical automata display collective properties which are far from straightforwardly computational. An analogy for these two extremes would be the behaviours of solids and of gases, respectively.
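For the curious, a minimal implementation of such a device (my own illustration, not drawn from Forrest’s paper) is given below: an elementary one-dimensional automaton with periodic boundaries, in which each cell’s next state depends only on its own and its two neighbours’ states at the previous time step.

```python
# Elementary 1-D cellular automaton with periodic boundaries. The
# 8-bit rule number encodes the next state for each of the eight
# possible three-cell neighbourhoods; Rule 110 is a commonly cited
# complex ("class four") rule.
def step(cells: list[int], rule: int) -> list[int]:
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                  + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # single seed cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, rule=110)
```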
In a further, linked analogy to the process of sublimation, we can consider ‘class four automata’, which display properties seen in neither highly ordered nor highly chaotic cellular automata, their complex behaviours described as extended transients: metastable dynamics produced by the tension between order and chaos, propagating unpredictably, albeit with clearly observable coherent patterns in their evolution (hence the effervescence). Extended transients enable the possibility of long-range interactions at the global scale. At the edge of chaos, cellular automata can influence each other according to a power-law distribution (Stuart Kauffman, 1991), whereby nearby sites communicate frequently in small ‘avalanches’ of changes while distant sites communicate rarely, albeit with large avalanches of change; the extended transients reveal the most effective trajectories, optimally positioned between total order and total chaos. The resulting behaviour resembles the dynamics of real-world complex systems capable of producing solitary waves, i.e. ‘solitons’: nonlinear, self-reinforcing, localised wave packets that provide stable solutions to a range of (weakly) nonlinear dispersive partial differential equations describing physical systems, and ensure a nearly lossless energy transfer of wave-like propagations (again, with initial reference to Chris G. Langton, 1990).
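For concreteness, a standard textbook illustration (not drawn from Langton’s paper): the Korteweg–de Vries equation, a weakly nonlinear dispersive partial differential equation, admits the one-soliton solution below, a localised wave packet travelling at speed c without change of shape, the nonlinearity exactly balancing the dispersion.

```latex
% Korteweg--de Vries equation and its one-soliton solution
% (standard textbook form; c > 0 is the wave speed, x_0 the
% initial position of the peak):
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\right).
```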
To return, however, to the basic nature of thought: according to the overview given in Chapter 7 of Stairway to the Mind, by Alwyn Scott, Roger Penrose clarifies matters by outlining “four philosophical positions that one may assume”:
A) All thinking is computational; in particular, feelings of conscious awareness are evoked merely by the carrying out of appropriate computations.
B) Awareness is a feature of the brain’s physical action; and whereas any physical action can be simulated computationally, computational simulation by itself cannot evoke awareness.
C) Appropriate physical action of the brain evokes awareness, but this physical action cannot even be properly simulated computationally.
D) Awareness cannot be explained by physical, computational, or any other scientific terms.
Scott goes on to explain:
A is the position of strong artificial intelligence, or functionalism, and D is the position of the mystic. Both are rejected by Penrose, so the choice is between B and C. B, he suggests, is the view that would generally be regarded as “scientific common sense” because the simulation of a physical process is not the same as the actual process. (“A computer simulation of a hurricane, for example, is certainly no hurricane!”) Nonetheless, C is the position that Penrose believes to be closest to the truth. View C holds that not all physical actions can be simulated on a computer, and Penrose argues—as did [Eugene] Wigner—that such non-computable physical laws may lie outside the present purview of physics.
In his short, informative presentation on the debate between Roger Penrose and Emanuele Severino in Artificial Intelligence Versus Natural Intelligence, Fabio Scardigli summarises this argument, explaining that the authors consider, as Roger Penrose does, that ‘true’ intelligence requires consciousness, something our digital machines do not have and never will. These authors are also opposed, like Penrose, to the standard AI view of human beings as a kind of ‘wetware’. They contest both the strong AI belief that consciousness emerges from brains alone, as the product of something similar to the software of our computers, and the physicalist view that consciousness ‘emerges from functioning’, like some biological property of life.
He goes on to say that these researchers hold that the essential property of consciousness is the ability, the capacity, to feel. Of course, the ability to feel implies the existence of a subject who feels—a self. Therefore consciousness is inextricably entangled with a self which (or who) feels inner experiences. Central to the discussion is thus the construction of a theory of ‘qualia’ (i.e. specific instances of subjective experience, for example, the taste of a tomato, or the pain sensation of a broken rib, as opposed to propositional attitudes, which are merely neutral, content-bearing beliefs about an experience).
Benjamin’s postscript by email:
The other part of the ‘popular reductionist positions’ is of course psychiatric bio-reductionism. I wanted to take down both at once in this chapter, as a nod to my preceding chapter (“Psychiatry is a Sham”), insinuating that they’re both wrong as they’re utilising the wrong model of (human) cognition.
My final (serious) paragraph in the chapter reads:
It feels to me that this matter of where science ends requires much more investigation; the current body of multidisciplinary research is not suitably balanced. By that I do not mean we should make a naïve capitulation to the patent ridiculousness of New Age mysticism, only an acknowledgement, as with the naturalists of the German Romantic movement, that positivist scientism cannot address some questions. As Stefan Zweig noted in his analysis of the European psyche in his 1939 book The Struggle with the Daimon: Hölderlin, Kleist, Nietzsche, we have unfortunately by now come to a fiercely analytic tradition in the West, at odds with our natural disposition, if very hard to shake at times (and with disastrous consequences for all three men considered, particularly Friedrich Nietzsche).