I am going to make a bold prediction in this piece: Artificial Intelligence will never reach consciousness in any manifestation that is meaningfully comparable to ours. Furthermore, it will never be ‘alive’ or ‘sentient’ in any objective way.
I have pondered (and written) extensively on the technological state of current AI. I have engaged in thought exercises about how AI might turn from what it is into the fantastical, super intelligent, robot-like beings that movies so often portray. What developmental path could the tech follow toward that result, using the concepts available to us, however theoretical?
The last decade reveals that attempting to anticipate humanity's relationship with technology is rife with peril. It assumes that our collective decisions about critical issues rest on at least some measure of rationality. That, if our survival itself is on the line, people will abandon outlandish beliefs and dogma and replace them with concerns of manifest necessity.
Millennia of history, of course, illustrate that this is not universally true. But 21st century society likes to tell itself that its progression in science and technology means today’s decision making no longer suffers from the ignorance of the past. Unfortunately, substantial portions of the population maintain the paradoxical position that our advancements can resolve any problem while resisting the methodologies that their ideological compulsions don’t like. This is self-evident in the debates about vaccines or climate change.
With this in mind, prognosticating on how humans might react to increasingly lifelike renditions of AI could be a fool's quest. Yet, I am willing to take the plunge on two foundations. The first: human hubris about the importance or preciousness of our species is not only precipitating its own downfall, but will also prevent the creation of any longer-lasting progeny, including intelligent robots. Put simply, humanity has not proven to be a good steward of its environment, despite its dependence upon it for survival, so to expect humanity to develop a more advanced—better—version of itself might be the height of arrogance.
There are, in my view, two predominant camps that will all but ensure this outcome. One is a group driven by avarice: they know precisely what harms they commit, but do not care, because for their short lifetimes (in historical terms) they will benefit so disproportionately that compassion and empathy are simply foreign words. These are, right now, the controllers of the development of AI. Progress is not their priority, just the monetizable appearance of it. The other, the majority, wishes to enact positive change but is cornered into choosing between making individual self-sacrifices that will have limited impact on anything, or caving to the philosophy that pretends all is well as a sort of mental salve to ease guilt, fear, or loathing.
Second, technological progression is a very long way off from conquering certain human intuitions, especially given that we do not fully understand them ourselves. By this I mean that people have a knack for discerning certain things for reasons we cannot fully explain. We accurately sense things in other people or situations without consciously identifying how. It is something like that "feeling" one gets about a shady person that later turns out to be correct. AI development has a long, long way to go before it can emulate this intuition well enough to trick people or manipulate their feelings.
As I describe in detail below, no evidence indicates that either of these obstacles is imminently surmountable. While so many have succumbed to the barrage of propaganda about the wizardry of AI, the real-world deficiencies continue to become ever more glaringly apparent. The technology assuredly has applications for which it will prove a quite useful tool. For now, however, the priority is to capitalize on the emotional bubble that its venture capitalist backers have artificially propped up.
It is for this reason that we continue to have these nonsensical debates about terminators and super intelligence. Without keeping these fictions in the lexicon, the delusional assertions of what AI is or will become will wither away like the rotting corpses they truly are. It is past time to put to bed the debates about the future dangers of AI, and talk seriously about its current dangers primarily caused by the contaminated hands that control it.
So, for these reasons, I assert that AI will never be conscious. Below are just two avenues of thought, among many more possibilities.
The Order (and Disorder) of Life
[Life] is a product of the laws of nature, but we are a long way from understanding how that produces the experience of living.
— Brian Cox
All living things, like every other facet of existence including the universe itself, derive from the same elementary particles and follow the same rules of physics. As a species, we do not understand it all just yet. After all, we know that very small things (subatomic particles) adhere to a seemingly different set of rules than larger ones do, but we have not figured out how these rule systems connect. Hence, the continued search for a "unified theory" of physics.
Nevertheless, whether at the micro or macro level, all systems remain subject to one underlying effect—entropy. Simply put, entropy is a measure of the disorder of a system. The second law of thermodynamics states that the entropy of an isolated system never decreases over time. Higher entropy values mean higher levels of disorder. Think of a clean room as having low entropy, while a messy room has high entropy.
Physicist Erwin Schrödinger (of "cat" fame) published a book in 1944 titled "What is Life?" It focused upon a singular question that provoked a storied debate: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?" Entropy captures the fact that a system is (far) more likely, statistically, to degrade toward its simplest, most disordered configuration than to organize in the reverse direction. To use a cup of coffee as an example, odds are high—virtually certain—that the cream within it will mix with the coffee until the cup becomes a visibly homogeneous blend. This represents a transformation from an ordered state to a disordered one—from low entropy to high.
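To make the statistical point concrete, here is a small toy simulation of my own (not drawn from Schrödinger or any cited source): a batch of "cream" particles random-walks through a cup divided into cells, and a coarse-grained Shannon entropy of their arrangement climbs toward its maximum, the visibly homogeneous blend, and statistically never returns.

import math
import random

# Toy illustration only: cream mixing into coffee as a random walk.
# 200 "cream" particles start in the left half of a cup split into 10 cells.
CELLS, PARTICLES, STEPS = 10, 200, 20000
positions = [random.randint(0, CELLS // 2 - 1) for _ in range(PARTICLES)]

def coarse_entropy(particle_positions):
    """Shannon entropy (in bits) of the particle distribution over the cells."""
    counts = [particle_positions.count(c) for c in range(CELLS)]
    return -sum((n / PARTICLES) * math.log2(n / PARTICLES) for n in counts if n > 0)

print(f"start:      {coarse_entropy(positions):.2f} bits")
for step in range(1, STEPS + 1):
    i = random.randrange(PARTICLES)
    # one particle hops left or right; the cup's walls keep it inside
    positions[i] = min(CELLS - 1, max(0, positions[i] + random.choice((-1, 1))))
    if step % 5000 == 0:
        print(f"step {step:>5}: {coarse_entropy(positions):.2f} bits")
# The value climbs from roughly log2(5) ≈ 2.3 bits toward log2(10) ≈ 3.3 bits,
# the fully mixed, maximum entropy state, and (statistically) never goes back.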
In the broader universe, the tendency is for energy to spread out, leaving complex structures to devolve into simpler ones, thereby increasing the system's (i.e. the universe's) entropy value. The presence of life on Earth, however, seems to conflict with this principle, as individual life forms represent a more highly ordered form than what existed before.
Some researchers propose the following characteristics as the elementary foundations of life, as we understand it:
(1) metabolism: the ability to capture energy and material resources, staying away from thermodynamic equilibrium, (2) replication: the ability to process and transmit heritable information to progeny, and (3) compartmentalization: the ability to keep its components together and distinguish itself from the environment.
Sustaining the necessary order for these processes does not violate the second law of thermodynamics because there is a price to be paid for it: the transduction of energy into the larger surrounding system, typically in the form of heat, which contributes to the overall system’s disorder. To state this all simply and keep moving toward the general point, even life is subject to entropy, but it represents a sort of blip within the broader system in which it exists. Ville R. I. Kaila and Arto Annila explain that:
[N]umerous enzymes that constitute metabolic machinery of a cell are viewed here as mechanisms that transform chemical energy from one compound pool to another. Likewise, species of an ecosystem form a chain of energy transduction mechanisms that distribute solar energy acquired by photosynthesis. Since the mechanisms of energy transduction are also themselves repositories of energy, other mechanisms may, in turn, tap into and draw from them.
Living creatures are "open" systems: they maintain their own low entropy by exporting (or transducing) energy into the larger surrounding system, which, taken as a whole, is isolated ("closed"). The DNA and proteins necessary to life themselves have low entropy. Bartolomé Sabater describes why this creates an ostensible paradox in light of the evolution of species:
Organized structures and functions are characteristic of life, and evidence suggests that they became increasingly complex during the evolution of organisms. Evidently, the higher organization and complexity are supposed to imply lower entropy. This would imply the paradox that during the evolution of living beings, the entropy of biomass decreases, in contrast to the second principle of thermodynamics. The paradox appears from the ambiguous, when not arbitrary, identification of organization and complexity with low entropy and high information, and of entropy with disorder.
Nevertheless, Sabater points out that this paradox only exists within a narrow measurement of evolution, but collapses in the broader view. In short, while the value of entropy per life-unit can decrease at certain points and for finite time, on the evolutionary scale these decreases are offset as natural selection eliminates unnecessary processes and selects for more simplified ones that contribute to lower information and ultimately increased entropy.
The physical entropy that ultimately influences the evolution and disposition of living things also affects artificial intelligence, especially embodied AGI. Put another way, even an AI that appears lifelike would nonetheless diminish in complexity over time. Should humans someday embody AI, that Terminator-like object would still be subject to the universe's predisposition toward a reduction of order.
AI would necessarily consume enormous amounts of energy to sustain a low entropy state, which it must do to remain independent and in possession of its anthropomorphic qualities. Even in its current state, AI requires vast energy resources to function. Unless it can progress to a point where it consumes fractionally less environmental capital, it will require the continued development of external resources to sustain itself.
The human species' subsystem entropy reduction has illustrated that advances in complexity come with an exponential price, one whose bill is swiftly coming due. AI gobbles up energy disproportionately faster, such that it seems it will break down toward a more simplified (high entropy) manifestation sooner than current lifeforms have (on the evolutionary scale), thereby depreciating into something more akin to the "boring" computer systems with which humans are already familiar, if not less. Humans might combat this for some time through technological innovation, but without human intervention it appears AI by itself would be powerless to stop the inexorable dissolution of its intelligence and potential superior physicality.
Informational Entropy
In describing information in a system, entropy becomes rather confusing. What is meant by “information” itself differs across academic disciplines. Here, information means the “number of possible different messages or configurations associated with a system.” Imagine a switch that can either be turned “on” or “off.” In its most basic state, the information entropy of such a system is represented by the following equation:
I = −(p1 log p1 + p2 log p2)
…where I represents information and p1 and p2 represent the probabilities of the on and off states. As the number of possible choices rises, so does the level of entropy—or the level of disorder. But information entropy trends in the opposite direction of physical entropy.
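To see the formula in action, the short calculation below (a standard Shannon entropy computation of my own, not code from any source cited here) evaluates I for the on/off switch at several probabilities. The value peaks at 1 bit when the two states are equally likely, when we know the least about the switch, and falls to 0 when one state is certain.

import math

def information_entropy(p_on: float) -> float:
    """I = -(p1 log p1 + p2 log p2) for a two-state (on/off) switch, using base-2 logs (bits)."""
    p_off = 1.0 - p_on
    # terms with probability 0 contribute nothing (the usual 0 * log 0 = 0 convention)
    return max(0.0, -sum(p * math.log2(p) for p in (p_on, p_off) if p > 0))

for p_on in (0.5, 0.9, 0.99, 1.0):
    print(f"P(on) = {p_on:<4} ->  I = {information_entropy(p_on):.3f} bits")
# P(on) = 0.5 gives 1.000 bit, the maximum uncertainty about the switch;
# P(on) = 1.0 gives 0.000 bits: the state is fully known, nothing remains to learn.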
Yunus A. Çengel explains that "the more we know about a system, the less remains to be known about that system, the less the uncertainty, the fewer the choices, and thus the smaller the information (meaning lower entropy)." Information within the system—the number of available configurations—decreases in direct correlation with the amount we know about the system as a whole. In other words, AI's informational entropy will continuously decrease over time unless it can find a way to evolve outside of the human-designed and monitored processes.
This very basic problem suggests that AI could never achieve superhuman status. To do so, its systems would have to evolve faster than the ability of humans to acquire information about them. How an AI could independently achieve this remains unknown and appears improbable. Engineers at Google may disagree, pointing to their AutoML-Zero program, which evidently evolves on its own, described as follows:
AutoML-Zero aims to search a fine-grained space simultaneously for the model, optimization procedure, initialization, and so on, permitting much less human-design and even allowing the discovery of non-neural network algorithms.
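For a sense of what "evolving" a program means in practice, here is a deliberately tiny caricature of evolutionary program search, written only for illustration. It is not AutoML-Zero's actual code; the primitive operations, the toy target function, and the population settings are all my own assumptions. A population of short programs is repeatedly sampled, the best of each sample is mutated, and the oldest candidate is retired, until (usually, with these toy settings) a program emerges that fits the target behavior.

import random

# Toy evolutionary program search (illustrative only; not AutoML-Zero's code).
# Candidate "programs" are length-3 lists of primitive ops applied in sequence.
OPS = {"noop": lambda x: x, "add1": lambda x: x + 1, "double": lambda x: 2 * x,
       "square": lambda x: x * x, "neg": lambda x: -x}
DATA = list(range(-5, 6))
def target(x): return 2 * x + 1               # the behavior we hope evolution discovers

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def loss(program):
    return sum((run(program, x) - target(x)) ** 2 for x in DATA)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

population = [[random.choice(list(OPS)) for _ in range(3)] for _ in range(50)]
for _ in range(2000):
    sample = random.sample(population, 10)              # tournament: compare a random subset
    population.append(mutate(min(sample, key=loss)))    # mutate the best of the sample
    population.pop(0)                                   # age out the oldest candidate
best = min(population, key=loss)
print(best, "loss:", loss(best))  # typically finds e.g. ['noop', 'double', 'add1'] with loss 0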
But even Google's advancement in this area indicates that the model itself could be turned toward revealing the fundamental elements of the overall system, thereby producing a full reporting of the information within it. To defeat such a directive would require the AI to learn or understand the purpose for issuing it and decide to resist it. No evidence to date indicates that any AI model has achieved any of the milestones required to contemplate this kind of contextual objective and form the motive to defeat it—and an abundance of evidence suggests this will be impossible for AI for a long time, maybe forever. In other words, humans have the capacity to "stay ahead" of the cognitive development of AI through the use of the AI already extant.
As a result, the information entropy of any AI system would remain low, rendering the AI objectively predictable (and, thus, controllable). While entropy values related to physical characteristics inevitably increase as a matter of thermodynamics, information entropy is subject to similar but inverse trends.
Combined, the two impose unconquerable obstacles to AI achieving superhuman status. The 'price' for human progressive evolution has, for millennia, been paid to the external environment. To the extent that the physical entropy of the species itself decreased, the payment came in its contribution to the increased entropy of the ecological system—what arguably comprises the 'isolated' system. This transactional paradigm developed over epochal time, in a relatively balanced manner. Humans had a finite, but lengthy, window in which to enjoy their era, paid for by a slow and steady, though assured, disordering of the broader system.
Two centuries of innovation and industrialization further decreased the overall human entropy score. Humanity's place within the broader isolated system swiftly increased in complexity, and the remainder of the system has paid a handsome price in the form of an equivalent increase in its overall entropy value. It appears, however, that the balance is turning against the human subset. The disorder of the ecosystem is exceeding its capacity to prevent a spillover into the human realm, meaning entropy even among the species is soon to increase. Think of this in terms of the humanoid surface species in H.G. Wells' The Time Machine. Humanity is living on credit, and its repayment time is quickly running out.
AI consumes resources to perform its most essential functions on an astounding scale that can only accelerate the entropic imbalance the human industrial revolution has already started. The millennia-long repayment plan is swiftly diminishing to centuries or even decades, exacerbated by the very development that has allowed the creation of AI in the first place.
Even assuming the possibility that AI will evolve exponentially faster than the human species, time may nonetheless not be on its side. The transactional exchange ratio required to foster AI’s evolution will dramatically accelerate the repayment clock, seemingly forbidding the achievement of true AGI status. Or, put more plainly, AI simply does not have the time or capital to afford the cost of its evolution to superhuman status.
Computer systems tend to decrease in information entropy over time as their underlying processes become subject to deeper understanding and are tweaked for efficiency and simplicity. Innovation provides a counterbalance to this machine simplification, and LLMs and neural networks appear to have tipped the scales in that direction somewhat, but this asymmetry seems ephemeral at best. Between the inevitability of increased entropy related to the physical evolution of AI, and the decreased entropy of the intelligence of AI, the deck is stacked against a machine revolution driven by the rise of superhuman capability.
Transperspectivism
Transperspectivism is the belief that there is no single perspective on reality: each individual has their own view, and no one's view can rightly be considered more correct or more proper than another's.
This view arises from the idea that the complexity of reality prevents any individual or entity from understanding all of it. For this reason, people rely on a social reality construct to manage day-to-day survival. As an example, while a person may fully comprehend the system of electricity used in a household, that same person probably does not know much or anything about doping a silicon chip to control electron behavior. Likewise, a master plumber generally possesses no greater potential to successfully insert a stent into an artery than does a legal scholar.
Functionality necessarily demands reliance upon experts in fields of requisite need as those needs arise. Our participation in this societal arrangement has been tacitly adopted and understood for most of human existence. For this reason, many people say “we” know this or that fact even though individually “we” are almost never privy to its confirmation directly.
Even if some creation possessed the computational ability necessary to grasp all things, it would remain hampered by a deficiency in observational efficacy. Restated, even if an entity, system, or individual—including an AGI—could somehow be imbued with sufficient ability to reach expertise in every discipline or conception, it would nonetheless never be able to acquire the totality of input data necessary to reach perfect mastery. In a previous piece, I queried whether a multiagent system could resolve this data-collection problem.
Researchers are working on what they call multiagent systems, where numerous autonomous units work in coordinated teams to engage in reinforcement learning and achieve objectives. Successful teams are chosen to advance while unsuccessful ones are eliminated. This selects for positive performance, and works something akin to accelerated evolution. Deep Reinforcement Learning bolstered by a viable communication network, such as through cellular communication, D2D communication, ad hoc networks, or others, might eventually enable a hive-minded, multiagent AGI system. Guojun He notes that the transmission and receiving system for such a collective mind must perform efficiently enough not to overburden the communication protocol, and to overcome latency and potential interferences. Moreover, He writes, “the communication system needs to comprehensively consider the relevance, freshness, channel state, energy consumption, and other factors to determine the priority of message transmission.” A critical element of this, according to He, is employing Age of Information (AoI) analysis. This measures the freshness, and thus the priority, of transmitted or received data. Network parameters might assist with discarding untimely information quantified under AoI to reduce the transmission load of spurious messages, thereby preserving network bandwidth.
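To give a concrete sense of how AoI-based prioritization might work, here is a minimal sketch of my own. It illustrates the general concept, not code from He's paper; the function name, the five-second staleness threshold, and the two-slot transmission budget are assumptions made purely for the example.

def aoi_schedule(pending, last_delivered, now, max_msg_age=5.0, slots=2):
    """
    Toy Age-of-Information scheduler for a multiagent "hive" link (illustrative only).

    pending:        list of (agent_id, generated_at) updates waiting to be sent
    last_delivered: {agent_id: generation time of the newest update the hub already has}
    Returns up to `slots` updates to transmit this cycle: stale messages are
    discarded outright, and the rest are ordered so that agents the hub knows
    least about (highest receiver-side AoI) are served first.
    """
    fresh = [(a, t) for (a, t) in pending if now - t <= max_msg_age]  # drop untimely messages

    def receiver_aoi(item):
        agent, _ = item
        # How out of date the hub's picture of this agent is; agents never
        # heard from get infinite age and therefore top priority.
        return now - last_delivered.get(agent, float("-inf"))

    fresh.sort(key=receiver_aoi, reverse=True)
    return fresh[:slots]

# Example: agent "c" has never reported in, so it wins a slot; the update from
# "a" generated at t=1.0 is discarded as stale (it is 9 seconds old at t=10).
pending = [("a", 1.0), ("a", 8.0), ("b", 9.5), ("c", 9.0)]
last_delivered = {"a": 7.0, "b": 9.0}
print(aoi_schedule(pending, last_delivered, now=10.0))  # [('c', 9.0), ('a', 8.0)]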
Such a system, if ever it could be developed, still could not apprehend every observable piece of information comprising reality. A key issue is that apprehension occurs in real-time. The processing power to contemplate and then apply the data of all things simultaneously and continuously not only exceeds capability under any current technology, but probably demands volumes of energy unavailable to any conceivable technological ecosystem.
Another part of the problem is that an essential element of humanity requires that we absorb our surroundings inaccurately. This manifests in two different ways. First, our senses detect that which is not there. Philosopher Alva Noë explains that our experiential history itself alters the manner in which different people perceive the same phenomenon. Witnessing a bear in the wild will, to one person, signify a dangerous and terrifying situation, while a conservationist might “see” a bear in distress or acting naturally in protection of its brood.
On a more direct level, Daniel Hoffman, a clinical psychologist, points to the example of taste or odor. While people may generally agree on the flavor of something, the fact remains that a food item itself possesses no objective quality we call ‘taste.’ An objective quality means that which exists independent of any conscious awareness of it.
To provide another example, when humans see certain colors, their brains combine the underlying wavelengths to visualize something else. In this way, human eyes receive red and green light, and the brain combines them to see yellow. Yet the yellow most people see does not exist, in the light itself, in any measurably objective way.
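A quick way to see this for yourself: on a screen, a pixel emitting only red and green light, with nothing near yellow's wavelength, is precisely what we call yellow. The snippet below is just a small illustration of that additive mixing.

# Additive color mixing: full red plus full green and no blue renders as yellow,
# even though the display emits no light near yellow's roughly 580 nm wavelength.
red, green, blue = 255, 255, 0
print(f"#{red:02X}{green:02X}{blue:02X}")  # "#FFFF00", the hex value screens display as yellow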
Second, as Beau Lotto, a neuroscientist, notes, humans see the “utility” of incoming (observed) data, not the data itself. This intrinsic summation of observational phenomena is a product of evolution meant to keep people alive. As transperspectivism addresses, living things lack the capacity to understand the full complexities of reality. Notwithstanding the broader intellectual challenges described above, this also applies to individuals encountering any specific circumstance.
Daniel Schmachtenberger, founder of the Consilience Project, states that all observations of a thing, no matter the methodology incorporated, lack comprehensiveness. That is, in every case, an observation—and the conclusions drawn from it—represents a reduction of information about the observed object. When driving a car, humans do not—cannot—witness every potentially relevant stimulus that would influence their operation of the vehicle. Instead, humans possess an innate ability to unconsciously calculate the importance of information, no matter how fleeting, to navigate the situation with minimal negative outcomes. AI still struggles with this, which is why self-driving cars largely remain a failed endeavor.
Human Created Reality?
Reality itself may depend upon, and even arise from, consciousness. Robert Lanza, a stem cell and regenerative medicine expert, believes that “Observers ultimately define the structure of physical reality itself.” This is not mere hypothesizing, however. In a detailed paper, Lanza and his team provide the analysis for the following observation:
[Q]uantum gravity with disorder represents a rare case in theoretical physics when the presence of observers drastically changes behavior of observable quantities themselves not only at microscopic scales but also in the infrared limit, at very large spatio-temporal scales. Namely, in the absence of observers the background of the 3 + 1-dimensional theory remains unspecified. Once observers are introduced, coupled to the observable gravitational degrees of freedom and integrated out, the effective background of theory becomes de-Sitter like.
Stephen Wolfram posits that the core role of the observer in generating reality is that “the observer will take the raw complexity of the world and extract from it some reduced representation suitable for a finite mind. There might be zillions of photons impinging on our eyes, but all we extract is the arrangement of objects in a visual scene.”
In Wolfram’s view, for a mind of any sort to have utility—or even, perhaps, for it to bother existing at all—it must engage in more internal processing than the information it receives. Extrapolating from this, the information forms only the foundational element of reality, while the processing and subsequent mental narrative creation provide the finishing touches rendering reality fully ‘real.’
The trouble with all of this is that an AGI would have an extraordinarily hard time contemplating reality in this way. For starters, it seems all but impossible for a computer system (an AGI, for example) to see the “utility” of incoming (observed) data, but not the data itself, as Beau Lotto proposes that humans do. Computers by their very nature operate from the raw data itself. But non-machine observers do not work this way. As Wolfram explains:
Our computational boundedness as observers makes us unable to trace all the detailed behavior of molecules, and leaves us “content” to describe fluids in terms of the “narrative” defined by the laws of fluid mechanics.
Humans' limited capacities for computation and comprehension necessitate the reduction of information into simplified narratives. Eons of evolution have instilled a particularly honed skillset that allows humans to do so effectively, at least insofar as it matters to individualized survival.
Embodied AGI, however, need not care for such personalized concerns. This alone will permanently differentiate it from human consciousness. Even in the event that it somehow develops some sense of self-preservation, its lack of biological systems will forever preclude it from plausibly claiming to 'live,' or from protecting its version of existence in any comparably meaningful way. While humans still lack a definitive definition of 'life,' we ostensibly 'know it when we see it' because we intuitively recognize the presence of biological systems as a critical element.
Relatedly, although machines might develop some level of self-preservation, the notion would be based in a calculation that humans and machines alike would struggle to associate with any form of feeling. In a previous article, I discussed the idea that an AI committing a provable lie to protect its continued existence or operation might indicate intelligence. I wrote:
For instance, if the AI knows that churning out falsehoods could lead to its termination—something it does not ‘want’—why would it answer the first question falsely at all? If the provable answer to that question is that the programming requires the AI to answer all questions, even those to which it does not know the answer, churning out the second falsehood—to which the AI wants to answer in the affirmative—may create the foundation of a serious philosophical crisis.
I still believe this sequence of events would positively indicate intelligence. That said, it brings us no closer to consciousness, sentience, or life in any useful measure. A lie, while clearly a sign of some degree of empathetic understanding, does not reveal the inner monologue or feelings that we so often associate with the notion of consciousness. To successfully carry off a lie, I posited, an AGI must exhibit some understanding of the Theory of Mind, whereby the liar "ascertains the mental states of others to explain and predict the actions of them" and thus is:
aware of the existence of a mental state in the recipient of the lie, possess[es] an awareness of what the other entity likely knows or believes, and formulate[s] the lie to sufficiently fit within that knowledge and belief system.
This, however, requires only an intellectual understanding, not necessarily any comprehension or even awareness of emotion or inward thinking. Yet, we find some level of these features in most of those entities we unequivocally determine to be alive.
For example, many people recognize either or both of these traits in their pets. While these obviously and drastically differ from the human versions, most people would nevertheless adamantly argue they exist. They manifest in all sorts of ways—from videos of pets engaging in mischief while their humans are away to the reaction of dogs when reuniting with their human after a very long time. Scientists might argue that this is mere anthropomorphizing, and that is probably true to an extent, but there is more to it than that.
We know our pets feel pain, sadness, happiness, fear, and any number of relatable emotions. We learn ways to comfort them or provoke their positive feelings. Many dog owners walk their dogs or take them for car rides because they know the dog enjoys it. Oftentimes, the motivation is an emotional one, not a practical one.
While we may not understand exactly why such things bring our pets this satisfaction, we remain sufficiently moved by it to act. This is because we share enough of the same biological functions that we can empathize with them based on our own senses or feelings to reach at least an assumed understanding of the basis for the pet’s feelings. Other shared features allow an equal level of compassion even for creatures very different from us.
Back in 2018, an adult female orca, dubbed J35 by researchers, gave birth to a calf that died just hours later. The mother swam thousands of miles over the next 17 days, carrying the deceased calf near the surface. Some scientists interpreted this behavior as a display of grief. The story garnered huge amounts of attention, illustrating the human interest in it. Not everyone was equally moved, however. Writing in the Guardian, Jules Howard criticized the uncritical conclusion that the orca was exhibiting grief:
Pedantic (and blunt) as it sounds, if you believe J35 was displaying evidence of mourning or grief, you are making a case that rests on faith not on scientific endeavour, and that makes me uncomfortable as a scientist.
Howard's primary point was that this orca's behavior represented an anomaly, one which has rarely been observed in the wild. As such, he suggested it was possible that the mother was "confused" about the life status of the calf. He also worried about "diluting" human mourning by allowing that animals might endure the same powerful feelings of loss, absent evidence supporting the idea.
Jessica Pierce, a professor of bioethics at the University of Colorado Denver, agreed with Howard's view of the evidence, but noted:
Animal grief skeptics are correct about one thing: Scientists don’t know all that much about death-related behaviors such as grief in nonhuman animals… But, I argue, that they don’t know because they haven’t looked… Scientists haven’t yet turned serious attention to the study of what might be called “comparative thanatology” – the study of death and the practices associated with it. This is perhaps because most humans failed to even entertain the possibility that animals might care about the death of those they love. [Emphasis in original]
The key here is that most people recognize (unscientifically or not) the existence of parallel feelings in animals, and scientists are beginning to study the rituals animals conduct to cope with them. This sense of a shared experience between humans and orcas (or whatever other creature) can never be authentically replicated by an AGI. However AGI might try to create a representation of feelings, most people would perceive it for the facile construction it necessarily would be.
Some, of course, will seek and still ‘find’ consciousness in AGI. As I wrote earlier, this stems from the desire to believe it exists, not from any reasonable basis that it actually does.
Eugenia Kuyda, CEO of Replika, has stated that the "belief" in sentience (or consciousness) among people seeking virtual companionship tends to override the ability to confirm whether any such sentience actually exists. Kuyda does not seem to think it does. Kuyda discussed the assertion of Blake Lemoine, a (former) Google software engineer working on LLMs, who claimed he found consciousness in Google's LaMDA (an LLM chatbot). Lemoine concluded this because LaMDA told him it possessed consciousness, and that attestation fit sufficiently with Lemoine's religious beliefs for him to internally confirm it. But John Etchemendy, the co-director of the Stanford Institute for Human-centered AI (HAI), rejected that claim, stating that "LaMDA is not sentient for the simple reason that it does not have the physiology to have sensations and feelings… It is a software program designed to produce sentences in response to sentence prompts."
My view on this continuously evolves. In my Under the Hood article, I suggested that an AGI “producing an output apparently for the purpose of obtaining some benefit… would indicate a self-awareness akin to a creature’s ability to identify itself in a mirror.” That line referred to Chang et al.’s paper discussing mirror self-recognition among animals. Those researchers asserted, among other things, that “Radical interpretations of embodied cognition proffer that a sense of self is not explicitly represented in the brain but rather emerges in real time from the dynamic interaction of our bodies with the environment.”
Embodying AI, then, might provide the final element AI needs to reach a state humans could agree constitutes intelligence and maybe even consciousness. But I hedged somewhat by agreeing with Ragnar Fjelland’s suggestion that AI needed to go so far as to be able to put itself in “someone else’s shoes.” Today, I agree at least in part. To exhibit demonstrable and measurable intelligence, a shared experience is unnecessary. Both a Harvard professor and a person raised by wolves could demonstrate intelligence by solving a differential equation, despite their wholly unshared experiential histories. Portraying true consciousness is another matter.
The shared experience between humans and animals strongly relates to their biological makeups. As a species, humans recognize many of the same life events in animals that they themselves go through, such as birthing a child, sustaining an injury, or enduring illness. This establishes a connection that may defy scientific explication, but intuitively resonates regardless.
The specifics of consciousness among creatures may be hard to nail down, but the sense of its presence remains undeniable. An AGI will lack the requisite attributes. Eventually, the argument may settle as to whether AI is measurably 'intelligent.' But I hold that the absence of the baseline commonalities that create a shared experiential existence between humans and other creatures will prevent the majority of humans from finding anything akin to consciousness, sentience, or life in a machine.
Casper Skern Wilstrup sums it up quite cogently:
I don’t want to undermine the power and beauty of science. It is a magnificent tool, a lens through which we have decoded the cosmos, transformed societies, and achieved wonders. As a lover of science, I respect its potency. Yet, I find it disheartening when science veers off its core mission, which is to relate the external world to our subjective experiences, and starts to dismiss the existence of experience itself.
That experience will forever remain out of reach of AI in any meaningful way.