The explosive popularity of chatbots and generative AI models has introduced the concept of artificial general intelligence (AGI) to society faster than ever before. Many experts and enthusiasts now believe that AGI—software capable of performing any intellectual task that a human can—is not only possible but imminent: enough so to move markets, scramble governments, and rekindle interest in the technological singularity and other transhumanist ideas. Despite growing cracks, from the diminishing returns of scaling laws to unclear profitability and dissenting voices, the hype is set to continue. Between the sudden, real advances in AI capabilities heralded by transformer-based Large Language Models, brazen predictions from stakeholders, and fearmongering, notably by those who for decades taught and inspired many of today’s AI researchers, 2025 promises to be another year of surprises that will test the limits of generative AI. Whether the data-hungry imitators that constitute the latest generation of generative models are sufficient for AGI remains a hotly debated question.
Yet, since it’s the holiday season and we’re all recovering from another eventful year, let’s table that question for another post and instead close off 2024 with a lighthearted discussion of the coming (fictional) “technological immortality” envisioned by transhumanists.
The rapid advancement of artificial intelligence (AI) has rekindled interest in transhumanist ideas, notably the concept of silicon-based immortality through mind uploading or brain chips. These technologies promise a form of digital immortality, allowing humans to transcend our biological limitations. However, these approaches may be fraught with peril, potentially resulting in philosophical zombies—entities that mimic human behaviour without possessing genuine consciousness—or even brain death: perils that proponents carelessly gloss over when engaging in such wishful thinking. While amusing, this and other equally far-fetched ideas appear to be gaining mainstream traction, primarily among younger generations.
From Singularity to Immortality
Let’s posit that the emergence of AGI is a certainty, a simple matter of a few years. And suppose AGI will indeed bring the technological singularity that the likes of Ray Kurzweil and other transhumanist thinkers envision. What then? Between mind uploading and cybernetic brain extensions, could humans effectively become immortal, or at least prevent involuntary death? A kind of heaven on earth through technology. That is certainly the quasi-religious sentiment of (many) transhumanists and fellow enthusiasts now spectating an apparent exponential growth in AI performance that no expert foresaw.
Mind uploading involves transferring one's entire personality, memory, and consciousness to a computer or digital platform, creating a digital replica of the mind. Brain extensions, that is, brain chips, brain-computer interfaces, and neural implants like those manufactured by Neuralink, aim to extend human cognitive abilities by bridging carbon and silicon. Transhumanists envision these technologies as pathways to immortality, freeing humans from the constraints of biology.
The allure of these technologies is undeniable. A technological utopia of abundance that includes immortality would have people flocking to free themselves from biological decay. But it's essential to address the fundamental questions surrounding consciousness and personal identity. Despite the tremendous advances of the sciences, between where humanity is today and what must happen for said utopia to exist, there’s a lot that requires suspending our disbelief. There are at least two crucial questions anyone entertaining the possibility of technological immortality ought to consider: 1) How sure are you that your conscious self will persist rather than be converted into a philosophical zombie? 2) How strong do you believe your mind is?
Persistence of the Conscious Self
After all, fundamental questions inevitably arise concerning the nature of consciousness and personal identity when contemplating the extension of life through digital means. Unless the intent is merely to create a digital parrot of you, achieving genuine silicon-backed life extension necessitates solving the hard problem of consciousness.
David Chalmers’s "hard problem of consciousness" posits that explaining how subjective experience arises from physical processes is extraordinarily difficult. Without a comprehensive grasp of consciousness, ensuring that a digital clone possesses authentic awareness remains elusive. Many theories attempt to explain consciousness, ranging from the timeless concept of a soul that transcends the physical world to more scientific perspectives, such as an emergent property arising from the persistent flow of neural activity, possibly through the execution of a kind of "Consciousness Operator", as argued by Joscha Bach. However, how can we test for genuine consciousness in a digital self when contemporary theories and measures of consciousness fail to generalize? There's a significant risk that "uploaded" entities might merely simulate conscious behaviour without actually experiencing anything, thus becoming philosophical zombies—entities that behave as if they are conscious but lack any internal subjective experience. This notion stems from philosophical debates about what, in practice, distinguishes a conscious entity from a non-conscious one from the perspective of an external observer.
Moreover, the continuity of self over the physical substrate is paramount for personal identity, raising the question of whether the digital entity is indeed the same person. If, like me, you subscribe to the view that the conscious self requires physical continuity to persist, then many mind-uploading approaches fall short (there goes teleportation too...). Without physical continuity, the promised personal immortality may be illusory, with any form of existential discontinuity resulting, in the best-case scenario, in an independent clone of yourself. To circumvent this issue, silicon-based brain extensions might become indispensable for mind uploading, particularly to facilitate a gradual transfer of consciousness from the organic brain to the silicon substrate.
On the other hand, neural implants such as the ones Neuralink aims to commercialise, and medical brain-computer interfaces, are fraught with challenges. In today's world, one-way communication is already a reality: Neuralink's implants and non-invasive brain-computer interfaces enable paralyzed people to send commands from their brains to external devices, controlling computers or robotic limbs using only their thoughts. Similarly, in people suffering from conditions like Parkinson's disease or epileptic seizures, brain implants modulate neural signals to alleviate symptoms. However, these devices do not yet facilitate two-way communication; that is, they provide no feedback mechanisms that allow the user to perceive or interpret the data being processed by the device. Two-way communication is an entirely different realm, and its development raises significant concerns. Such technology could disrupt one's mind in ways that are difficult to predict or control: if the implant were to malfunction, it could alter thoughts, emotions, or even identity in profound and irreversible ways. Moreover, technical corruption within these systems could be hard to detect, as subtle glitches might manifest as changes in behaviour or cognition that are attributed to natural causes, making the issue challenging to rectify.
No Weak Minds Allowed
Assuming the hard problem of consciousness is cracked, and these technologies can safely extend or transfer your mind into the digital realm, escaping the limitations of the biological realm in practice goes hand in hand with suffering the woes of the digital realm. Energy shortages and radiation will remain life-threatening risks there too. Property ownership rights and energy bills won’t suddenly disappear; instead, expect competition for resources to intensify, now happening at the speed of electrons and photons. Hacking, too, will take on a whole new dimension, posing the same existential threat as the technical corruption introduced earlier. Here’s hoping you invest in solar panels, a supercomputer, and a good antivirus beforehand. Good luck!
Additionally, another facet of intentional corruption will be mental takeover, where another uploaded mind could confine or completely erase your own. Many works of science fiction portray this risk through “hive minds”, in which individual minds are subsumed into a larger collective, losing their personal agency and identity. Such scenarios highlight the potential for powerful digital entities to exert control over others, whether through advanced hacking techniques or superior computational capabilities. Furthermore, if theories like the Consciousness Operator are true, artificial intelligence entities could come to occupy these substrates. Extending this speculative thought, could AI entities then transfer to carbon-based substrates?
Who knows what the future holds. Nevertheless, we don’t need to believe in sensationalistic theories to view mental disruption or takeover as concerning risks. After all, in biology, the dominance of a single mind per substrate is the normal state: each of us humans has a single experiencing mind residing in our brain. In truth, that’s not the only possibility. People suffering from “dissociative identity disorder”, a rare but real condition, have multiple personalities co-existing in their brain, battling for bodily control. Whether this is the consequence of faulty neural connectivity that causes disjoint brain subregions to co-activate with the conscious self, or of entirely distinct conscious entities, is unfortunately unclear.
The Next Stage of the Anthropocene Era?
Whilst far from an immediate problem, I would caution transhumanists to revisit their aspirational plans for silicon-based immortality. Beyond influencing science fiction and utopian visions, the fictional nature of these concepts nevertheless has real implications in shaping the aspirations and motivations of the younger generations of entrepreneurs, engineers, and scientists who drive technological progress. Instead of prematurely discarding the organic Homo genus in favour of a digital offspring, perhaps we should first look inward to engineer the next evolutionary stage of the Anthropocene lineage.
Focusing on genetically engineering human evolution to extend life spans or improve biological functions might offer safer paths toward enhancing human capabilities without risking the loss of personal identity. If you watched the inspiring lectures of David Baker and his fellow Nobel laureates in Chemistry during the award ceremony last December, this actually seems more grounded than awaiting the technological Singularity. Pursuing advancements that enhance human life within our existing biological frameworks might provide more reliable next steps toward a better future for Homo sapiens, leading to longer and healthier lives for billions rather than just a few with access to supercomputers.
On that note, happy 2025 everyone!