Raja Kanuri, PhD scholar, Vrije Universiteit Amsterdam
The thought of AI being sentient can be described as both scary and thrilling at the same time. While this idea bears witness to the great scope of modern science, it simultaneously warns humanity of the emergence of a new intelligence that may override human civilizations. The common person, a mere spectator of this intelligence as of many other technological progressions, does not seem to have a clue what the future holds. Thus, the question of whether an AI can be sentient is a relevant subject for philosophical reflection, and the arguments and projections seem equally strong on both sides.
Derived from the noun ‘sentience’, sentient means to be an abode of consciousness or sensation. Humans, for example, are called sentient beings because they can host, demonstrate, and manage senses and consciousness. The word can be compared to the Sanskrit ‘jīva’, which can be translated as living being or entity. Although the six Vedic philosophies position the relationship between the jīva (individual being) and the paramātmā (supreme soul) differently, they agree on the structure of the jīva. The jīva as an entity consists of the various organs of perception and action, the subtle essences, the energies, and the mind, which is the locus of all the others. Āyurveda, which evolved out of the sāṃkhya philosophy, explains through its doṣa and guṇa theories how the various elements manifest in the experiences individuals have at the physical, psychological, and spiritual levels. Overall, the Vedic perspective places the jīva in relation to the cosmos, with individuals holding a central role within it, making them sentient beings.
Artificial Intelligence, on the other hand, is a man-made form of machine intelligence. The words ‘Artificial’ and ‘Intelligence’ have themselves stirred up debate on whether AI is the correct term for this technology. Given its origin in the human mind and the application of various sciences, AI can be argued to be man-made rather than nature-born. Having originated in wartime and passed through many phases of development to become what it is today, AI is a conglomeration of various disciplines. Owing to its disruptive nature, this machine intelligence has reshaped economies, industries, and the lives of ordinary people. Because of the nature of its origins, AI also carries the criticism of thinking like a corporation rather than a human. AI can be translated into Sanskrit as kṛtrima-medhā and attributed to only one kind of intelligence, in contrast to the various levels of intelligence humans possess.
‘Artificial Intelligence’ as a type of intelligence is an uncountable noun, whereas ‘an’ AI refers to a specific system that demonstrates artificial intelligence. As many observers note, the kind of intelligence we see today is only narrow AI, while general AI remains far off in the future. Examples of such narrow AI in our everyday lives, also known as machine learning solutions, include Google Maps, Facebook, Tinder, and Netflix.
Bringing both definitions together, and applying the neti-neti (‘not this, not this’) or falsification approach, the following are potential arguments for answering the question, “Can an AI be sentient?”
- Yes, an AI can be sentient. – This argument holds water only if the AI demonstrates the same senses as presented by the texts: it would share a relationship with the paramātmā and possess the different senses that come together to produce various kinds of intelligence. As Sadhguru says, “The fundamental difference between a human being and a machine is perception. Perception is something a machine will never possess.” If perception is indeed beyond a machine’s reach, this condition fails, and an AI cannot be sentient.
- No, an AI cannot be sentient at all. – This argument could be accepted if the systems demonstrated no senses whatsoever. But the demonstration of feelings and emotions by certain chatbots, robots, and applications negates it. Thus, however superficially, an AI can be trained to demonstrate emotions to a certain extent, indicating some level of sentience.
- An AI could be partly sentient. – This argument seems rationally acceptable in light of the argument above. If the AI in question is a trained robot employed in health care, demonstrating and responding to emotions is a critical requirement. Yet the sentience so obtained is not self-born; it can be called transferred partial sentience.
- Some AI could be partly sentient, while others cannot. – This argument, too, can be read as an extension of the second argument. While narrow AI runs through many applications, such as credit decisions in banking, cancer diagnosis in health care, and matchmaking on Tinder, not all of them involve ‘senses’ or ‘consciousness’. Where machine learning solutions are applied to fix a problem or find a solution for practical purposes, the technology can be considered merely a tool. Where the technology takes on the role of interacting with sentient beings through communication, as with chatbots and dating applications, the machines’ behavior resembles sentience.
Another important aspect of sentience is free will: the ability to initiate, participate in, and conclude decisions and actions while being in an emotional state. A sentient being also demonstrates the three guṇas, the subtle essences, and the four-fold mind, along with the organs of action and perception. While these qualities are absent in an AI system, modern advancements in machine learning – the different kinds of training, the machines’ self-learning abilities, and the black-box challenge – make sentience in machines at least conceivable. One must remember that such a sentient being could only be developed by humans, and that the origin of such artificial sentience would not be nature-born. It is equally vital to ask whether the development of an entity of such intelligence is a ‘need’ or a ‘want’ of humanity.
Overall, the ongoing development of AI is reminiscent of John Hammond in Jurassic Park, who creates the park mesmerized by the illusion of power. The chaotician Ian Malcolm argues that life cannot be contained by power, and the paleobotanist Ellie warns Hammond that the ‘illusion of control’ is indeed an illusion, by which time it is almost too late. The movie ends well, with the main characters saved – but life is not a movie, and everyone is a main character in their own life. As learned societies, it is imperative that we reflect on and weigh whether the creation and use of such intelligence is needed or merely wanted. Either way, how can such intelligence be carefully designed and regulated before the dinosaurs make their way into civilizations built over centuries? Who is John Hammond? Who are the children? Who is the blood-sucking lawyer who proposed merchandise and a coupon day? Who is Alan Grant? Which AI is the one in “an” AI?