189. THE GHOST IN THE MACHINE - Still Haunting AI
I love music, and I truly believe that listening to just one song can transform your mood, for better or for worse. Then there is the magic of live music. Attending a festival where your favourite band is performing is an experience like no other. Surrounded by thousands of fans, everyone connected by a shared love of the music, it feels almost surreal. In those moments, the power of the music creates an incredible atmosphere, making it the closest I will probably ever get to a corporate ‘religious’ experience. Yet, whenever I try to describe that feeling to a friend, I find it almost ineffable: words fall short of capturing the depth of the experience. This is the notion of ‘qualia’: humans have subjective experiences, and stimuli such as sound waves are not just vibrations making waves in the fluid of the cochlea, producing signals that are sent to the brain; they also give rise to the subjective ‘experience’ of listening to music.
This notion has troubled me for years. I might be looking at the same green tree as the person next to me, yet their perception could differ slightly from mine. They may see the world through a slight ‘filter’, and though we are looking at the same tree, they could be having a completely different experience to me. The most infuriating thing is that I cannot know, at present, whether our experiences differ, nor how or why we have this subjective nature of experience.
This is known as the ‘hard problem of consciousness,’ a term coined by David Chalmers; it is ‘hard’ because it has yet to be solved. The ‘easy problems of consciousness’ have largely been explained by science, since they concern the functions and mental states associated with consciousness. Seeing the colour green, for instance, is explained by light waves hitting the rod and cone cells in our retinas, which send electrical signals to the brain. But the ‘hard problem,’ as Chalmers puts it – “why is the performance of these functions accompanied by experience?” – remains unanswered.
Now, my dad is one of those classic tech enthusiasts who cannot get enough of AI, idolises Elon Musk and is always telling me how one day “robots are going to take over.” He is extremely sceptical about the value of studying philosophy and believes that the future lies solely in science and AI. He believes that the use of robots and artificial intelligence will rise exponentially and will easily replace all our jobs in the near future (bar those who code and design this technology). I am determined to prove him wrong. Contrary to his belief that philosophy is a useless, dying field, I see moral philosophers and ethicists as essential to the development of AI: they will be crucial to programming artificial morals and ethics. It is the job of the philosophers to decide whether to code an absolute moral framework into AI, or to incorporate all the different ethical theories in proportion to the number of people who believe in them (a rough sketch of what that second option might look like follows below). I also believe that humans will always remain superior to robots, as fully replicating a human, complete with consciousness and qualia, in artificial form is extremely unlikely, meaning humans will keep their jobs and not be completely replaced by AI.
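Purely as an illustration, here is a minimal Python sketch of what that second option might look like. Every theory name, weight and verdict below is a hypothetical placeholder, not a real moral-reasoning system or anything any philosopher has actually proposed:

```python
# A toy sketch of a "weighted ethical theories" decision rule.
# All theories, weights and verdicts are hypothetical placeholders.

# Hypothetical share of people subscribing to each theory (sums to 1).
THEORY_WEIGHTS = {
    "utilitarianism": 0.40,
    "deontology": 0.35,
    "virtue_ethics": 0.25,
}

# Hypothetical verdicts: each theory scores an action from -1 (forbidden)
# to +1 (required), with 0 meaning morally neutral.
def judge(theory: str, action: str) -> float:
    verdicts = {
        ("utilitarianism", "divert_trolley"): 0.8,   # saves more lives
        ("deontology", "divert_trolley"): -0.6,      # treats a person as a means
        ("virtue_ethics", "divert_trolley"): 0.2,    # depends on character
    }
    return verdicts.get((theory, action), 0.0)

def weighted_verdict(action: str) -> float:
    """Aggregate each theory's verdict, weighted by its share of believers."""
    return sum(w * judge(t, action) for t, w in THEORY_WEIGHTS.items())

if __name__ == "__main__":
    score = weighted_verdict("divert_trolley")
    print(f"Weighted verdict: {score:+.2f}")  # > 0 suggests permissible on balance
```

Even this toy version makes the philosophers’ role obvious: someone has to choose the theories, the weights and the verdicts, and each of those choices is itself a philosophical commitment.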
I personally disagree with reductive materialists who believe that we are simply a bunch of atoms, carefully joined together to produce the species that we are today, for then the value of a human life could be said to be around £1224.72 (the price of all the atoms that make up a human). The whole is more than the sum of its parts: a birthday cake is more than just flour, eggs and milk – it is a birthday cake (Lennox’s analogy). There seems to be something more than physical processes making up us humans. To press the point further, take Frank Jackson’s ‘knowledge argument’. Imagine a colour scientist, Mary, who has been locked in a black-and-white room all her life. She knows all the science and physical processes (the wavelengths, the rod and cone cells, and so on) that cause us to see colour, yet the first time she steps out of the room and sees a red tomato, she learns a new truth: what it is like to actually experience the colour red. Here it seems that consciousness cannot be explained in purely physical terms, as then she would learn nothing new upon leaving the room. Since we have not bridged this explanatory gap, I believe that we cannot yet replicate a human and their conscious experience artificially.
In my view, at the present time, AI and robots cannot have qualia or consciousness (though many materialists would disagree); thus the ‘ghost in the machine’ – qualia, or consciousness – will continue to metaphorically ‘haunt’ AI, which will never have it. Machines seem very different to humans because they react to stimuli in a very rudimentary way. When a camera takes a photo, it may capture the scene and adjust its aperture, but it does not really see the world around it. The reverse sensor of a car may produce a warning noise as it approaches a wall, but it does not really feel alarmed. Machines lack the conscious experience that accompanies a human’s actions.

However, could a conscious AI system be possible in the future? Some computer programs have passed the Turing Test and fooled human judges, and others have beaten humans at games (such as IBM’s Deep Blue computer against world chess champion Garry Kasparov). But are they really conscious and sentient, and will this ever be possible? Optimists like the inventor Ray Kurzweil believe it is simply a matter of time before the processing power of computers is so enormous, and our programs so sophisticated, that machines will have an intelligence and consciousness vastly superior to our own. This is sometimes called the singularity – a point in history where machines become more intelligent than humans. Such a possibility brings to mind the Frankenstein story: will we create a monster more powerful than we are, capable of destroying us? The philosopher Nick Bostrom of Oxford University, among many others, warns of the dangers of artificial intelligence. Some even say it will be the last invention we make, because once machines become intelligent, we become obsolete as a species.

I personally do not believe that we can replicate our conscious experience until we understand it ourselves, though, to quote Emerson Pugh, “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” This, along with my view that consciousness may be something non-physical (as mentioned above), makes it reasonable for me to conclude that we cannot replicate our consciousness just yet, though it is important to keep in mind the contrasting view of the optimists who believe a conscious, sentient machine will one day be possible.
With these views established, it is useful to consider the implications of a conscious versus a non-conscious AI system. With the rise of AI, many (including my dad) have predicted that numerous jobs will be rendered obsolete, since robots can complete them much faster and more accurately than we can. We have already seen this change occurring as factory workers have been replaced by machines, and the same may soon be true of paralegals and legal researchers, since AI models will be able to sift through masses of legal documents, saving law firms time and money. Now, consider the implications if these AI models became genuinely sentient and conscious. To complete our tasks effectively, they would arguably need some degree of consciousness to understand what we are talking about. If we told a robot to “take out the trash,” it would need to be somewhat conscious and aware to understand that the ‘trash’ is whatever is in the bin, and that to ‘take it out’ means not simply placing the bin outside but decanting it into the larger bins ready for collection. Thus, if these AI machines were conscious, we would effectively be using them as slaves: they would have no choice in the matter and would not be paid. This raises important questions about the morality of using AI. Slavery is widely held to be immoral, so is it also immoral to subject a robot that can feel pain and have subjective experiences to unpaid work?
So, AI models would have to be conscious, and somewhat sentient, to understand us and be fully useful, but then we could not use them to their full potential, as that would be an immoral act of slavery. This creates a paradox: if AI machines are conscious, we cannot utilise them to their full potential because doing so would be immoral; if they are not conscious, we still cannot utilise them to their full potential, because they would not fully understand us or our commands, and so would not complete the job to a better standard than a human.
Ultimately, I believe that until we bridge the explanatory gap and truly understand the ‘hard problem of consciousness,’ we do not have to worry about machines becoming intelligent and conscious. This diminishes the concerns about the implications of a conscious AI system – such as robot rights, slavery, and the danger of a takeover. I would even go one step further and argue that, given our current understanding of the brain, it is plausible to conclude that consciousness and qualia could be non-physical. Since AI and robots are purely physical entities, consciousness cannot be replicated in them, which means we need not be concerned about the potential issues surrounding AI rights or the fear of machines taking over our lives. The ‘ghost in the machine’ continues to haunt AI while keeping us, and our jobs, safe.
Author: Ayushi Soni (she/her)
Ayushi is a philosophy student interested in the philosophy of mind, philosophy of mathematics and the meaning of life. Feel free to email her (ayushi.school22@gmail.com) if you are interested in having a philosophical debate or conversation!