The Philosophy of Artificial Consciousness


Artificial consciousness has been in the minds of science fiction writers for decades, but realistically, how can one even determine the difference between an artificial construct which is merely ‘intelligent’ and one which is actually consciously aware of its own existence? This article will explore the basics of artificial consciousness, so read on to find out more.

The Characteristics of Life

In the scientific sense of the word, we know that computers are not ‘alive’, as they lack many organic characteristics: cellular organisation, homeostasis, and genetically encoded components. This distinguishes them from living organisms such as plants, which are programmed by DNA rather than by binary code.

However, that is largely where the differences end, as computers can exhibit many other characteristics associated with life: the ability to reproduce, if programmed to do so; growth and development, through learning software; energy use, in their direct consumption of electricity, including via solar power (loosely analogous to photosynthesis); and the ability to adapt and respond to changes in their environment.

Similarly, the work of a digital development agency these days can resemble that of genetic scientists in a sense: both examine and modify code to make their subjects grow and behave in ways advantageous to survival within an environment.

The Turing Test

In 1950, Alan Turing proposed a thought experiment he called ‘The Imitation Game’, now commonly known as the Turing Test. The game comprises three players. Player A is male and player B is female; player C, who can see neither of the other two players, must, through a series of questions, attempt to determine which is male and which is female. Player A attempts to deceive player C, while player B attempts to help player C. Finally, Turing poses the question: what if a computer played player A?

The rationale is as follows: if player A (now the computer) can convince player C that it is, in fact, player B, while player B fails to do so, then player C has as much reason to conclude that the computer has a conscious self as that player B does.
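The structure of the game described above can be made concrete with a short sketch. This is purely illustrative, not Turing's own formulation: the function names, the trivial ‘players’, and the judge are all hypothetical stand-ins for what would, in a real test, be a human interrogator conversing with a machine and a person.

```python
def imitation_game(respond_a, respond_b, judge, questions):
    """Run one round of the imitation game.

    respond_a and respond_b are callables mapping a question to an
    answer: player A (the machine) tries to pass as player B, while
    player B (the human) answers to help the interrogator.
    judge maps the full transcript to 'A' or 'B', naming which
    player it believes is the human.
    Returns True if the judge is fooled, i.e. it picks player A.
    """
    transcript = [(q, respond_a(q), respond_b(q)) for q in questions]
    return judge(transcript) == 'A'


# Hypothetical players: both give identical answers, so the judge
# has no evidence to distinguish them -- mirroring the article's point.
machine = lambda q: "I enjoy a walk in the rain."
human = lambda q: "I enjoy a walk in the rain."

# A judge that happens to pick A is 'fooled'; one that picks B is not.
fooled = imitation_game(machine, human, lambda t: 'A', ["What do you enjoy?"])
```

The point of the sketch is that the judge's verdict rests entirely on the transcript: if the answers are indistinguishable, the interrogator has no principled basis for its choice, which is precisely the situation the next paragraph describes.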

Harkening back to Descartes, who famously said ‘Cogito ergo sum’ (‘I think, therefore I am’): we can, a priori (on pain of self-contradiction), safely assume our own consciousness, but not the consciousness of others. Where computers are concerned, then, we have no more reason to assume the woman’s consciousness than the computer’s, given the data we have.

The Ethical Implications

One might suppose that the ethical implications of creating something which can actively experience the world it exists in are similar to the responsibility one has as a parent, as it ultimately amounts to the same thing.

As parents, we are responsible for nurturing our creations, protecting them, and allowing them to grow, learn and engage with the world around them as much as possible, while maintaining them and ensuring their health and wellbeing. Could such a relationship exist between a human being and a machine? One supposes that it no doubt could, as could an unethical relationship based on exploitation, negligence, or outright abuse, just as, unfortunately, sometimes happens to human children. The decision to create such a being, therefore, is not one that should ever be made lightly.
