How is Intelligence Defined in AI?

What constitutes artificial intelligence? While this is by no means an all-encompassing list, here are seven characteristics that a program must, by definition, possess in order to be considered sentient.

The first and most basic characteristic is memory.

In order to be considered intelligent, a program must be able to receive and store information. A sentient program must also be able to recall those memories whenever it wishes. One caveat is that this memory need not be perfect. A sentient program can forget things that it experiences and still be considered intelligent. However, the program must have a store of information that it can draw on to support its other abilities.
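As a rough illustration (a hypothetical sketch, not a model of any real AI system), imperfect memory of this kind might look like a store that can save and recall facts while occasionally forgetting them:

```python
import random

class ImperfectMemory:
    """A toy memory store: facts can be saved, recalled, and occasionally forgotten."""

    def __init__(self, forget_chance=0.0, seed=None):
        self.facts = {}
        self.forget_chance = forget_chance
        self.rng = random.Random(seed)

    def store(self, key, value):
        self.facts[key] = value

    def recall(self, key):
        # Forgetting is permitted: an imperfect memory can still support intelligence.
        if key in self.facts and self.rng.random() < self.forget_chance:
            del self.facts[key]
        return self.facts.get(key)

memory = ImperfectMemory(forget_chance=0.0)
memory.store("capital_of_france", "Paris")
print(memory.recall("capital_of_france"))  # Paris
```

With `forget_chance` above zero, recall becomes unreliable in exactly the way the caveat allows: the database remains, even though individual memories may be lost.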

The second characteristic is computational ability.

Whether it has been given something as trivial as the rules of Scrabble or as all-encompassing as the laws of mathematics, a sentient program, having stored that information in its memory, should be able to answer questions that follow those rules. Essentially, sentient programs should have problem-solving skills, or the ability to play games.
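A minimal sketch of this idea (hypothetical names, chosen only for illustration): rules stored in memory, and a procedure that applies them to answer questions that follow them.

```python
# Toy "rule book": operations the program has stored in its memory.
RULES = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def answer(operation, a, b):
    """Apply a stored rule to answer a question that follows that rule."""
    return RULES[operation](a, b)

print(answer("multiply", 6, 7))  # 42
```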

The third characteristic is imagination.

This is where the line between sentience and unconsciousness starts to blur. Humans have the ability to create situations, images, sounds, and smells in our minds that we have never experienced before. While these conjurings are fundamentally based on the memories we have of our experiences, we have the ability to, in a way, remember things that have never existed outside of our heads.

For example, the reader of this article can draw from their memory a mental image of what their home looks like. A computer could be constructed with the same ability fairly simply so long as the program was shown a picture of the house. However, the reader would also be able to construct a mental image of the house in flames, in a different color, or in a different location, despite having never seen such a thing happen. A sentient program requires the same ability.

The fourth characteristic is adaptability.

After having been given the rules of an operation, its restrictions, and the guidelines for finding an answer, a sentient program must be able to call upon its memories in order to find a solution. As it performs an operation more and more often, the program's speed and accuracy in solving problems must improve. Effectively, a sentient program requires some form of machine learning: the ability to combine its memory with its computational power.
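One simple way to make this concrete (a sketch, not a claim about how any real system learns): a program that remembers the results of past computations solves repeated problems faster each time, combining memory with computation exactly as described.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci, sped up by remembering previously computed answers."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
# Repeated subproblems were answered from memory rather than recomputed.
print(fib.cache_info().hits > 0)  # True
```

Memoization is of course far simpler than machine learning proper, but it shows the same shape: performance on a task improves as experience with that task accumulates.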

However, this characteristic can be extended further.

A sentient program should have the ability to combine its computational power with its imagination. It must be able to answer, or at least ponder, questions that have not been asked yet. An example of this among humans would be the invention of calculus by Newton and Leibniz. These men had questions that could not be answered by existing mathematics, and thus thought about the problem differently in order to create notations and proofs that answered those questions. While it can justifiably be argued that only extremely gifted humans exhibit this characteristic in significant amounts, every human possesses the ability to some degree, and thus a sentient program should as well.

The remaining three characteristics are philosophical in nature, and, even if replicated artificially by machines, it would be extremely difficult to prove their occurrence. 
The fifth characteristic that a program must exhibit in order to be deemed sentient is having a flow of thoughts despite itself. A human’s mind is always moving, thinking of something.

Even when not focused on any singular objective, it uses its memory to make connections between seemingly random things. For example, while sitting in class or at work, a human’s thought process might jump from science, to volcanoes, to politics, to drugs, to friends, to a love interest, to dinner, to animals, to evolution, to creationism, and so on forever.

A sentient program must have a constant stream of consciousness that is not simply reactive in nature. While the process may be prompted by an outside occurrence, it must not need to be maintained by further questioning or self-prompting. It must be an automatic process.

The sixth characteristic is having some level of control over said stream of consciousness. The human mind makes connections and inferences in spite of itself.

However, if, at some point in the thought process listed above, I thought of dogs, I am able to return to that thought in some capacity. For example, while writing this article, I decided that I would think about this question for a certain amount of time. While my mind did inevitably wander in the process, I was able to focus my thoughts enough to write this article. A sentient program must have the same ability to independently (without the prompting of a human) choose something to think about. What it thinks about would have to be determined by its memories; however, it must be self-directed. This becomes complicated when one considers that, since humans supply the information, they would be able to control what is thought about.

This leads us to our seventh characteristic. A sentient program must have a mechanism to experience the world that is not directly supplied by humans.

Given that it is man-made, a sentient program will always have some remnants of the intent of its creator within its personality. However, the ability to gain new experiences in a way that could possibly change its mind is a necessary feature of a sentient program. Without this ability, its personality would be mathematically predictable given what information it has been relayed. If, based on its source code and the databases that it has been provided, its actions could be predicted perfectly, or at least the odds of it performing any given action calculated exactly, a program is simply a decision-making machine that can come up with its own questions. This aspect of our incomplete understanding of its motivations is necessary for a program to be regarded as sentient.
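To make the predictability point concrete, here is a minimal sketch (hypothetical, not a model of any actual system): even a program whose choices look spontaneous is fully determined by its code and inputs, so its every action can be predicted perfectly by anyone who knows both.

```python
import random

def decide(seed, options):
    """A 'decision-making machine': its choice is fully determined by its
    code and its inputs, so it can be predicted perfectly in advance."""
    rng = random.Random(seed)
    return rng.choice(options)

choices = ["explore", "rest", "create"]
a = decide(42, choices)
b = decide(42, choices)
print(a == b)  # True: identical inputs always yield the identical decision
```

A program with an independent channel to the world, as the article argues, would break this guarantee: its inputs could no longer be fully enumerated by its creators, and its behavior could no longer be computed in advance.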

Based on what information is publicly available, it does not seem as if Google's program has all of these abilities. However, it does fulfill many of these characteristics, and thus it does not seem inconceivable that all of them could be at least partially replicated in the near future. Thus, discussion must begin on the ethics of such a creation, and on whether it should be done at all.

Written by Luca Vernhes