20 October 2021

AI everywhere (3)

Algorithms Are Not Enough: Creating General Artificial Intelligence

In the last chapter:

An artificial general intelligence agent will need to:

• Address ill-defined problems as well as well-formed problems.

• Find or create solutions to insight problems.

• Create representations of situations and models. What do the inputs look like? How is the solution to the problem structured (modeled)? What is the appropriate output of the system?

• Exploit nonmonotonic logic, allowing contradictions and exceptions (see the sketch after this list).

• Specify its own goals, perhaps in the context of some overarching long-range goal.

• Transfer learning from one situation to another and recognize when the transfer is interfering with the performance of the second task.

• Utilize model-based similarity. Similarity is not just a feature-by-feature comparison but depends on the context in which the judgment is made.

• Compare models. An intelligent agent has to be able to compare the model that it is optimizing with other potential models (representations) that might address the same problem.

• Manage analogies. It must select the analogies that are appropriate and identify which properties of the analogs are relevant.

• Resolve ambiguity. Situations and even words can be extremely ambiguous.

• Make risky predictions.

• Reconceptualize, reparameterize, and revise rules and models.

• Recognize patterns in data.

• Use heuristics even if their efficacy cannot be proven.

• Extract overarching principles.

• Employ cognitive biases. Although they can lead to incorrect conclusions, they are often helpful heuristics.

• Exploit serial learning with positive transfer and without catastrophic forgetting.

• Create new tasks.

• Create and exploit commonsense knowledge beyond what is specified explicitly in the problem description. Commonsense knowledge will require the use of new nonmonotonic representations.
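To make the nonmonotonic-logic and commonsense items above more concrete, here is a minimal sketch of default reasoning in Python. It is my own illustration, not code from the book, and the class and predicate names are invented. The point it shows is that a conclusion drawn by default ("birds fly") can be withdrawn when a more specific fact arrives ("Tweety is a penguin"), something a classical monotonic logic cannot do.

# Minimal sketch of nonmonotonic (default) reasoning: learning a new fact
# can retract a conclusion that was drawn by default. In classical
# (monotonic) logic, added facts never remove earlier conclusions.
# All names here are illustrative, not taken from the book.

class DefaultKB:
    def __init__(self):
        self.facts = set()  # ground facts such as ("bird", "tweety")

    def tell(self, predicate, subject):
        self.facts.add((predicate, subject))

    def flies(self, x):
        """Default rule: birds fly, unless a more specific exception applies."""
        if ("penguin", x) in self.facts:   # exception overrides the default
            return False
        if ("bird", x) in self.facts:      # default conclusion
            return True
        return None                        # no opinion either way

kb = DefaultKB()
kb.tell("bird", "tweety")
print(kb.flies("tweety"))    # True: concluded by default

kb.tell("penguin", "tweety")
print(kb.flies("tweety"))    # False: the new fact withdraws the old conclusion

Real nonmonotonic formalisms (default logic, circumscription, answer set programming) generalize this idea far beyond a single hand-written rule.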

I believe that with the right investments, we will be able to develop computer systems that are capable of the full panoply of human intelligence. We cannot limit ourselves to looking where the light is bright and the tasks are easy to evaluate.

At some point, these computational intelligences may be able to exceed the capability of human beings, but it won’t be any kind of event horizon or intelligence explosion. Intelligence depends on content as much as, or perhaps more than, processing capacity. The need for content and the need for feedback will limit the speed of further developments. If we fail to develop artificial general intelligence, our failure will not be, I think, a technological failure, but one of our own imagination.

Gulp!