Doing AI: A Business-Centric Examination of AI Culture, Goals, and Values
A common external goal for artificial intelligence is cognitive plausibility. That is, in order to qualify as “real,” a solution must solve intelligence in much the way humans are intelligent. When a solution turns out not to be anthropomorphic enough, many insiders dismiss the accomplishment. In other words, how insiders solve puzzles is as important as how they define puzzles.
Is AI About Simulating the Brain?

The answer is sometimes, but not always. Many insiders believe that if a solution looks like the brain, it might actually act like the brain. When a solution doesn’t act like the brain, insiders conclude that it teaches them nothing about the brain or intelligence. Simulating the brain effectively requires insiders to reverse engineer it. The so-called inverse problem is the process of calculating, from a set of observations, the causal factors that produced them; in other words, starting with the answer and working backward to the question. However, reverse engineering the brain and studying intelligence will always be more complex, with much longer payoffs, than identifying and solving real-world problems.