February 5, 2012

Computers Versus Concepts: Can Computers Think?

Traffic computers control our signal lights. Microprocessors direct our car engines. Automated controllers run our factories.

And in an insult of sorts, Watson, a successor in spirit to Deep Blue, trounced our human compatriots in Jeopardy.

Computers have permeated our work, our leisure, and our lives. This has generally been for the good, improving human society and enabling our progress. But we wonder. Do we feel comfortable with so many functions performed by non-thinking machines? Do we risk something by handing off control to efficient, but nonetheless essentially mindless, entities?

Or maybe we feel the opposite: we wouldn't want our computers to think, because then we, humans, might lose control.

So can computers "think"? Would it pose a danger, or confer a benefit?

I will address those questions, and do so, as I often do with questions of this kind, with a thought experiment.

Poker Chips

Imagine round, plastic poker chips, like you might find at a casino. Rather than being imprinted with dollar figures, we stamp each chip with a different number. The numbers run from one to twenty-five thousand. We need so many because each chip stands for a word, though for this discussion we don't know which one.

Well, we will allow some exceptions. We will have a subset of chips with actual words, not numbers. These words will generally be prepositions, articles, linking verbs, and so on, such as "is", "to", "can" and "from". This allows us to build relations between the numbered chips. For example, using the words and chips, we might have:

"Two" can be "Seventeen" from "Sixty-four."

That might stand for something such as: a chair (two) can be assembled (seventeen) from wood (sixty-four). We proceed to build thousands, even hundreds of thousands, of such relations.

We could now be asked questions, such as: what can a "two" be "seventeen" from? We would search through the array of chip expressions, find our example expression, and answer "sixty-four." We would have found the correct answer. But we did so not by understanding anything, but rather by looking through a collection of meaningless chip relationships. We had no idea what we were talking about. We didn't understand.
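The chip search can be made concrete in a short sketch. This is my own illustration, not part of the thought experiment as written: the triple encoding and the extra example triples are assumptions, and nothing in the code knows, or needs to know, what any number means.

```python
# Each expression '"X" can be "Y" from "Z"' is stored as a triple (X, Y, Z).
# The numbers are opaque tokens: 2 might be "chair", 17 "assembled",
# 64 "wood", but the program has no access to any such meaning.
expressions = {
    (2, 17, 64),   # the essay's example: a chair can be assembled from wood
    (2, 17, 91),   # hypothetical extra relation
    (5, 17, 64),   # hypothetical extra relation
}

def what_from(subject, relation):
    """Answer: what can `subject` be `relation` from?
    Pure pattern matching over meaningless tokens."""
    return sorted(obj for (s, r, obj) in expressions
                  if s == subject and r == relation)

print(what_from(2, 17))  # -> [64, 91]
```

The lookup returns the correct chips every time, yet the program understands nothing: exactly the situation of the person shuffling through the chips.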

From Symbols to Meaning

What would it take to add insight to the numbers on the chips?

We could translate the chips to words. But that is not really an answer, since words are still symbols. If we translated the chip numbers into Latin, few of us would gain any understanding. The Latin words, in fact most words in any language, are as arbitrary a symbol as the number on the chip.

Pictures, however, would help. If a dozen or so pictures of a chair were linked to the chip numbered two, we would begin to understand. "Two" would start to have meaning.

We can envision continuing the process across hundreds, thousands, of the chips, associating each with pictures, or a movie, or a sound, or a smell, or even a touch sensation (hot, cold, sharp, soft, etc.). Our understanding would expand.

At some point, understanding the concepts linked with each chip would require more than pictures. "Push" could be a movie of a person with his or her shoulder to a dresser, moving the dresser. That may or may not be interpreted correctly. But by this point, we would have built an understanding of a good number of the chips, so the movie could be supplemented by the sentence "to push is to move an object. This can be done by walking while having your body against the object."

We could continue to build concepts upon concepts in the same manner. Once we reached an adequate base, maybe when we got through the first ten thousand chips, we could confidently step up to tackle the chips that represented words like "justice" and "truth."

So eventually, we could teach ourselves the "meaning" of all twenty-five thousand chips. We would understand.

But could we teach a computer so that it would "understand?"

The Role of Experience

Yes, and no.

Yes, because like our human above, a computer can readily connect pictures, movies, sounds, smells, and touches to a symbol. Certainly the computer would need many specialized components: custom sensors, optimized processors, large memory stores, and custom software. But we don't picture this as outlandish. We can picture a humanoid robot, with appropriate sensors in the locations of a human's ears, eyes, nose, fingertips, and so on, linked wirelessly to the computer complex needed to process all that data.

As sophisticated as the Watson of Jeopardy fame is, such a robot would be a generation, maybe two, beyond Watson. Watson works at the level of word association, basically linking our numbered chips. Watson has assimilated billions of associations between those chips, but nowhere does it appear that Watson associates a chip/word with anything other than another numbered chip, or an occasional picture or sound.

Our robot goes beyond that. It doesn't just connect "chairs" with "four legs." Our robot learns by sitting on actual chairs; in fact, we have it sit on dozens of chairs of all different types: metal ones, wood ones, plastic ones, soft ones, hard ones, squeaky ones, springy ones. And as this happens, the robot's sensors capture sounds, sights, textures, and smells at ranges and precisions well beyond human senses. All the while, the robot and its computers are building associations upon associations.

And we repeat the process with tables, then with beds, then dressers and the whole range of furniture. We then move to desk items (paper, books, pens, erasers), then to kitchen items, bathroom items, work bench items, then move outside, and on and on.

When it has an adequate knowledge of the poker chips, we teach it to use the internet. The number of associations explodes.

We then add in a crucial element: evaluative software. This software allows for judgments, comparisons, the balancing of alternative answers, and so on. We have evaluation modules for many aspects of the world: for engineering, for ethics, for aesthetics, for social dynamics.
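One way to picture such evaluative software, purely as my own sketch and not the essay's design, is a set of modules that each score a proposed action from one perspective, with a judgment step that combines the scores. The rules and the veto-style combination below are simplifying assumptions.

```python
# Each module scores an action from one perspective: +1.0 acceptable,
# -1.0 objectionable. The rules here are hypothetical placeholders.
def engineering(action):
    # Flag actions whose physical load exceeds the available capacity.
    return -1.0 if action["load"] > action["capacity"] else 1.0

def ethics(action):
    # Penalize actions that endanger people.
    return -1.0 if action["endangers_people"] else 1.0

MODULES = {"engineering": engineering, "ethics": ethics}

def judge(action):
    """Run every evaluation module; return per-module scores and an
    overall verdict (here, a simple veto on any negative score)."""
    scores = {name: fn(action) for name, fn in MODULES.items()}
    return scores, all(s > 0 for s in scores.values())

# A hypothetical "drive a train on a highway" action, pre-digested
# into the fields the modules expect.
scores, ok = judge({"load": 50.0, "capacity": 2.0, "endangers_people": True})
```

Real evaluative software would be vastly richer, but the shape, many narrow judges feeding one decision, is the point.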

With all this, we then send our robot/computer out into the world, to shop, to travel, to attend college and to work, all to build further and deeper associations and to tune the evaluation modules.

Let's say the training progresses for a decade. Would our robot now understand?

Yes, and no.

Yes, in that the computer would have an association mapping as rich and complex as a human's, and a capability to make judgments with those associations. For example, let's ask the robot/computer: "Would you drive a freight train on a highway, and why?"

If we asked Watson, I suspect it might stumble. Watson would find many associations between highways and freight handling, and associations of trains as vehicles, and of vehicles (trucks, cars) riding on highways. It would find many citations that trucks ride on trains, and that train containers ride on trucks.

In contrast, Watson would see only a few mentions of the fact that the wheels of a train would damage the highway, and that the wheels could not gain adequate traction on the road surface to travel under control.

So Watson would be confronted with, at best, conflicting associations relative to freight trains and highways, and at worst, indications that trains and highways are compatible.

Watson would then likely falter on the words "would you" and "why." Those don't call for a fact, but rather a judgment, and Watson cannot really evaluate; it can only associate.

In contrast, our robot would likely catch the intent of the question. We gave our robot the capability to evaluate, and the word "would" would explicitly trigger the evaluation modules. The robot would run through them all, considering, for example, ethics, and efficiency, and economics, but would ultimately reach a technical verdict based on engineering.

In fairly short order (a few seconds), or maybe longer (a few minutes), our robot/computer would calculate the load stresses of the train wheels on the asphalt and concrete, and the lateral friction between the steel and the road. The robot would see that the concentrated load from the train wheels would exceed the bearing capacity of the road material, and also that the friction between the wheels and the road surface would be insufficient to provide traction and lateral control.
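The robot's two checks amount to back-of-envelope physics. The sketch below is mine, and every number in it is an illustrative assumption (rough orders of magnitude, not engineering data), but it shows why the comparison is decisive either way.

```python
# Check 1: contact pressure of a steel wheel versus road bearing capacity.
wheel_load_n = 100_000.0           # ~10 tonnes per freight wheel (assumed)
contact_area_m2 = 0.002            # ~20 cm^2 steel-on-asphalt patch (assumed)
asphalt_capacity_pa = 2_000_000.0  # ~2 MPa rough asphalt capacity (assumed)

contact_pressure = wheel_load_n / contact_area_m2   # pascals
road_fails = contact_pressure > asphalt_capacity_pa # wheel punches into road

# Check 2: lateral grip of steel on pavement versus rubber (assumed values).
mu_steel_on_road = 0.2
mu_rubber_on_road = 0.7
traction_inadequate = mu_steel_on_road < 0.5 * mu_rubber_on_road

print(f"contact pressure: {contact_pressure / 1e6:.0f} MPa, "
      f"road fails: {road_fails}, traction inadequate: {traction_inadequate}")
```

Even with generous assumptions, the contact pressure lands tens of times above what the pavement can bear, so the verdict doesn't hinge on the exact figures.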

Our robot would thus answer that it would not drive a train on a highway, since the attempt would fail in basic mechanical respects.

Could our robot actually do such engineering calculations? Computers do them routinely now. But today humans configure the problem for the computer, so could our robot translate our question into the underlying mechanical setup? Yes, converting a physical object or idea into an abstract force diagram may be daunting, but it is not sorcery or magic. The process of creating force diagrams can be converted into an algorithm, or a set of algorithms, and algorithms can be programmed into a computer.

So our robot thinks? Yes. But does it "understand"?

No.

The robot lacks consciousness. For all the robot's capability to connect and evaluate, the robot isn't conscious. Why do I say that? Long story short (and a discussion of computer consciousness could be a long one), our robot of the near future will have microchips of conventional architecture. These microchips may be very fast, may be very sophisticated, and may be made of exotic semiconductors, but they will be extensions of today's architectures nonetheless. In my view, such chips, even thousands put together, do not have the right configuration to generate consciousness.

So, agree or not, let's posit that our robot is not conscious. And consciousness is likely the key to going beyond thinking to meaning. We know a chair not because we have digitally stored a sensor measurement of a 3/8 inch deflection in a cushion. We know a chair because we experience it, a holistic experience, not a set of mechanical sensor readings. Our robot has thousands of memory registers associating digitized pictures to a chair, but no singular holistic experience.

Thinking Computers

So, our robot can think, but it doesn't understand. It has intelligence, but not a sense of meaning. And this is because it lacks consciousness.

So now to the other part of our question, do we want our computers to think?

Numerous movies - Eagle Eye (2008), I, Robot (2004), The Terminator series (1984 and later) - have computers that think. In typical Hollywood fashion, the "thinking" of these computers, though well-intentioned, causes them to veer down unintended paths, to decide they are smarter than humans, to the detriment of humans. We certainly don't want those types of thinking computers.

Isaac Asimov, in his extended fictional writing on robots, was not nearly so pessimistic. His three laws of robotics kept the robots on a more positive and controlled path.

Data, on Star Trek, stands as an even more positive view of a robot, altruistic to a fault. But he was offset by the Borg, a cyber-organism driven by a single determination: to assimilate every civilization. The Borg could think, no doubt, but were thoughtless in their destructiveness.

Which one of these images from fiction will be our future?

I lean towards none of them. Watson, and then a second generation of Watson like the robot pictured here, will likely impact human society in a more insidious manner: economically. Will that economic impact vault us forward or backward? Will we have a Star Trek-like Camelot, with computers freeing us for leisure and human advancement, or will thinking computers displace our vast collection of information workers, consigning the once well-employed to low-paying jobs? Utopia or Matrix-like enslavement: which might thinking computers bring?

Will the future tell? Maybe not my future, but likely the future of our children. May God help them.
