Wednesday, August 23, 2017

One Problem with A.I.


Elon Musk is constantly warning us about the dangers of artificial intelligence (A.I.) ... even as he goes balls out developing it for autonomous cars. Having been somewhat involved in this technology myself, may I point out one possible problem?

Let us take the example of autonomous cars. When a programmer accounts for a single driving circumstance, he/she might have to deal with conditionals like the following:

Is it dusk out?
Is the pavement wet?
Does this car have new tires?
Is it above freezing?
Can I see the other driver?
Is this a holiday weekend?
How heavy is the other traffic?
Are the other car's headlights flashing?
Is everyone wearing their seat belts?
Etc.
Etc.
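To make the ordering problem concrete, here is a toy sketch (the rules and facts are hypothetical, not from any real autonomous-car software): each rule is a conditional paired with an action, the first matching rule wins, and reordering the very same rules changes the judgment for the very same situation.

```python
# Hypothetical illustration: two orderings of the SAME conditionals.
# The first rule whose condition is true decides the action, so the
# sequence of the conditionals determines the judgment.

facts = {"wet_pavement": True, "dusk": True}

rules_a = [
    (lambda f: f["wet_pavement"], "slow down"),
    (lambda f: f["dusk"], "headlights on, normal speed"),
]
rules_b = list(reversed(rules_a))  # same rules, opposite order

def decide(rules, facts):
    """Return the action of the first rule whose condition holds."""
    for condition, action in rules:
        if condition(facts):
            return action
    return "no rule matched"

print(decide(rules_a, facts))  # slow down
print(decide(rules_b, facts))  # headlights on, normal speed
```

Both orderings are "correct" rule sets; they simply disagree, which is the rub described below.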

And what if one obscure possibility is not included? Such a decision tree may be 20 or 30 conditionals deep and, here is the rub, they are evaluated in a predetermined sequence ... and the order of these conditionals often produces different judgments. The human mind has no such restriction; it can resequence things in a flash. The number of possible sequencings of 30 conditionals is huge ... a number that would confound today's supercomputers. And things might get even more complex ... not the sort of thing resolved in a microsecond. The only hope might be quantum computers ...
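How huge is "huge"? The number of distinct orderings of 30 conditionals is 30 factorial, which a couple of lines of Python can show (the exascale rate below is an illustrative assumption, not a benchmark of any actual machine):

```python
import math

# Number of distinct orderings (permutations) of 30 conditionals is 30!
orderings = math.factorial(30)
print(orderings)  # 265252859812191058636308480000000  (about 2.65e32)

# Assume a hypothetical exascale machine trying 1e18 orderings per second;
# exhausting them all would still take millions of years.
years = orderings / 1e18 / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")
```

Brute-forcing the orderings is hopeless; a practical system must commit to one fixed sequence, which is exactly the restriction the human mind doesn't seem to share.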

I'm not claiming to be brighter than Musk ... just quite a bit older and perhaps wiser ...

Afterthought: Perhaps the human mind's ability to rearrange such decision trees on the fly is called "judgment" ...

1 comment:

ChillFin said...

A key item in the vehicle's decision making is its point of view. Do you protect the passengers more than the car? Do you veer away to hit a pedestrian on the sidewalk, thereby minimizing passenger and vehicle damage but inviting a likely lawsuit from the pedestrian? Or do you veer the other way in the hope that you avoid collision with the oncoming truck?