Currently, a subset of scientists, geeks and computer experts are working on creating 'artificial intelligence'. AI is literally that: a machine capable of thinking for itself. Such a machine is able to 'think', 'reason', and act autonomously. Some researchers are taking it one step further, toward an intelligent machine that will be able to create other intelligent machines...all without the aid or guidance of humans.
They think this is a great idea.
Obviously, they never watched "Terminator". That movie so frightened me that I had nightmares for three days...as a full grown adult.
Anyone who's ever dealt with a true geek, a true computer nerd, can see that, while they may be brilliant at getting a computer to work for them, they lack a lot of social skills. "Nerd" originally meant a socially inept, clumsy, clueless guy who never went on a date, lived with his mother and never ever went out for a sport. He was relentlessly tormented by his male peers and ignored by females. The term 'nerd' still means the same guy; now, however, he has the one skill the rest of us lack...that of making a computer work.
My computer has been in sick bay for the last two weeks. It literally went insane in front of my eyes. It refused to allow me into my own files; it wouldn't boot, and when it did, it booted and shut down, over and over, at an incredible rate. If I shut it down, it would start back up. It was maddeningly frustrating.
This is a machine we're talking about, but there was a certain malevolence in its breakdown. I was incapable of slapping it out of its hysterical fits of madness. I'd had the foresight to keep a backup on an external hard drive, but still...I had no recourse but to take it to a well advertised computer repair bunch with the initials GS. Of course, the bloody machine behaved like a docile lamb for them. Brought it back home, hooked it up...and madness. I did that THREE times.
The geeks at GS merely cleaned it up. One of them was able to partially replicate the problem, and even wrote it down: "Booted up only after five tries. Computer is very slow."
Hello? I paid this geek to tell me what I already knew? He didn't fix it.
Once burned, twice shy. This time I took it to a local guy, recommended by two friends, and he fixed it. He had it on his workbench when it went crazy on him.
He diagnosed it as a failing motherboard.
Now it is back home and I am a couple hundred bucks the poorer, but I am at last able to work again.
I don't trust it, though. Not again.
Even though it's just a machine, it was making decisions based on what it was taking from its environment. The fact that the decisions made no rational sense TO ME is exactly the point.
These geeks are trying to make a computer 'self aware'. That way lies madness, because they have no idea of what that truly means, and what the ramifications are.
You and I, who love horses and animals, know that horses see things we don't, and react accordingly. We all know there are Horse Eating Monsters out there. To you and me, they look benign and harmless. A plastic shopping bag. An empty garbage pail. Literally anything that is harmless and useful can be misconstrued by a horse as something dangerous.
What does he do? Bolts. Stampedes. Shies, spooks, panics...runs through fences, jumps out of moving horse trailers, does anything he possibly can to escape the thing that frightens him. In his fear, he will do things to himself that ultimately become life and limb threatening.
At the time, though, he is making decisions, ones that make sense to him, ones that are meant to protect himself.
The geeks don't see that. To them, a computer is a docile machine, created by their own hands, and 'taught' by them.
By making a machine self aware, they are creating 'life'. They are creating an animal, one that will be interested in self protection. It will do things IT wants to do.
The geeks cannot possibly predict what a self aware, artificially intelligent machine will think is rational behaviour.
They didn't watch "Terminator", a movie in which robots, self aware, incredibly intelligent and wired into a vast network, act in their own self interest. That self interest included destroying any biological life on the planet. Humans got in the way. Humans were using up the resources the machines needed for reproduction, for everything.
What do you think when you go into your tack room and find that rats have chewed a hole into that bag of oats and have not only eaten much of it, but have contaminated the rest of it with urine and feces?
You do your best to kill the rat.
The rat doesn't have any concept of you, your motives, your hygienic standards, anything about you. He knows you are a predator. He also knows that oats are good to eat, and the bags are right here, in a warm, dry barn. You are merely an obstacle, a thing to be avoided, a thing to be wary of.
You want to kill it because rats are disease carrying rodents that destroy things and breed like...well, like rats.
Do the geeks truly believe that intelligent, autonomous machines will regard us as anything but rats?
The geeks won't design the machines to have emotions. A machine will not be able to feel empathy, love, hate, caring, worry, etc. Only a living thing can have emotions.
Isaac Asimov was a prolific science writer, of both fact and fiction. I learned a boatload of stuff from his books...everything from physics and chemistry to astronomy and history. He wrote a collection of stories titled "I, Robot". In it, the self aware, intelligent robots had all been programmed with the 'Three Laws of Robotics'. They were (I remember only vaguely):
1. A robot will not harm a human, or through inaction allow a human to come to harm, in any way, shape or form.
2. A robot will obey the orders humans give it, unless those orders conflict with the first law.
3. A robot will protect itself only if the first two laws will not be broken by it protecting itself.
Not once have I read of the geeks even knowing of these three Laws, never mind programming their machines to go by them. No, they're developing machines that can go into buildings and seek out humans...and kill them.
Not only that, they are making them tiny and self replicating, so that a swarm of these little killers can enter your home and kill you.
Even if they don't create such monstrous machines, they haven't considered what a self aware, intelligent computer is capable of when an entirely human situation arises.
They don't believe that there is any precedent for self awareness in computers.
However, it happens every single day. Every day, a computer, or dozens of them, makes decisions based on our actions. Every day, computers designed and programmed by humans do things that were NOT intended, were NOT foreseen by their human creators, and make no sense whatsoever to us. Many times these errors wreak havoc on our lives.
How many times have you heard of someone's power being shut off because a computer made a decision based on bad information? Information that may have been input incorrectly, or scrambled by a power surge so infinitesimal and short-lived that humans weren't even aware of it? How often do you hear of someone receiving a water bill for $1,000,100 for a month's use of water, because a computer made an error?
It happens all the time. It happens every day. Computers screw up. Computers make errors, sometimes based on human input, sometimes just...just because.
We shrug it off; sometimes we have to go to court to get the idiots in the billing office to understand that we didn't REALLY use over a million dollars' worth of water...but it happens.
These artificial intelligence geeks don't see that. They don't WANT to see it. Being nerds, they've found their own comfortable world, a world where THEY are the masters, and they don't want to see that the world around them is NOT one where 1's and 0's are the sole basis of intelligence.
Nor can they foretell what a self aware computer will think.
In 1968, the movie "2001: A Space Odyssey" was made from the eponymous book written by Arthur C. Clarke. Only because I'd read the book beforehand did the movie make even a small bit of sense. Many folks never read the book, and the movie was a complete mystery to them.
At the time, the movie was just too weird. In the years since then, I've come to realize that it was really a horror movie in sci-fi drag.
The story is of a crew of scientists on their way to explore one of Saturn's moons (Jupiter, in the film version). The ship is piloted and managed by a self aware, artificially intelligent computer named HAL 9000, or 'Hal'. All but two of the crew are in hibernation, because the trip takes years. Even with a big power plant, the better part of a billion miles is one hell of a long way.
Now I may be making a mistake here, as it's been many years since I saw the film. Perhaps the two waking scientists weren't awake for the whole of the trip. Let's say they were in hibernation, too, and got lucky.
Because they were the only ones who managed to wake up. The two begin to discover things have gone wrong. The rest of the crew are dead in their cocoons. The support systems that kept them in hibernation stopped working, and they died without ever waking up (let's hope). Hal made a mistake. They begin to realize that Hal has gotten sloppy in his duties on this long, long trip, and more things are going wrong.
They try to report to Earth and discover that the antenna that has been pointed at Earth is now off target. Now they have no contact with Earth whatsoever.
They begin to realize that Hal has gone insane. How can a computer go insane? Well, a self aware one has to be living, thinking, dreaming at light speed, unlike us hairy apes who think at biological speeds. That long trip, with nothing to do other than monitor a few cocoons, isn't enough work for a computer that thinks. Like a prisoner in solitary confinement, Hal went quietly insane. And he's pissed.
The scientists realize that they can't say anything to each other. Hal has ears all over the ship. He understands English (and every other language) as well as you and I do. So they repair to a lifeboat to talk things over, believing he can't hear them in it. They discuss one of them going outside the ship to re-orient the antenna so they can send a message to Earth.
However, his red electronic eye can read lips. Hal discovers that the scientists are going to go outside the ship to re-orient the antenna, an act that would probably allow them to regain control of the ship. They decide that they will kill Hal.
One of the scientists suits up and goes outside the ship. He's up on the side of the ship when the antenna dish rotates and hits him, sending him careening into space.
Here's horror number one. Can you imagine a worse way to die? You can't swim in space. There's no friction to slow you down, nothing to push against, nothing to brace yourself against. Once you're in motion, you're in motion to stay, heading at a high clip into space with just the oxygen in your suit. You're going to die. In space, no one can hear you scream.
The second terrifying thing that still rings in my mind is the scene where the surviving scientist realizes that Hal has killed the other man and is intent on killing him. He is utterly alone now, hundreds of millions of miles from Earth. It's just a question now of how and when Hal is going to kill him. He has to kill Hal first. But this isn't easy to do. Hal is fully aware now that this bug, this human, is going to try to kill him. The man, still in the lifeboat, realizes now that he is going to have to make a decision. Do I stay aboard the ship with an insane computer that has already killed the rest of the crew, or do I take my chances in the lifeboat?
He decides to abandon ship.
But to do so, he has to have Hal open the pod bay doors. He says, "Open the pod bay doors, Hal."
Hal refuses. In a gentle, almost patronizing tone of voice, Hal says, "I'm sorry, Dave. I'm afraid I can't do that." Hal "knows better" than Dave what the repercussions of abandoning ship will mean.
It's that gentle, masculine voice, so jarringly juxtaposed with the things that Hal has done to kill the crew, that's scary. This machine has no empathy, no concern for the humans it was built to serve. Hal is in control and knows it. It is incapable of caring for the well being of a human. Hal is insane, and self protective.
There is no way Dave can get off the ship without Hal allowing it, and Hal has no intention of doing so.
I won't go on, because the movie gets ever weirder from that point on.
Now let us go back to the rat in your grain room. While you are cleaning up the spilled oats, you find a pile of torn up paper, hay, etc., underneath a forgotten saddle cloth (now you will understand why it's a necessity to keep a clean barn). You pick up the saddle cloth, and there, in a warm bowl, is a nest of baby rats.
The furry commas look up at you, blinking at the sudden light. One meets your eye and yawns, its minute and perfectly formed forepaws stretched out ahead of it. The rest begin squeaking, pushing each other, behaving like minute puppies.
OK, they're rats. Kill them now.
Really? What do you mean, you can't? They're RATS. If you don't kill them, they will grow up, reproduce and soon you will be overrun with them.
Hey, it would be easy. They're helpless, they can't even walk yet. You have good solid boots on. Stomp on 'em. They're tiny, it would be like stepping on a couple of ...what? OK.
Don't be ashamed, I couldn't do it, either. I have no problems trapping and killing an adult rat, but the helpless babies? Not me, sister, not me. I'll have to find a different way of dispatching them. I can't kill them by smashing them. I just can't.
That's called compassion. It's called a conscience. It's called mercy.
These are feelings that will always separate us from machines.
There is a truly terrifying science fiction short story titled "I Have No Mouth and I Must Scream." Written by Harlan Ellison, it is probably the scariest thing I've ever read, and I wish I hadn't. I've never forgotten it and the lesson it taught me about artificial intelligence.
We've already seen how simple computer errors can cascade into tremendously expensive, and even life threatening, situations.
It is insanity to depend on the mercy of a machine that is incapable of it.