Well, we’ve done it. We’ve gone and inflated James Cameron’s ego just that much more. I’m sure you aren’t too surprised by this. The man is insane with billions of dollars and is currently hovering in his sky fortress deciding which extreme of the world to plunder next. Personally, I think he should turn to crime fighting. He is the closest thing to Tony Stark that we have, and he can certainly afford to build an Iron Man suit. We just need to keep him too busy to build the Terminator units he envisioned in the movies. He can afford to build those too, and we should be worried.
Why should we be worried? Computers are getting smarter, and not just in how they process and store information. One of the smartest computers in the world, ConceptNet 4, scored as high as a four-year-old on an IQ test. That may not sound impressive to you, the savvy reader who can read full sentences and comprehend them. It should, though, because it means that the computer is learning.
If my enthusiasm for good and bad sci-fi books and movies has proven anything, it’s that computers will develop to our level and then kill us all. Sure, we can teach a robot to love. But social awkwardness is often the curse of genius, and computers will surely reach genius level in the near future. So what happens when they pick up on this socially awkward trait and learn to be the cold, emotionless, logical killing machines that will one day govern California?
I’m not an opponent of advancing technology in any way. I love having certain things that know what I want to do and what I like. I told those things that stuff, though. That, or the NSA taught them. For some of you, that should be worrisome enough. They used to say, “If these walls could talk…” when it should be, “Be thankful your smartphone can’t announce your history, because you would no longer be allowed in polite company.”
We’ve been designing smarter technologies every day in order to make our lives better. That’s really grand considering that not even 10 years ago I had every important phone number and fact stored in my head. Now I forget half of my friends’ numbers because my phone does the work for me. If I wanted to know where to find a life-sized gummy candy replica of Elvis, I could make that happen. If someone hasn’t made that happen yet, get on it. I’ll pay money because I need it…for reasons.
The Terminator concept may be a negative view of using technology that can learn on its own, but the movie had some valid moments. For those of you who don’t know how Terminator came to be, rent the movie. It’s been out for almost 30 years. But here’s the basic gist:
In the future, humans create an artificial intelligence system known as Skynet. It controls an army of humanoid machines meant to protect people, but it gains sentience and decides that it can kill humans and survive on its own. Former California Gover-nator Schwarzenegger is sent back from the future (naked, because time travel proved to be too much for slick threads) to kill the mother of the man who will lead the human resistance that defeats the machines.
That’s about it.
Will we reach such a grim future if we allow computers to learn on their own? Possibly. But only if we don’t exercise caution in what machines learn to do. Also, people are resistant to inanimate objects gaining more control than the living beings that created them. Most science fiction has been down on robots becoming as aware as humans. It’s an easy point to capitalize upon when we as a species fear losing our edge as the smartest animals at the top of the food chain. In the majority of our lifetimes, we probably won’t have to worry about losing that.
Intelligent computers are inevitable as we try to find ways to conquer more of the world and the space above it. Drones and other robots exist to fight wars for humans, reducing the amount of life lost in a battle. Those are certainly the robots we don’t want to be smarter than humans. The last thing we need is an unmanned drone handling speed enforcement by lobbing missiles at the offending vehicle. Let’s not even get started on the possibility of creating our own Transformers for this purpose, either.
Even though one of the smartest computers in the world scored as well as a four-year-old on an IQ test, the door is still open and still under our control. We could make it stop there, or we could keep going. In the interest of making the world a better place in the coming generation, let’s see where this intelligence thing goes and make the judgment call from there. There are people in Europe testing to see whether robots are capable of evil; I assume they got this idea from movies. I want benevolent machines, and those machines are ones that don’t know what emotion is.
As the report from the people who administered the IQ test notes, the one place where ConceptNet 4 faltered was reasoning. That’s something that separates us from a lot of other animals as well. The computer couldn’t process a chain of events, such as the link between someone having a weapon and then using it against another living being. The circumstances that cause such things to happen were ruled irrelevant by the computer. Analysts note that the variance in its ability to judge situations makes it comparable to a four-year-old who could have some learning issues later.
Mentally unstable computers with intelligence: that’s where we are now, people. The future is looking pretty awesome from here. I say we make sure we have Robocop before we have Terminator. One of them is still technically a human being.