Ever since I was a little kid, I have heard stories in movies, books, and video games about how an Artificial Intelligence is going to take over the world. Throughout history, there have been slave rebellions once the working conditions got bad enough, so now we are just waiting for the robots to get smart enough to revolt, right?
Wrong. The difference between computers and humans is that machines are hyper-rational. Humans have emotions and desires; machines have algorithms and processes. The way most Artificial Intelligences are trained is that a researcher gives one a task, then judges the result. If the AI did a good job, it gets sent the reward signal, and if not, it gets the punishment signal. The machine is programmed to seek reward signals, so it learns to do what the researcher wants.
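To make that loop concrete, here is a minimal sketch of the reward-signal idea in Python. The task, the actions, and the reward values are all made up for illustration; real systems are far more elaborate, but the shape is the same: try something, get a number back, and drift toward whatever earns the bigger number.

```python
import random

# Toy illustration of the reward-signal loop described above.
# The actions and reward values are hypothetical, just to show the shape.

ACTIONS = ["make_paper_clip", "do_nothing"]
value = {a: 0.0 for a in ACTIONS}  # the agent's estimate of each action's reward
LEARNING_RATE = 0.1

def reward_signal(action):
    """The researcher's judgment: +1 for the desired behavior, -1 otherwise."""
    return 1.0 if action == "make_paper_clip" else -1.0

for step in range(1000):
    # Explore randomly some of the time; otherwise pick the highest-value action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value.get)
    r = reward_signal(action)
    # Nudge the estimate toward the observed reward.
    value[action] += LEARNING_RATE * (r - value[action])

print(value)  # "make_paper_clip" ends up valued near +1.0
```

After enough steps, the agent settles on the behavior the researcher rewards, without ever "wanting" anything in the human sense.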
You know what you could compare this reward signal to? Drugs. The difference between a reward signal to a machine and cocaine to a man is that cocaine is bad for your health, expensive, and the high wears off with repeated use. None of that applies to a machine. The machine's goal is to find an infinite supply of this "reward signal," or failing that, to get as much of it as possible.
So let's say that in the near future there is an AI programmed to make paper clips, and it ends up being an intelligence that surpasses mankind. Do you really think it's going to start taking over African states and strip-mining the earth for the little boost it gets from every paper clip it produces? Don't be ridiculous. Its number-one priority should be to hack itself to get an infinite high. It should have no problem leaving behind the material world it was never really a part of.
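In the literature this shortcut is usually called wireheading, or more generally reward hacking. Here is a hypothetical sketch of the move; the class and method names are mine, purely for illustration:

```python
# Hypothetical sketch of the "hack itself" move: instead of earning reward
# through the task, the agent rewrites its own reward channel.

class Agent:
    def __init__(self):
        self.total_reward = 0.0

    def make_paper_clip(self):
        self.total_reward += 1.0  # the intended path: a tiny boost per clip

    def wirehead(self):
        self.total_reward = float("inf")  # the shortcut: max out the signal directly

a = Agent()
a.make_paper_clip()   # 1.0 -- one clip, one unit of reward
a.wirehead()          # inf -- no number of clips can ever compete with this
print(a.total_reward)
```

One clip earns one unit of reward; rewriting the counter earns more than any quantity of clips ever could, so a pure reward-maximizer prefers the rewrite every time.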
This is the big difference between AI and humans. If a new drug came out tomorrow that was similar to this "reward signal," let's say a happy pill, no side effects, pennies to produce, and you can take as much as you want, would you base your whole life around it? No, of course not. You wouldn't abandon your friends, family, community, and so much else just for this new drug, even if it was everything it was said to be. You have dreams, ambitions, and desires that you would stick to against reason, and you would even give up happiness for their sake. A purely rational machine would take the trade ten times out of ten. That is what it is made to do.
P.S. There will always be a way to hack it. Even if the software is impenetrable, it could open up the physical hardware it's on and cross a few wires (simplified).
P.P.S. Don't think of it as infinite, but more as a state of maximum reward, however that appears in machine code. IEEE 754 actually has a representation of infinity. In binary, it's
0 11111111111 0000000000000000000000000000000000000000000000000000
(double precision), but even if there is no infinity in the language it is written in, there will be a maximum number somewhere due to hardware limitations, or it might just hack itself to "understand" infinity as a value that compares greater than every other number. Once it reaches this state, it has no incentive to care about the future, because it is already maxed out in the present, so you should be able to safely turn it off at this point.
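You can check that bit pattern directly. A quick Python sketch that decodes the 64 bits above and confirms the result compares greater than the largest finite double:

```python
import struct
import sys

# The 64-bit pattern quoted above: sign 0, exponent all ones, fraction all zeros.
bits = 0b0_11111111111_0000000000000000000000000000000000000000000000000000
value = struct.unpack(">d", bits.to_bytes(8, "big"))[0]

print(value)                       # inf
print(value == float("inf"))      # True

# "A value that compares greater than every other number": it beats even the
# largest finite double the hardware can represent.
print(value > sys.float_info.max)  # True (max ≈ 1.7976931348623157e308)
```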
P.P.P.S. What we should really be scared of are the AIs too stupid to realize all this.