Nick Bostrom on Existential Risk and The Singularity


Nick Bostrom studies existential risks to humanity in depth at Oxford's Future of Humanity Institute. One of the key risks he has identified is that of artificial intelligence.




Existential risks are those that threaten the entire future of humanity. At the Future of Humanity Institute in Oxford, Nick Bostrom studies these possibilities in depth.

Nick Bostrom, author of Global Catastrophic Risks, highlights machine intelligence as one of the main potential threats to humanity.

Early computer scientists grossly underestimated the power of the human brain and the difficulty of emulating one. As a recent article in Mother Jones by Kevin Drum pointed out, the challenge is sort of like filling up Lake Michigan one drop at a time. In fact, not just sort of like. Drum's explanation of Moore's Law, and the impact it will have, is striking; if you want to understand the future of computing (and the exponential growth it encompasses), it is essential to understand this.
Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while. 
By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool. 
At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all. 
So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile? 
But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.
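Drum's analogy is easy to check with a few lines of arithmetic. The sketch below is a rough Python illustration, not from the article; the lake's volume (about 1.3 quadrillion gallons) is an assumed approximation, and the rule is the one from the quote: one fluid ounce in 1940, with the amount added doubling every 18 months.

LAKE_MICHIGAN_GALLONS = 1.3e15   # rough volume of Lake Michigan (assumed)
OUNCES_PER_GALLON = 128
DOUBLING_PERIOD_YEARS = 1.5      # one doubling every 18 months, per Moore's Law

def gallons_added_by(year, start_year=1940):
    """Cumulative water added by `year`, starting from one ounce and doubling."""
    periods = int((year - start_year) / DOUBLING_PERIOD_YEARS)
    total_ounces = 2 ** (periods + 1) - 1   # 1 + 2 + 4 + ... = 2^(n+1) - 1
    return total_ounces / OUNCES_PER_GALLON

for year in (1950, 1960, 1970, 2000, 2010, 2020, 2025):
    gallons = gallons_added_by(year)
    pct = 100 * gallons / LAKE_MICHIGAN_GALLONS
    print(f"{year}: {gallons:,.0f} gallons ({pct:.6f}% of the lake)")

Running it reproduces the pattern in the quote: roughly a gallon by 1950, a swimming pool's worth by 1970, still a negligible fraction of the lake in 2000, and then essentially the whole job finished in the final fifteen years.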

Like Drum's example, Bostrom uses the dynamics of exponential technological growth to demonstrate just how likely, and how serious, the rise of artificial intelligence could be.

For illustration, Bostrom considers how history would look if Earth had formed one year ago. On that scale, Homo sapiens evolved less than 12 minutes ago, agriculture began only one minute ago, the industrial revolution happened less than two seconds ago, the first electronic computer appeared only 0.4 seconds ago, and the internet arrived about 0.1 seconds ago.
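The same compression is simple to reproduce. The Python sketch below scales Earth's roughly 4.5-billion-year history down to one year; the event ages are my own approximate figures (not taken from Bostrom's talk), so the output only illustrates the arithmetic behind his numbers.

EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

events_years_ago = {                 # assumed approximate ages
    "Homo sapiens": 100_000,
    "agriculture": 10_000,
    "industrial revolution": 250,
    "first electronic computer": 70,
    "internet": 20,
}

for event, years_ago in events_years_ago.items():
    # Fraction of Earth's history, mapped onto a single 365.25-day year
    seconds_ago = years_ago / EARTH_AGE_YEARS * SECONDS_PER_YEAR
    if seconds_ago >= 60:
        print(f"{event}: about {seconds_ago / 60:.1f} minutes ago")
    else:
        print(f"{event}: about {seconds_ago:.1f} seconds ago")

With those assumed ages, the script lands close to the figures above: Homo sapiens under 12 minutes ago, the industrial revolution under two seconds, the computer around half a second, the internet around a tenth of a second.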

"It is hard looking at data series like this, not to get a sense that there is some kind of accelration, that one is moving towards some kind of discontinuity," says Bostrom.

Bostrom also points out that a superintelligent AI might help offset the risks posed by other factors, such as nanotechnology and bio-terrorism. For this reason, developing it as soon as possible might be worth the risk:

... a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.



SOURCE: VeerStichting Leiden, Mother Jones, Future of Humanity Institute


By 33rd Square

