Imagining the Singularity
Artificial intelligence is making dramatic inroads in science, advertising and politics, and of course in our social lives through the likes of Google, Amazon and Facebook. Machines are simply better than us in an increasing number of domains. Now, many researchers believe that this trend will continue and that superhuman computer intelligence is not very far away.
As computers and artificial intelligence grow in power and capability, it seems ever more likely that we're approaching the Singularity: the point where machine intelligence surpasses our own.
A recent CBC Radio piece produced by Jim Lebans explored these questions, and included interviews with well-known researchers and personalities in the field. Listen to the audio below.
Chris Eliasmith, Director of the Centre for Theoretical Neuroscience and Canada Research Chair in Theoretical Neuroscience at the University of Waterloo, discusses his work on SPAUN, an artificial intelligence system inspired by the neuronal connections of the brain. Eliasmith is also the co-founder of Applied Brain Research.
Doina Precup, an Associate Professor in the School of Computer Science at McGill University and head of the DeepMind lab in Montreal, also contributes to the discussion.
Nick Bostrom, Professor of Philosophy and Director of the Future of Humanity Institute at the University of Oxford, outlines the concepts put forth in his book, Superintelligence.
Bostrom suggests we often mischaracterize the threat of machine intelligence — imagining Terminator-like robots devoted to our extinction. In fact, the greater threat might come from machines that don't care about us, and that generate their own objectives and goals. Our extinction might be not a result of hostile intent, but an incidental consequence of machines pursuing their own goals without care or concern for humanity. "It could literally be the best thing that ever happened in all of human history if we get this right. And my main concern with this transition to the machine intelligence era is that we get that superintelligence done right, so it's aligned with human values. A failure to properly align this kind of artificial superintelligence could lead to human extinction or other radically undesirable outcomes," he states.
The science fiction writer and futurist Madeline Ashby adds her thoughts as well. Ashby is best known for her 2016 novel Company Town.
"We are fascinated with the idea of creating in our own image," she states. "The anxiety around creating an intelligent being, a thinking being, no matter what shape it may take, is the anxiety of creating of yourself, of having a child, of creating beyond yourself, and seeing your own flaws reflected back to you."
James McGrath, a Professor of Religion and Clarence L. Goodwin Chair in New Testament Language and Literature at Butler University in Indianapolis, points out that we have a long history of imagining our relationship with beings superior to us, and that many of these stories are cautionary tales.
McGrath imagines many more positive outcomes in our relationship with superintelligent machines, including the possibility that they might use their abilities to develop religious thought — and answer some of the great religious and theological questions that humanity has struggled with. They might even become new prophets or gurus guiding our thinking about existence.
One feature of the singularity that has attracted a lot of attention is the notion that we might become the machine superintelligence ourselves, by uploading our minds to computers. Robin Hanson, an economist and futurist, has explored this scenario in his book The Age of Em.
So what happens when computers transcend us? The notion of the singularity, popularized by inventor and futurist Ray Kurzweil, is that it will be such a profound transformation of our technology, economy and society, that it may well be impossible to imagine or anticipate. It would be like seeing beyond the event horizon of a black hole.
Computer scientists have recognized this problem, but face a difficult task as they try to develop superintelligence: We don't really know how to instill human values in a machine.
Doina Precup, a computer scientist from McGill University says that one strategy might be simply having them learn by watching us — like children do. But this means we need to be very careful about what kind of behavior we model for our machines, and makes our responsibility for them explicit. In a way these will be our children.
Another challenge might be that a computer superintelligence could be superhuman without being very human-like — making it challenging for us to relate to.