Sam Harris and Max Tegmark Discuss the Future of Intelligence
In a recent episode of the Waking Up podcast, Sam Harris spoke with Max Tegmark about his new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talked about the nature of intelligence, the risks of superhuman AI, a non-biological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI, and other topics.
Tegmark and Harris discuss the rise of superintelligence during the podcast. Tegmark argues that the main obstacle to human-level intelligence is no longer hardware but software. He draws an analogy to flight: the first mechanical bird was built roughly a hundred years after the first airplane, because engineers did not need to copy biology to get off the ground. Duplicating the intelligence of the human brain will follow a similar development path, Tegmark suggests. "I think people are a little bit stuck asking how much hardware do you need to exactly simulate a human brain, but that's the wrong question," Tegmark states. "The interesting question is rather how much hardware do you need to just accomplish the same intelligence that our brains do."
Since we humans were smart enough to create the intelligent computer systems we have now, an AI capable of doing the same would be enough to start the Singularity. The system only needs to be smart enough to improve its own design; from there, recursive exponential growth would lead to an intelligence explosion.
"Once it comes to station 'human' it is just going to blow right through and keep on going," he says.
Tegmark is a professor of physics at MIT and co-founder of the Future of Life Institute. He has been featured in dozens of science documentaries and is also the author of Our Mathematical Universe.