Recently I had the opportunity to listen to Max Tegmark talk about what it means to be human in the age of artificial intelligence (AI). Max's talk focused on the power, the steering, and the destination of AI. It was an inspiring but at the same time horrifying talk.
Max Tegmark is a Swedish-American physicist and cosmologist. He's a professor at the Massachusetts Institute of Technology (MIT) and an author known for his controversial theories about parallel universes. Furthermore, he's a co-founder of the Future of Life Institute, where he's trying to steer the development of AI in the right direction.
He opened his talk by claiming that intelligence is all about information processing and that we need to stop thinking of intelligence as something that can only exist in biological organisms. He then proceeded to talk about where we're heading: from "normal" AI to artificial general intelligence (AGI), and maybe even artificial superintelligence (ASI).
Reaching AGI within decades
While AGI is intelligence that could perform any intellectual task that a human being can, ASI is intelligence that would far surpass the brightest human minds. Many question whether we'll ever reach AGI and ASI. To Max, the real question is when, not if. And he's not alone in thinking this: many leading AI researchers think we'll probably get AGI within a couple of decades.
Photo: Massachusetts Institute of Technology.
Max continued by talking about the wisdom race: that we need to make sure we're wise enough, in time, to steer AI development in the right direction. He brought up examples of previous developments where we've learned from our mistakes. First came the cars, and only later did we develop seatbelts and airbags to make them safer. While this has worked before, there are many areas where it's safer to get it right from the beginning, such as nuclear power, synthetic biology, and AGI, where the consequences of getting it wrong could be too devastating. We need to plan ahead and be proactive, since we might only have one chance to get this right.
To win the wisdom race
The ways Max suggested to win the wisdom race included banning lethal autonomous weapons, ensuring that AI-generated wealth makes everyone better off, and investing in AI safety research. To demonstrate the potentially devastating consequences of not doing this now, he showed the Black Mirror-style dramatization video Slaughterbots. The video, now a year old, shows a near-future scenario where huge swarms of microdrones use facial recognition and AI to seek out and kill political opponents. Even though I've seen this video numerous times, it still gives me the creeps every time.
There are many questions that need to be answered. If (or when) ASI arrives, who should be in control? And how do we make sure that AI, as it becomes more competent, doesn't develop goals misaligned with ours? The truly scary part is that the amount of money that goes into AI safety research today is nowhere near what goes into developing AI.
I'm personally convinced that AGI and ASI could end up making the world a much better place for everyone. At the same time, I'm horrified by what might happen if this is done without considering and preparing for all the possible risks.
Empowered, not overpowered, by AI
Max concluded his talk by stating that we do have a choice: either we're complacent about our future, or we're ambitious and make sure we build AI that empowers, not overpowers, us.
...
If you still haven't bought tickets to From Business to Buttons 2019, now might be a good time. Chris Noessel, currently with IBM Watson, will be there to talk about his latest book and how to design AI-powered products and services. Tickets and information are available at the From Business to Buttons website.