Max Tegmark
@tegmark
Known as Mad Max for my unorthodox ideas and passion for adventure, my scientific interests range from artificial intelligence to the ultimate nature of reality
Joined May 2014
35 Following    149.1K Followers
Dan asked me some good tough questions here, giving me the chance to explain my optimistic AI vision:
Today my Trajectory episode with Prof Max @tegmark is live. This was recorded live during the AI Action Summit in Paris.

2 big takeaways, among many:
1. Military leadership seeing AGI as a threat may increase (not decrease) an arms race.
2. Wishing for posthuman futures TOO soon = ridiculous.

Let's just lay out a few of these points here:

1. The tipping point for coordination / military leadership.

Max talks about what might happen when the militaries in the US and China come to see AGI itself as a risk to both of them. He has faith that international coordination is possible, and that a combination of (a) raising awareness (like with the frameworks he shares in this episode) and (b) massive, scary growth in AGI capabilities could very well lead to an attractor state of coordination over conflict.

He believes it's important to make "the control problem" well known ahead of time, so that if/when an AGI disaster happens, it won't be seen as an attack from the enemy (which would accelerate an arms race), but as a shared danger for both nations.

He's also no blind optimist. He's very clear that the odds for international coordination may not be good, but even if those odds are slim, it's vastly better than having a million Unworthy Successor AGIs hurled into the world at once.

2. Posthuman futures - but only when we get them right.

Max's Life 3.0 is a pretty damn inspiring long-term look at AGI futures (albeit through an anthropocentric lens). He says in this interview that there might be a long-term future (he says 1MM years, which seems kind of hyperbolic and wholly unrealistic, but alas) where humans may not even want to control AGIs, and posthuman life should flourish.

But he rightly points out that:
a) We have no idea what we're even building now, so hurling posthuman life into the world is ridiculous, and
b) We should focus on understanding the tech and obtaining global coordination and human benefits as a near-term step.

While my timelines for when a posthuman transition will happen are vastly more near-term than Max's, the argument AGAINST "rushing to posthumanism" is one I wholly agree with.

...

What do you think of Max's takes? Anything else I should have asked him?

(Links in post below)