Max Tegmark
@tegmark
Known as Mad Max for my unorthodox ideas and passion for adventure, my scientific interests range from artificial intelligence to the ultimate nature of reality
35 Following    149.1K Followers
We can now stop arguing about whether AI can pass the Turing test:
It's over. ChatGPT 4.5 passes the Turing Test. https://t.co/KyTxYlbi2a
I'm deeply humbled by this rigorous proof that smarter-than-human machines are impossible. I was wrong about this, and promise never to worry about machines again: https://t.co/7v0ps1P1iw
Let's not let wonderful online censorship distract from our important obligation to ban books:
Indian authorities have cleansed Kashmir’s bookstores of impure words. A clear shelf is a clear mind. Soon, there will be so much more room for... nothing at all. The ultimate purity: total emptiness. https://t.co/clhJBeFgwM
As long as an AI company can copy all of our content into their model at no cost and spit out quasi-new content for close to no cost, there’s no logical business case for paying creators anymore. But let’s be crystal clear—this is about so much more than content creators. https://t.co/zI7WjbS22X
Geoffrey Hinton says RLHF is a pile of crap. He likens it to a paint job for a rusty car you want to sell. https://t.co/sSdUFC2lZk
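For readers who want the mechanics behind the metaphor, here is the standard RLHF fine-tuning objective in InstructGPT-style notation (general background, not something from the linked clip):

```latex
% Standard RLHF objective: maximize a learned reward model r_phi
% while a KL penalty keeps the tuned policy \pi_\theta close to
% the pretrained ("rusty car") policy \pi_{ref}.
\max_{\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot\mid x)}
\!\left[ r_\phi(x,y) \right]
\;-\;
\beta\,
\mathbb{E}_{x \sim \mathcal{D}}
\!\left[ D_{\mathrm{KL}}\!\big(\pi_\theta(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big) \right]
```

The KL term is what gives the metaphor its bite: the tuned model is explicitly constrained to stay close to the pretrained one, so training reshapes surface behavior rather than rebuilding what is underneath.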
Dan asked me some good tough questions here, giving me the chance to explain my optimistic AI vision:
Today my Trajectory episode with Prof Max @tegmark is live. This was recorded live during the AI Action Summit in Paris. 2 big takeaways, among many:

1. Military leadership seeing AGI as a threat may increase (not decrease) an arms race.
2. Wishing for posthuman futures TOO soon = ridiculous.

Let's just lay out a few of these points here:

1. The tipping point for coordination / military leadership. Max talks about what happens when the militaries in the US and China come to see AGI itself as a risk to both of them. He has faith that international coordination is possible, and that a combination of (a) raising awareness (like with the frameworks he shares in this episode) and (b) massive, scary growth in AGI capabilities could very well lead to an attractor state of coordination over conflict. He believes it's important to make "the control problem" well known ahead of time, so that if/when an AGI disaster happens, it won't be seen as an attack from the enemy (which would accelerate an arms race), but as a shared danger for both nations. He's also no blind optimist. He's very clear that the odds of international coordination may be long, but even if they're slim, that's vastly better than having a million Unworthy Successor AGIs hurled into the world at once.

2. Posthuman futures - but only when we get them right. Max's Life 3.0 is a pretty damn inspiring long-term look at AGI futures (albeit through an anthropocentric lens). He says in this interview that there might be a long-term future (he says 1MM years, which seems kind of hyperbolic and wholly unrealistic, but alas) where humans may not even want to control AGIs, and posthuman life should flourish. But he rightly points out that: a) we have no idea what we're even building now, so hurling posthuman life into the world is ridiculous, and b) we should focus on understanding the tech and obtaining global coordination and human benefits as a near-term step.

While my timelines for when a posthuman transition will happen are vastly more near-term than Max's, the argument AGAINST "rushing to posthumanism" is one I wholly agree with.

...

What do you think of Max's takes? Anything else I should have asked him? (Links in post below)
What if smarter-than-human machines treat us the way we treat animals?
The 6th Mass Extinction: What happened the last time a smarter species arrived?

To the animals, we devoured their planet for no reason. Earth was paperclipped...by us. To them, WE were Paperclip Maximizers. Our goals were beyond their understanding.

Here's a crazy stat: 96% of mammal biomass became 1) our food, or 2) our slaves. We literally grow them just to eat them, because we're smarter, and we like how they taste.

We also geoengineered the planet. We cut down forests, poisoned rivers, and polluted the air. Imagine telling a dumber species that you destroyed their habitat for "money". They'd say "what the hell is money?" AGIs may have goals that seem just as stupid to us ("why would an AGI destroy us to make paperclips??")

---

"But once AIs are smart enough, they'll magically become super moral, and they won't harm us like we harmed the animals"

Maybe! But as humans got smarter over the last 10,000 years, we didn't stop expanding - we mostly just colonized more and more of the planet. Insect populations collapsed 41% this decade alone, yet we don't care. Sit with that for a minute. Imagine if nearly half of the people on Earth suddenly died! That's what the insects are going through right now, due to us. What if we're the insects next?

---

"But some mammals survived!"

Yes, some. Most of them are in cages, waiting to be slaughtered and devoured. If you were a nonhuman animal, you likely: 1) went extinct, 2) were eaten (e.g. billions of pigs and chickens on factory farms), or 3) became enslaved (e.g. draft animals). However, a few of the 8 million species got "lucky" and became… pets.

---

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." - @ESYudkowsky

"The humans do not hate the other 8 million species, nor do they love them, but their habitats are made out of atoms which humans can use for something else."

Or as OpenAI Chief Scientist Ilya Sutskever said: "[After AGI] It's pretty likely that the entire surface of the Earth will be covered with solar panels and data centers."

"A good analogy would be the way humans treat animals - when the time comes to build a highway between two cities, we are not asking the animals for permission."

"I would not underestimate the difficulty of alignment of [AI systems] that are actually smarter than us."

---

Sam Altman: "We will be the first species ever to design our own descendants"

"If two different species both want the same thing and only one can have it - to be the dominant species on the planet and beyond - they are going to have conflict."

"We are in the process of seeing a new species grow up around us." - Mustafa Suleyman, co-founder of DeepMind and CEO of Microsoft AI

---

Will the next superintelligent species cause the 7th Mass Extinction? I don't know, but we are playing with fire.
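The 96% figure lines up with the mammal biomass census of Bar-On, Phillips & Milo (PNAS 2018). A quick back-of-the-envelope check, with that census's rounded estimates hard-coded as assumptions:

```python
# Sanity check of the "96%" stat using the rounded mammal-biomass
# shares estimated by Bar-On, Phillips & Milo (PNAS 2018).
# These values are assumptions copied from that census, not new data.
share = {
    "livestock": 0.60,    # farmed mammals: food and draft animals
    "humans": 0.36,
    "wild_mammals": 0.04,
}

# Share of all mammal biomass that is humans plus their livestock.
humans_plus_livestock = share["humans"] + share["livestock"]
print(f"humans + livestock: {humans_plus_livestock:.0%}")  # ~96%

# Of the *non-human* mammals, the fraction living as livestock.
livestock_of_nonhuman = share["livestock"] / (share["livestock"] + share["wild_mammals"])
print(f"livestock share of non-human mammals: {livestock_of_nonhuman:.0%}")  # ~94%
```

Either reading (share of all mammals, or share of the non-human ones) lands within a couple of percent of the tweet's number.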
Bold and promising initiative for preventing uncontrollable superintelligence so that awesome future AI tools can be enjoyed by *humans*:
Superintelligence threatens us all. But we can turn the tide. Directly engaging institutions is the obvious, straightforward path. We've done it, now it's time to scale. We're releasing the Direct Institutional Plan (DIP) so everyone can help keep humanity in control. https://t.co/MeqOSk9BCY
Pols 2017: "#slaughterbots are sci-fi"
Pols 2025: "#slaughterbots are here, but will never be used by terrorists as WMDs"
The best defense against bioweapons isn’t bioweapons & the best defense against slaughterbots isn’t slaughterbots. I wish I’d been wrong about the need for a treaty, but it’s not too late, and momentum is building:
Nice unopinionated explainer: What's an intelligence explosion? How soon might we get one, and what will follow?
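A toy model that motivates the word "explosion" (my illustration, not taken from the linked explainer): if capability feeds back into its own rate of improvement superlinearly, growth doesn't just get fast, it diverges in finite time.

```latex
% Toy recursive self-improvement model (illustrative only).
% Let capability I grow at a rate that increases with I itself:
\frac{dI}{dt} = k\,I^{1+\epsilon}, \qquad \epsilon > 0 .
% Separating variables and integrating gives
I(t) = I_0 \left(1 - \epsilon\, k\, I_0^{\epsilon}\, t\right)^{-1/\epsilon},
\qquad
t_* = \frac{1}{\epsilon\, k\, I_0^{\epsilon}},
% so I(t) blows up at the finite time t_*. With \epsilon = 0 the
% same equation yields only ordinary exponential growth.
```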
If you're passionate about creating accurate and compelling content about AI risk, you can apply for financial support here:
💻 Presenting: our new Digital Media Accelerator! 🎉
🚨 Big Tech are racing to build more and more powerful AI, despite experts urging caution - and while public understanding and awareness remain limited.
🎥 This Accelerator program aims to support content creators who want to bring accessible, engaging content about AI risk and safety to new audiences - from podcasts, to newsletters, TikTok channels, YouTube series, and beyond.
🔗 Learn more at the link in the replies:
Happy equinox! 😃 https://t.co/Ss02MCQiBx
The promise: AI does the boring stuff and we do the smart stuff. How it's going: we still clean the kitchen, while AI does the smart stuff and makes us dumber:
Rejoice! AI relieves us of the burden of thought. Critical thinking fades, but so does effort. A future of blissful automation awaits—where even questioning its wisdom is a task best left… undone. https://t.co/ZKSVtKFWb6
OK, Andrew: I'll bet you $1000 that there will be fewer US programming jobs a year from now than today. Deal?
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct [...] than that it will become all-powerful. More and more, computers will program themselves.”​ Statements discouraging people from learning to code are harmful!

In the 1960s, when programming moved from punchcards (where a programmer had to laboriously make holes in physical cards to write code character by character) to keyboards with terminals, programming became easier. And that made it a better time than before to begin programming. Yet it was in this era that Nobel laureate Herb Simon wrote the words quoted in the first paragraph. Today’s arguments not to learn to code continue to echo his comment.

As coding becomes easier, more people should code, not fewer! Over the past few decades, as programming has moved from assembly language to higher-level languages like C, from desktop to cloud, from raw text editors to IDEs to AI-assisted coding where sometimes one barely even looks at the generated code (which some coders recently started to call vibe coding), it is getting easier with each step.

I wrote previously that I see tech-savvy people coordinating AI tools to move toward being 10x professionals — individuals who have 10 times the impact of the average person in their field. I am increasingly convinced that the best way for many people to accomplish this is not to be just consumers of AI applications, but to learn enough coding to use AI-assisted coding tools effectively.

One question I’m asked most often is what someone should do who is worried about job displacement by AI. My answer is: Learn about AI and take control of it, because one of the most important skills in the future will be the ability to tell a computer exactly what you want, so it can do that for you. Coding (or getting AI to code for you) is a great way to do that.

When I was working on the course Generative AI for Everyone and needed to generate AI artwork for the background images, I worked with a collaborator who had studied art history and knew the language of art. He prompted Midjourney with terminology based on the historical style, palette, artist inspiration and so on — using the language of art — to get the result he wanted. I didn’t know this language, and my paltry attempts at prompting could not deliver as effective a result.

Similarly, scientists, analysts, marketers, recruiters, and people of a wide range of professions who understand the language of software through their knowledge of coding can tell an LLM or an AI-enabled IDE what they want much more precisely, and get much better results. As these tools continue to make coding easier, this is the best time yet to learn to code, to learn the language of software, and to learn to make computers do exactly what you want them to do.

[Original text: https://t.co/HdI3Jb9HmF ]
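A tiny sketch of Ng's "language of software" point. The `generate_code` function below is a hypothetical stand-in for whatever LLM or AI-enabled IDE you use (not a real library API); the two prompts illustrate how coding vocabulary changes what you can ask for.

```python
# Minimal sketch: the same AI coding tool, prompted by someone who
# does vs. doesn't speak the language of software.
# `generate_code` is a hypothetical placeholder, not a real API.

def generate_code(prompt: str) -> str:
    """Placeholder for an LLM-backed code generator."""
    raise NotImplementedError("wire up your own model here")

# A non-coder's prompt: the model must guess almost everything.
vague = "make my data faster"

# A prompt from someone who knows the vocabulary of software:
# it names the data structure, the complexity issue, the interface,
# and the desired policy.
precise = (
    "Replace the O(n) list lookups in `UserCache` with a dict "
    "keyed by user_id, keep the public get/put interface, and "
    "add an LRU eviction policy capped at 10_000 entries."
)

# Same tool, very different odds of a useful result:
# generate_code(vague)    -> plausible-looking but unfocused code
# generate_code(precise)  -> a targeted, reviewable change
```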
I recommend reading this paper ⬇️ It makes an evidence-based case that, without substantial effort to prevent it, AGIs trained like today’s top models could learn misaligned goals, hide them from their developers, and pursue them by seeking power.
Great new video about how to keep the future human without giving up on AI benefits:
📺 From @Siliconvos: "We are on the cusp of creating artificial general intelligence (AGI), even though the corporations building this technology admit they don't know how to control it." Watch their video on @AnthonyNAguirre's new essay about how we can "Keep the Future Human": https://t.co/38yVloM6ca
Interesting tough love from @ESYudkowsky to everyone who thinks they probably have a workable plan for solving the AI control problem to prevent ASI from killing us all soon:
A list from @ESYudkowsky of reasons AGI appears likely to cause an existential catastrophe, and reasons why he thinks the current research community — MIRI included — isn't succeeding at preventing this from happening. https://t.co/hW4LRIAZuD
Big Brother is on a roll:
Clearview AI stole your face, sold it, and now asks your consent—so they can keep selling it. Accept, and you may get pennies. Decline, and your data was stolen in vain. Big Brother expects compliance. https://t.co/cFjO9U3Ebf
Show more
0
5
38
13
Let's keep the future human! This is a very thought-provoking video:
With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.

In "Keep The Future Human", FLI Executive Director @AnthonyNAguirre explains why we must close the 'gates' to AGI - and instead develop beneficial, safe Tool AI built to serve us, not replace us.

We're at a crossroads: continue down this dangerous path, or choose a future where AI enhances human potential, rather than threatening it.

🔗 Read Anthony's full "Keep The Future Human" essay - or explore the interactive summary - at the link in the replies: