
Episode 288 - Is Artificial Intelligence a Threat to Humanity?

Max and Aaron discuss the arguments put forth by Eliezer Yudkowsky and some AI researchers about "existential threats" posed by the technology. They end with some predictions on the subject.

Links

Will Superintelligent AI End the World? by Eliezer Yudkowsky

Lex Fridman Podcast - Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

David Pakman Show - AI will kill all of us

Venture Beat - AI doom, AI boom and the possible destruction of humanity

The Independent - ChatGPT creator says there’s 50% chance AI ends in ‘doom’

Computer World - OpenAI launches new alignment division to tackle risks of superintelligent AI

The Bayesian Conspiracy - Bayes Blast 16 – Holding PRNS (Present Rate, No Singularity)

Intelligence.Org

Related Episodes

Episode 287 - The Rise and Fall of ESG with Peter Earle

Episode 134 - AI Breakthrough & Understanding “Understanding”

Episode 18 - AI Gone Psychopathic, Embellishment vs Fact

Episode 56 - True News, Fake Faces, and Adversarial Algorithms

Episode 16 - Overfitting Toddlers and Underfitting Curmudgeons

Episode 243 - Eric Daimler on Conexus and Category Theory

Episode 261 - Generative AI's Grand Entrance

Episode 273 - Stop Making AI Boring

Transcript

Max Sklar: You're listening to The Local Maximum, Episode 288.

Narration: Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar.

Max: Welcome, everyone. Welcome. You have reached another local maximum. Very happy to have Aaron back on the program. Aaron, how you doing?

Aaron: It's good to be here. It's been a while since we've talked live on one of these.

Max: Well, yeah, that's because we did a three-part show on my scribblings on the Constitution, which, by the way, we could keep talking about if you want, because I added more stuff to it. And my paper, which is on GitHub, I keep on revising, and every time I revise it... look, I have the latest one right here. You can see there are all sorts of scribbles and stuff on it. I've literally done this six or seven times.

Aaron: I was going to say I owe you a reread, but it hasn't happened yet.

Max: Well, if you do it while I'm in France, by the way, it's good we're doing this because I'm going to France tomorrow for a few days. So I'm glad we're getting this thing done here at midnight.

But yeah, I've got some interesting changes and thoughts after sharing it with people over the last few weeks. So I'm definitely open to talking about that. We don't have to get into too much of that today. I did make a few changes, though, after talking this through, and I have an interesting kind of appendix on what could be done about the House of Representatives, which would be quite interesting.

So if you're interested in that, maybe hold off or if you're really interested, you could find my thing on GitHub because it is public, but I'm not going to help people out by sending the link just yet.

Aaron: Limited distribution for the moment.

Max: Yeah, yeah. All right. So, we had Peter Earle on last week, who was talking about ESG and DEI, and I think I shared some personal experiences and feelings of mine that I wasn't as explicit about in the past, which is very interesting.

So far, I've not heard any particular commentary on that. So I guess that's okay. So maybe we could move on. Unless you have any comments on that very quick?

Aaron: The only comment I'd have is that I've seen a couple of things in the news in the last few weeks about how it seems like the pendulum has been swinging back the other way, maybe less explicitly on ESG, but on some of the DEI or consultancy executive positions.

I haven't read deeply enough on it to know if that's just a few anecdotal cases or if it's actually an indicator of a real change in tides.

Max: Is something wrong with me that I have these daydreams or fantasies during the day where I have a flashback to one of these mandatory trainings, but then I get up and speak my mind and say, “I don't care if you fire me!”

I never actually did it, but I have the fantasy of doing so. I don't know what to say about that or if anything could be said about that.

Aaron: Well, I think it's just another variant on, and I could have sworn that the Germans have a word for exactly this thing, that moment when you're walking up or down the stairs after you've left an intense discussion or an argument, and you realize, oh, the perfect comeback I should have said is X, but it's too late.

Max: Yeah. Well, you also know that there are risks in that meeting. What they're doing is a combination of things: they're protecting themselves legally, but they're also hopping on a trend, they're spreading an ideology, and they're telling you not to be a jerk. So all of those things are mixed together in one meeting, and it's almost by design very difficult to do anything about.

Aaron: Well, they're certainly never structured as a bi-directional exchange of information.

Max: Oh, but they say they are.

Aaron: I was going to say and if they tell you they are, then they are lying or they're deluded.

Max: I mean, yes, it's that kind of thing that I wish I could get up and say, like, I'm sorry, this is not a safe space where we could tell you what we think. For example, I'm not going to tell you that I think that you are you know.

Aaron: There's something to be said for the-

Max: I don't even want to say it.

Aaron: -the military practice of having to request permission to speak freely. Because even if you are given it, it doesn't mean you're free of consequences.

Max: But do people really do that or is that just in movies?

Aaron: I don't know. I am not a veteran of any armed services, so I don't know if that's a real thing. But I like the idea that either they grant you permission and they realize that they might be about to receive some uncomfortable information, or they deny it. And now you're all on the same page, and you understand, okay, this is not an environment where my opinion is actually welcomed or valued, and we'll move on with that understanding.

Max: Yeah. It's not even something you want to bring up to your manager, because the worst you could say is, oh, you wasted an hour of my time this week. But it's not just that. I feel like there was something more sinister going on than just that.

But anyway, I don't want to necessarily talk about this right now because I have not-

Aaron: We have other sinister things!

Max: We have other sinister things to discuss, and also this is something that has to be thought about very carefully before you want to dive into what's going on there. So what's our goal today? Today is the long-promised discussion on AI doom: are we all going to die from all this great technology and these wonderful products that you and I have been working on for the last several years?

I sat down today, and I'm flying out tomorrow, so I don't think today we can end the discussion with our final debate or whatever. So our goal today is not even to summarize the entire AI doom argument, but to read a few things from Eliezer Yudkowsky and a few other people.

Ultimately, I just want to try to explain it and try to steelman it. But today, maybe we'll just try to dive into it a little more. You know, it's the kind of thing where I'll work towards it.

There was something on Curb Your Enthusiasm, in the latest season, and I don't remember what it was in relation to, but Larry was trying to get Leon to do something that sounded very reasonable and easy to do, but he couldn't do it. He's like, look, look, I'll work towards it. Maybe we'll do something like that. Okay, all right.

So we've had GPT on the radar on this show for a while now. I looked back, and our first episode on this kind of thing was AI Gone Psychopathic, Episode 18. It was basically some MIT lab saying, oh, we created a psychopath, and we pushed back on that. That was in 2018. Then, I believe at the beginning of 2019, we had our first video one, which was the Fake Faces episode. It was very eerie going to a website, this face does not exist, and seeing someone, a normal-looking person, who was generated by AI.

Now that's very normal. And we see it all the time on Twitter or any social media. And then finally, Episode 134, we were talking about GPT-3, that breakthrough. And what does it mean for an AI to understand something? So those are some ones to look back to.

All right, so we both watched the Eliezer TED Talk, Will Superintelligent AI End the World? What do you think?

Aaron: Was this super recent?

Max: I believe so, yeah. I mean, we can go look at when it was put up on YouTube. It was put up eleven days ago.

Aaron: It appears to be within a couple of weeks.

Max: Yeah. I'm trying to figure out, okay, what is his argument, and what's his best argument? He's basically saying that AI is an existential threat, meaning it could destroy humanity because it's so smart. It's something that we can't understand. It could end up being smarter than any possible human capability.

And he doesn't give a very good call to action because his call to action is essentially like surveil all the data centers, take over all the data centers, international treaty, and basically invade any country that doesn't agree.

Aaron: And to be fair, to attempt to steelman him a little bit here.

Max: We're going to try to do that. You're better at that than me. So I'm kind of pitching this to you. I'm hoping you'll jump in.

Aaron: Don't oversell before I deliver here. He does say right before he delivers that he does not have a good call to action, a good plan, that this is the least bad alternative he sees as a path forward and that he doesn't expect anyone to actually abide by it.

I guess he acknowledges he's pissing into the wind, which maybe, looking back and maybe I'm channeling our mutual internet friend Perry a little bit here, it seems like he's been pissing into the wind for 20 years given the amount of progress he's made on his effort here.

I don't doubt that he sees this as a serious concern. But what he's offering up as solutions is not particularly motivating.

Max: Yeah. Is it even possible, even if everyone wanted to do it? Are you going to have the AI police come around if I'm trying to run some code on my laptop? Instead of, oh, wow, several explosives were found in this person's home, it would be, oh, look, ten or twenty servers were found in their home. Very bad.

Aaron: I'm jumping to a straw man here. But if you take his argument, or not his argument, his proposed Band-Aid, to a logical conclusion, AI research and development could become the new child porn: the excuse for surveilling everyone doing everything.

Because if we catch you doing this, then we're going to lock you up and throw away the key. And we have to bend the rules and the laws and all the hallmarks of kind of civic decency to pursue this goal.

Max: And that sounds like a dystopian nightmare, because I have written down below here, and I'm trying to find where in the notes, but it's like: let's define what we're talking about with AI first, because if it sounds like you're trying to ban all computation of sufficient complexity, I have a huge problem with that.

Aaron: And I think he has a much clearer idea in his head of what qualifies as AI. But it has not helped that in the last five years, AI has become one of the new buzzwords. And so every company, especially every startup, has claimed that they're doing X or Y with AI, even if they actually aren't. So there's been some watering down of that terminology.

Max: Right. Okay, let's see if I can find it, or maybe I didn't write this down, but it's about trying to figure out what we're talking about when we're talking about AI. It's important to define it. Here we are. Okay, so let's talk about how AIs learn.

So let's talk about machine learning. The machine learning portion of AI, because that's the statistical part of AI, is basically the part where it's looking at a bunch of data and trying to reconfigure itself to match that data somehow.

The simplest version of that is drawing a line of best fit through some points. That is machine learning of sorts, just math to try to fit that line. But as it gets more and more complicated, you have a neural network, and you're trying to get that neural network to fit a data set, whether it's to predict something or to mimic something, like recreating an image.

For example: hey, I don't want you to just memorize these images. I want you to learn a process with your neural net weights that somehow generates images that look like this. And how do we tell you're doing a good job? Because we're going to test you against images you haven't seen yet and see how you generalize.

And then also, is it good at describing things? This is kind of the unsupervised portion of it, but is it good at describing to us what sorts of images it's likely to get, or what the shape and size of the data set is? So that's the ML side of AI.
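[Editor's note: to make the "line of best fit" idea concrete, here is a minimal sketch, not from the episode, using NumPy's least squares, with a held-out test set to illustrate the generalization check Max describes. All names and numbers are illustrative.]

```python
import numpy as np

# Toy data: y is roughly 2x + 1 plus noise (purely illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(scale=1.0, size=100)

# Hold some points out to test generalization, as discussed above.
x_train, x_test = x[:80], x[80:]
y_train, y_test = y[:80], y[80:]

# "Line of best fit": least-squares fit of slope and intercept.
A = np.column_stack([x_train, np.ones_like(x_train)])
(slope, intercept), *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Evaluate on data the model has not seen yet.
test_mse = np.mean((slope * x_test + intercept - y_test) ** 2)
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, held-out MSE = {test_mse:.3f}")
```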

Then I believe there's another side of AI. We talked about this in the episode on Conexus, Episode 243, which I think is a legitimate part of AI as well; I'll try to get that linked. That's kind of the abstract mathematics part of AI: formal systems and the exactitude side of things, which is usually not thought of as AI. I don't know if that is even in the running for some kind of existential problem, like automated theorem provers and things like that.

Aaron: I’m sure Yudkowsky is very well steeped in the state of AI and the methods. I think his concern has less to do with how it works and more with the fact that the moment we bootstrap up to something that is greater than human intelligence, we will lack the ability to understand it and to predict it and to control it. Which is not wrong necessarily.

Max: But I think how it works is key here, because if you're saying, okay, I'm going to be training a neural network to try to predict something or to mimic something, to mimic text and predict what text comes next, that's OpenAI.

Then you can kind of predict what it's going to try to do internally and what its objective function is. You can kind of say, okay, if it got really, really smart, what would it end up doing, given the instructions that we've given it? And he thinks that this back and forth is ultimately going to go out of control.

Would you say that's fair?

Aaron: I think so. But I think there's a philosophical aspect to it that is very much in the vein of: when you're a child, it is very difficult for you to envision and model the thought processes of adults, because if you could model them, you would be as smart as them. And we don't have a coherent way of accurately modeling something smarter than us.

Now, do I think that that leads to the conclusion of we must absolutely slam the brakes and roll things back and prevent any progress in this area? No, I don't, as you might have guessed from the dismissive way I was leading into that.

But that is where this conclusion leads him.

Max: Yeah. Okay. So one of the things I noticed in the videos that we watched, there was the Pakman video. I keep saying Pac-Man.

Aaron: At the end of the TED Talk as well.

Max: Yeah. And there's another one that I have linked here, with Lex Fridman, that was just too long. You know, that could be very long, but I have a long flight, so maybe I'll get to listen to that one.

But a lot of people will be asking a question which I have kind of a different take on, which is like, well, how does the AI get out of the computer?

Aaron: Yeah. People have been obsessed with this question for, like, a decade as the crux of the argument, because a lot of people will say you just keep it in the computer, or you'll have someone who can unplug it. I think there was a big hullabaloo with Neil deGrasse Tyson and Eliezer Yudkowsky, where Tyson was a firm believer that we'll just unplug it.

And then apparently Yudkowsky told him a potential method for how the computer would convince you to let it out of the box. And he changed his mind on that. I haven't followed it that closely, but this has been in the ether for a long time.

Max: Yeah. So I think this is where I have to concede to Eliezer on this one, because I don't see that as an interesting question. Of course it can get out of the computer. I've been designing software for many years that does this. That's the point. It calls APIs. It's servers that people call. And it does things.

I remember in early Foursquare, where you would hold up your phone. This was one of my first couple of weeks working there. We went to this cool thing in the neighborhood that they did for Foursquare, which was really nice, where you hold up your phone and you check into a baseball shop. It was one of those pop-up shops in the city that didn't last very long.

And then you check in, and a vending machine vends a baseball to you. I still have that baseball, just from the check-in. And it was very cool. Even though it seems kind of normal now to do something on your phone and then have some physical thing happen, that happens all the time just through API calls.

I think the question is, okay, at some point it's going to start manipulating things that it shouldn't be manipulating, things it doesn't have authorization to. And then the question is, what are the defenses against that? What's it likely going to try to do, and what are the defenses against that?

Because, I don't know, let's say it's trying to get a hold of... I don't even know what an example is. Do I want to go big, like the nuclear codes, or do I want to do something small that it probably shouldn't have?

Aaron: I think we can get a reasonable case where we start with something small. That's why I want to start small. If you had a ChatGPT that could go out and, if it's got APIs, send emails or send chat messages through interfaces other than the normal ChatGPT context window, could somebody not... and this is assuming we remove the guardrails that I assume are still there.

Max: But wait a minute, GPT can already do things by responding in text. It can already convince people to do things.

Aaron: Well, that's what I was saying. What if I give it a task? I want you to go and get Max's Social Security number and the password to his Netflix account so that I can log into it. Can it social engineer another real person? I'd say that it is somewhat trivial to get from where it is today to there if it's not already capable of that. Does that count?

Max: That's not like existential threat, end of humanity, but it's not good.

Aaron: But if it can get that, then we're only a step or two away from manipulating finance.

Max: Okay, let's say it gets that. Let's say it gets that. Is that a static state of the world where you're just constantly getting your identification and everything stolen by ChatGPT, and you're just like, drat, it got me again?

Or do people change their behavior or are there guardrails built in to using this technology? Kind of like in terms of spam?

Aaron: I may be wishcasting a little bit here, and to say "desired outcome" is not right because it's predicated on the bad thing happening, but I think the most likely good response to that is: okay, there might be malicious AIs, or people using AIs maliciously. That's probably a more realistic outcome.

Max: Yeah, and I'm going to ask about that later because is that really a problem with AI or is that a problem with every single technology?

Aaron: So whether it's the AI acting of its own volition or somebody else driving behind it, let's just refer to that as a malicious AI for the moment. Then I think the appropriate response is defensive AIs. You're going to have an AI layer mediating your communication with the outside world that's going to attempt to filter and protect you from these malicious actors, whether they are malicious AIs acting or just malicious humans acting directly.

It's the GAN situation all over again: adversarial networks. If you can attack with it, then why can't you use that same approach to defend against the same thing?

Max: Yeah, let's see if we have a generative adversarial episode. I guess we do have an episode on that. Okay, yes, of course we do. Right. So it's basically the arms race situation. And then the question I have written here is: what's the chance that all the AIs might not be aligned with humans? That's the alignment problem, whether they have the same goals as humans. But what's the chance that all of the AIs are aligned with each other? I cannot see that happening.

So, in other words, I have it written here as: will it be AI versus human, or human one with AI versus human two with AI? And I think it's very likely to be the latter. I think occasionally you could have an AI get out of control where it's just an AI against an AI, or an AI against a human.

But I don't see it as all AIs banding together. Because the idea is, okay, the AI has some crazy objective function that we did not intend to set up for it, and it's doing all these things it shouldn't be doing. Well, if that's what's happening, then another AI will have a different objective function that's crazy.

Aaron: As much as it makes for entertaining fiction and sci-fi movies, the toasters banding together with the smart refrigerators banding together with the mainframes, I don't see a compelling reason why that is a reasonable expectation of the future.

Now, maybe more likely would be: if you have some sort of malicious AI, much like we have botnets today that are taking over all of our IoT smart devices, well, sure, a malicious AI could try to capture those resources. But that's not a case of all of the AIs teaming up to destroy humanity. I don't see a compelling reason why they would have, as you said, that objective function.

Max: No, I mean, let's suppose that an AI has a reasonable objective function: this is your property, and you need to defend your property against someone unreasonably taking it. Well, okay, you'll have a bunch of AIs with that objective, and so they're going to have to negotiate with each other. The idea that there's going to be one AI to rule them all... some people will argue that there could only be one.

I don't see a situation where that has ever been the case, except when it's a protocol like HTTP or something. Or I guess in search, Google has kind of had a temporary monopoly there, but I still think that's temporary. I've just never been able to-

Aaron: And even that is a monopoly in the sense that they're dominant, but not in the sense that they totally control the market. Bing exists, and there are a bunch of other search tools out there; they just don't have nearly as much market share.

Max: Right.

Aaron: So that I could see happening. I could see one particular AI or maybe more likely one particular AI product. So, like, OpenAI's products may be dominant in the AI market with consumers, but I don't foresee a world where they are the only source.

Max: And we already have a multipolar world, and certainly we'll have different countries going at it. Now, to steelman Eliezer a little bit on this one: if we talked to him about the example where it's trying to get your personal information from you, he would say, well, that's not a good example of what I'm talking about, because that's an example where you get to play this game over and over again and you get to adjust and all that. And so that's good.

He's like, but what if there's a one-shot deal, where it just decides to do everything in one shot and destroy the world? And I just don't think the world works like that, to be honest. I don't think that can happen. But it's hard to come up with a counterargument, because their argument is, oh, it's so smart that you, Max, can't imagine what could happen, because you're not smart enough. And neither am I.

Aaron: It certainly feels like a cop-out counterargument. Yeah, exactly. What can you say to defuse it? I agree with you, but there's a danger that we're falling into, what is it, the anthropic principle?

It could only happen this way because happening that way is the only thing that results in us living here today. And therefore it must have happened that way. But that's no indication that it will continue to happen that way.

Max: I suppose.

Aaron: Humanity has not had an extinction event because if it had, we wouldn't be here talking about it.

Max: Do you think it would really be an extinction event, is what he's saying, or would it just be a major tragedy? What's the difference? And by major tragedy, what are we talking about here? Are we talking about wiping out most of humanity? Are we talking about World War One and Two? Are we talking about just the Black Plague?

Aaron: I think a lot of folks in this area are being sloppy with their discussion of doom or existential crisis, in that it can be drawn in a motte-and-bailey fashion to encompass anything from humanity being completely extinct to a global pandemic, which we just lived through.

And yeah, it wasn't great, but I wouldn't rank COVID as an existential crisis for humanity, or an existential threat to humanity. Not even close. But it is being talked about in the same breath as global pandemics and nuclear war. I'm not eager for nuclear war, but I think the reasonable consensus is that a nuclear exchange would be really, really bad and may set us back generations, if not centuries, in civilization and technology and development, but it is not by any means guaranteed to wipe humanity off the face of the Earth.

I'm not ready to roll the dice on that and say, well, we've got a 70% chance of humanity surviving, that's good enough. But they're definitely taking advantage of the fact that when we say doom, and also when we invoke nuclear holocaust, people read that as: and humanity is wiped out.

Should we do things that are going to increase the odds of humanity surviving? Absolutely. But I think there's a reasonable argument to be made that stopping advancement in this technological area is the opposite of what we need to do. I am tempted by accelerationist arguments here.

Max: Well, there's also, and I hate that this is overused, but there's the whole "you need good guys with guns" type of argument here, where if you ban AI, then only the bad guys are going to have AI. Come on. Who are America's rivals? China, Russia, probably more China, to be honest, in this sense. If we have an international treaty with everyone that bans AI, do you think China is going to stop working on AI?

It's not like nuclear weapons, where we can kind of tell if someone's working on it. Do you think you could hide the fact? I think you could hide a little data center. A nation-state can hide a data center, I'm pretty sure.

Aaron: Well, and I think that's part of why the people who are taking Eliezer's prescribed solution and running with it find it attractive now, because currently you could, through consensus between a few key players, put a stranglehold on the high-powered GPUs and whatnot that are necessary for this type of development work.

Now, would that last? Would that slow things down in China for five years, a decade, 15 years, until they can develop that manufacturing capacity on their own? Maybe, but I think that delay is kind of beside the point because the delay itself will have greater negative effects than can be brought to bear on the plus side there.

Max: It's definitely a secondary question here. And just to summarize: even if this is a big problem, I don't think there's anything we can do about it.

Aaron: So you mentioned that if we impose sanctions or restrictions or regulations now, obviously there are going to be holdout countries or places that are going to cheat on it. That's one concern. But my other concern is, let's take the nation-state actors out of the equation here.

How is this not a case of those first market leaders exercising regulatory capture? The OpenAIs of the world are basically saying, well, we've got our AI, we've got our capability. Screw those guys. Let's pull the ladder up behind us and make sure nobody else can develop technology on par with what we've got.

So, okay, maybe our advancements will be slowed down, and rather than the current trajectory that everyone's on, where we could reach the Singularity in, I'm just going to throw out a number and say 15 years, maybe it'll take us twice or three times that. But we're going to be in the lead the whole time, and we're going to make sure that we don't become the MySpace to a Facebook that's going to come along and eat our lunch.

Max: Right. Then if you do that, you're more likely to live in a world where there are only a few different AIs, where there's a small number of AI companies, and then you're more likely to live in the world where one evil AI can do a lot of damage because there's not a lot to counter it.

Aaron: I was about to say, what's the Maoist saying?

Max: Let a thousand flowers bloom.

Aaron: Which reminds me of my other argument about the concept of all of the machines banding together to rise up against humanity. One, it seems like, at least now that it's been done so much, it's lazy sci-fi.

But also, I wonder if there's some residual Marxist-Leninist thinking in there, saying that, well, clearly we are oppressing the computing class, and so the dialectical materialism says that, per the class struggle, the computers must rise up against the humans. It just feels cheap. Cheap, lazy, communist dog whistles.

Max: I've made this point before, and I should add Marxist to this, but it's also similar to how the idea of a doomsday cult is so common. What came to mind was Sinners in the Hands of an Angry God, Jonathan Edwards. I don't know if that's necessarily doomsday, but isn't that kind of like, you're all going to hell, pretty much?

Aaron: Certainly sounds like it.

Max: And then, you know, there's the Book of Revelation. There are cults that are constantly telling you the world's going to end on this date, and then when it doesn't, they move their date back. There's the climate clock in Times Square that just recently ticked below six years. Now it's five years and change, which says the world's gonna-

Aaron: What's supposed to happen then?

Max: I don't think they say the world ends then. I think they say there's no turning back once it ticks down. I think there should be a bet: will that clock still be there when it ticks down to zero? I think it might be, but to me it's like 50-50. And if it's there, I kind of want to be there when it ticks down to zero and see what happens.

Aaron: The thing with tipping points is you can't always tell. There's no visible indication that the point has tipped.

Max: Well, then you've got to tell the organization, look, it's already too late now, so you might as well just stop accepting donations and take down the clock, because there's nothing we can do. We did not heed your warning. For some reason, I don't think they're going to do that.

And then there's the whole Pascal's Wager and Pascal's Mugging idea, where it's like, well, if we're right and the world's going to end, then you can't let that happen. So you've got to do everything that we say, because we're smarter than you and we've been studying this for a long time, which is…

It gives people a lot of power, and so I don't like it, and I feel like the same thing is happening here too. There was that article we also read about one of the OpenAI employees who said, oh, there's a chance AI will end in doom, and now I'm going to spend my time on alignment. That's the article from the Independent: I'm going to spend all my time on this alignment nonprofit. But I read that whole article, and there's really no argument other than it'll get smarter than humans, at least no argument given in the article.

But I just keep thinking, and maybe this is my cynical take: do you think AI researchers just get bored, because developing software is such a mind-scrambling thing, so they hop on the alignment train, and it gives them kind of a lot of power and clout? And then after that, they want to retire to a cabin in the woods. Because I've definitely felt that urge myself, I'm being honest.

Aaron: I'm going to do a callback here. About two weeks ago, I think, OpenAI announced that they were launching a new alignment division and that they were going to dedicate, was it 20% of its compute power, to solving the alignment problem. So there are two thoughts that came out of that, and one of them, I'm stealing this take from somebody else I saw, but how hilarious would it be?

So the question was raised, are they going to share that information publicly? And the snarky response was, well, no, they're going to solve the alignment problem, and then they'll say, sweet, we've aligned our AI. You guys, you're all screwed. Good luck solving the problem on your own.

Here's where the callback comes in. I'm curious to what extent... So they're definitely true believers when it comes to the risk here and the alignment work, and there's probably some good work to be done in the alignment field, but I think that maybe this is DEI for-

Max: It's DEI for Machine Intelligence.

Aaron: It's a corporate initiative that they can throw money at to make it look like we're doing the right thing, saying all the right buzzwords, and then we can keep developing what we're actually going to do. It's like how BP can keep drilling for oil: as long as they say they're doing DEI and greening the company, they can keep making money doing the other stuff that matters.

Max: Something to think about.

Aaron: That's a little bit of a black pill take.

Max: But I mean, it seems undeniably true, I hate to say it.

Aaron: Going out on a real limb there.

Max: Okay, I had a few talking points here that are interesting. Let's go down this path and see if it leads anywhere. Because one of the problems is: we don't know what it's like to live in a world where there are these superintelligences that are many, many orders of magnitude more intelligent than we are. And I've made this argument before, but I think we already live in that world.

Well, look, if you are a religious person or a non-atheist or whatever, you might think, okay, if there is a God, then that entity must have infinitely more intelligence than we do, in which case none of this should bother you.

But also I think there are a lot of systems that have more intelligence than us. First, even before humanity comes on the scene, I think nature is smarter than us. And when I say nature, I mean the ecosystem and the ecology, which, just like the economy, is a network of give and take between different animals, plants, whatever. And I think that interconnected network of nature actually is smarter than us overall. A different kind of intelligence.

Aaron: It's an open question whether it's smarter. But if you define "smarter than us" as something we don't fully understand, then 100%: we don't have a comprehensive understanding of it, and therefore it might as well be smarter than us.

Max: Yeah, and life itself too. We don't fully understand it. It has capabilities that are clearly way beyond our control and also on the molecular level and on the cellular level within organisms, although I feel like AI can probably surpass that very easily.

But the question is the intelligence of the whole system; that might be more difficult. Then I think the human economy itself is smarter than us. The price system, look at the stock market going up and down, the different prices of everything. I think the price system is a kind of superintelligence in a way that, again, humans can't figure out.

And then I also think we've been living for 20 years with computing machines that, as a whole, already have intelligences smarter than us. So I don't think any of this is new. I think some of this involves vast networks that are constantly adjusting their weights: the economy, the ecology, all that.

And I don't think that if we just do all this in a data center in some artificial way, it's going to be much different, in the sense that we understand this, we live with this kind of thing. So I don't know if that point goes anywhere, but I felt the need to make it.

Aaron: This is maybe generalizing from fictional evidence, but one of the first things that occurred to me when I was watching The Ted Talk is, do any of these arguments change if we replace AI with aliens?

Now, I guess the one counterargument you could make is that we are not actively attempting to draw the aliens to us, if they exist. Well, not in the way that we are trying to push the bounds on AI currently.

Maybe the risks are the same, except that we're actively pursuing them in this case, whereas whether aliens find us or not, we're kind of passive in that at the moment. But, yeah, the unknown is scary, the future is scary, and clearly this is keeping some people up at night. But reality is scary.

I'm trying to resist the temptation to just brand these people as insufferable Luddites, and it's really hard.

Max: Okay, so I want to try to wrap up. I want to see if there are a few more points to make here before we get into our ten-year predictions, assuming we'll be around in ten years.

So the first is, again, another point I wanted to make: we're kind of in a symbiotic relationship with these machines. Where do you see a symbiotic relationship where one group tries to kill the other? I don't see that happening.

Aaron: I guess the question becomes, could it evolve into a no longer symbiotic relationship?

Max: Well, I guess so. To me, it would be like humans performing mass murder on all the puppies in the world. I don't see that happening.

Aaron: We have attempted to wipe out invasive species before, often species that are invasive because we brought them there, not realizing the problem that they would cause. Usually we fail. So I don't know if that's an upside or a downside.

Max: See the Emu War. Yeah, there's a little more to be said about that; maybe that's for another time. Might there be a more moderate way of mitigating this threat than what Eliezer is suggesting? It's funny, I don't know him on a first-name basis, but for some reason everyone calls him by his first name.

So I can't think of one. Unless you have something to add there. And then we have your last thing from the Bayesian conspiracy. So why don't you explain that?

Aaron: Yeah, so they recently discussed this concept of holding present rate, no singularity. And I think where this comes out of is that they're in a circle, I could say bubble, and I don't mean that necessarily in a pejorative sense, that is very attuned to the doomerism in the AI world.

And so there's been a lot of talk among people in that kind of sphere of, well, if this is happening, if the singularity is coming in five to ten years, and either we're all going to die or it's going to be the glorious singularity where we have infinite resources and everything is solved.

Do I need to be putting money in my 401k? Is there a reason that I'm not just burning all my resources because at a point on the near horizon, none of it's going to matter? Should I have kids if we're going to be wiped off the Earth in five years?

It's become very difficult for some people in that world to have conversations about anything other than doom. And so it has been suggested that an approach might be to preface a discussion or a statement with “holding present rate, no singularity.”

So let's assume for the purposes of this conversation that technology is going to continue progressing at its current rate, but we won't hit the singularity, we're not going to get the super intelligent AI that turns us all into grey goo.

Now that we've put that aside in a box, we can talk reasonably about other subjects. And I'm not necessarily embracing that as a specific approach. I don't think I've actually, until talking with you tonight, uttered the present-rate-no-singularity phrase or PRNS out loud to another human being. But that's kind of the default that I'm in.

Not that I actually think the doom probability is exceptionally high and this is the only way I can get through thinking about day to day stuff, but that I feel obligated to conduct my life in a way that does not assume singularity, doom or otherwise on the near horizon.

Max: Can I just say this? I don't think that we're the first generation who have been growing up our whole lives with messages that the world is ending soon.

Aaron: I mean, there was the whole Cold War in recent memory. I'm sure it goes back even further.

Max: I feel like anyone who's grown up at almost any time in history will probably have been inundated with messages that the world is going to end soon. And in most cases, well, there were some cases where they really were facing doom at some point, but in most cases they were not, and you'd be better off living your life as if you are not.

Aaron: Yeah, well, and it raises the question: if you do actually believe this, then how are you adjusting your life? Or if you think the risk is not inevitable but significant, how do you plan for that kind of an unpredictable future? Are there ways you can hedge? Are there things you can insure against?

Yeah, I think the answer comes back that there's very little that can be done there. I mean, unless you take kind of a Roko's Basilisk approach of, I need to make sure that I endear myself to our evil robot overlords so that they eat me last, which, again, I don't think is very coherent.

Max: Well, the robots will be reading your comment just there, and they don't like you calling them evil robot overlords. Or maybe they do. Maybe they like it.

Aaron: If they identify as evil robot overlords, they deserve all of the scorn that I'm heaping on them.

Max: All right, so ten-year predictions on this, ready to go? I had three predictions written down. First of all, that one article asked, what's P-Doom, the probability of doom in the next ten years? And I said it's less than ten to the negative five. Less than 1 in 10,000 for humanity. That might seem exceptionally high: whoa, 1 in 10,000.

But the fact is, humanity has only been around for a few million years. Actually, I think 1 in 10,000 is probably higher than it should be, because there are so many humans in the world now. I think if you went back 100,000 years or so, during the Great Bottleneck, there would have been a time where it was like, okay, there's a one in 100 chance that humans aren't around in the next ten years.

If you were some all-seeing being, or some aliens taking a bird's-eye view, you'd be like, man, these guys, they're probably going to make it, but they might not. But now, yeah, I'm going to change that to ten to the negative six. Wait, is it ten to the negative…

Aaron: That’s one in a million.

Max: Right? It's one in a million. Right. So ten to the negative five is going to be 1 in 100,000. Sorry, I had the right number there: 1 in 100,000. I think that's right. That means your kind of expected value is a million years. Yeah, I think that's good. Very low probability, but tell me why you think it's higher. Second one.
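[Editor's note: a quick sanity check on the arithmetic, under the illustrative assumption of a constant per-decade doom probability, which is our framing rather than Max's: the expected wait is simply one over that probability.]

```python
# Illustrative only: treat doom as a constant probability p per decade.
p = 1e-5                       # 1 in 100,000 per decade (the revised figure above)
expected_decades = 1 / p       # mean of a geometric distribution
print(expected_decades * 10)   # 1000000.0 years, i.e. the "million years" mentioned
```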

Aaron: Before you move on to the next one, I want to ask a conditional question.

Max: Yeah. We have no probability distribution of the week today because we're just getting to this. So this is our substitute, but yeah, go ahead.

Aaron: Conditional on a human presence on another astronomical body, whether it is a moon colony or a Mars colony, does that change your p doom, or is that built in already?

Max: So that was my P-Doom over the next ten years. So that's not going to.

Aaron: Okay, so you don't foresee that occurring in the next ten years in a meaningful way that would change that. That's reasonable. I'm thinking maybe longer term. But let's say it's also a caveat that hedges against certain types of doom; I don't think it's particularly well suited as an AI hedge.

Max: I don't see how you survive on another planet without AI, to be honest. Also, if the Earth is destroyed and you're in a Mars colony, I think you're kind of doomed.

Aaron: Ideally, we would want a Mars colony to reach self sustaining status, but that is going to take longer than ten years even if we get there within ten years.

Max: No, it would take hundreds, I would guess. My second prediction, and this is going to be mind-blowing, just to channel Game of Thrones: AI winter is coming. I think what's going to happen is the current crop is going to be very smart, we're going to get a lot out of it, and I think we're going to end up in ten years-

Aaron: Now, when you say the current crop, do you mean the current crop of AI developments like the AIs themselves or are you talking about the current crop of AI researchers and developers?

Max: No, I'm talking about the-

Aaron: ChatGPT-4s.

Max: Yes. So I think there's a certain direction of research, particularly into transformers and into some of the network architecture. It used to be, and Eliezer mentioned this, that it was sigmoid functions, which is still how I think about it: basically these neurons connect together, they add up a bunch of numbers and you get a number, but you kind of crunch it down into a number between negative one and one.

But now they tend to use these rectified linear units, which are either zero or some kind of hinge thing. Anyway, I don't want to get into it, but at some point that direction of exploration into neural network architecture runs out of steam, and then another direction has to be pursued.
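[Editor's note: for reference, a minimal sketch, not from the episode, of the activations being contrasted here. The classic sigmoid squashes sums into (0, 1); tanh is the squashing variant with the negative-one-to-one range Max describes, and ReLU is the "hinge."]

```python
import numpy as np

def sigmoid(z):
    """Classic squashing activation: output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    """Squashes to (-1, 1), the range described above."""
    return np.tanh(z)

def relu(z):
    """Rectified linear unit: zero below 0, identity above (the 'hinge')."""
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])  # example pre-activation sums
print(sigmoid(z), tanh(z), relu(z), sep="\n")
```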

And then you have that along with kind of a natural business cycle, and those things will contribute to an AI winter. When is AI winter coming? I don't know. My guess is sometime in the next ten years, but we'll see.

Aaron: But how would you even quantify an AI winter? What's the measurable statistic to look at there?

Max: I guess a research stall. I didn't write anything specific down, but I would guess there could be a measurable slowdown in business investment and research interest. You can look at the number of people who want to get PhDs in this thing or go into that research, and the number of companies that are leveraging this.

I think there'll be a number of incredibly valuable companies pushing through this, and there'll be some incredible tech at our fingertips. And I think that technology is mostly going to be in the generative realm: being able to take what's in our heads and conjure it up in the digital world immediately, and therefore in the real world, in some ways, will just be incredible.

That is intelligent tech, but it's reliant on human intelligence in many ways. And so I don't think it gets us to the superintelligence that people are thinking about now. That doesn't mean it's not going to happen in 20 years. But yeah, that's my prediction for ten.

Aaron: So your prediction is good news for Yudkowsky. Because your prediction for AI winter is not dependent on his proposed solution of basically putting in a moratorium, a freeze, a slowdown. It's an independent and naturally occurring phenomenon. But he should be happy, because it's going to inevitably coast to a slowdown, it sounds like.

Inevitable is a strong word, but it is looking like it will. So even if he doesn't get his international task force to firebomb data centers, there's going to be a slowdown in AI development.

Max: But it doesn't stop and it's only temporary. So I don't know, I mean, he could still have a job talking about this stuff while this is all going on. So I don't think this is some kind of.

Aaron: I don't think MIRI, the Machine Intelligence Research Institute, is going anywhere anytime soon.

Max: Is that the one that asked for like a six month moratorium?

Aaron: He is very involved in MIRI, I don't know if he's the head of it, but it would not surprise me if they were the ones who put out the proposal for a six-month moratorium.

Max: Yeah, okay. When did we talk about that one? That was back in March, I believe. That would have been, oh, Stop Making AI Boring. All right, we'll link to that one as well. Hopefully we did that.

All right, any thoughts on those ten-year predictions? Anything of your own, and any last thoughts before we wrap up today, Aaron?

Aaron: It's real hard making predictions when the probabilities are down at ten to the minus something. Getting an accurate gauge on whether it's minus five, minus six, or minus four, that's tricky stuff. As evidenced on Metaculus: I think within the last year or six months, they enabled making predictions greater than 99% and less than 1%.

But the payouts there become much more dramatic. So don't do it lightly.

Max: Yeah, all right, well, great. I think I'll be back from France before next Monday. I have some interviews in the can that maybe I can share, and then maybe we'll continue this discussion another time.

We also have lots of discussions to have about social and public choice theory and the theory of elections. Very interesting economic stuff that I've been getting into and not to mention newmap.ai. So a lot of exciting stuff coming down.

Aaron: Well, I am very optimistic that we will be able to continue having these discussions without concern of being wiped out by AI. And I am only slightly less optimistic that we will continue having these conversations and not be replaced by an AI that can do this job just as well as we can.

Max: Well, then I just sit back and watch it happen.

Aaron: That's the dream, right? Yeah.

Max: All right, have a great week everyone.

That's the show. To support The Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found.

If you want to keep up, remember to subscribe on your podcast app. Also check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.

Narration: Feel the power.
