
Episode 304 - OpenAI Fails to Align

Max covers the dramatic conflict between the board of directors of OpenAI and former CEO Sam Altman, whom they voted to remove over the weekend. Is this about AI alignment? Personality clashes? Or oddball corporate governance?

Links

Perry E. Metzger via X: Humans have an extremely strong

Elon Musk via X: I am very worried

Ilya Sutskever via X: It might be that today’s

Toby Ord via X: Most coverage of the firing

The Wall Street Journal - Microsoft Adds the Tech Behind ChatGPT to Its Business Software

The Wall Street Journal - Microsoft to Deepen OpenAI Partnership, Invest Billions in ChatGPT Creator

The Wall Street Journal - Microsoft Corp.

The Wall Street Journal - What Is ChatGPT? What to Know About the AI Chatbot

The Wall Street Journal - Sam Altman Is Out at OpenAI After Board Skirmish

The Wall Street Journal - Sam Altman to Join Microsoft Following OpenAI Ouster

The Verge - OpenAI board in discussions with Sam Altman to return as CEO

Related Episodes

Episode 230 - Another Google AI Spat, Self Awareness, and the Eliza Effect

Episode 213 - Artificial Consciousness Controversy

Episode 255 - AI's Got Chat: The Rise of ChatGPT

Episode 220 - Twitter's Musk Factor, Chat Bot Intelligence, and Pizza

Episode 134 - AI Breakthrough & Understanding “Understanding”

Transcript

Max Sklar: You're listening to the Local Maximum episode 304.

Narration: Time to expand your perspective. Welcome to the Local Maximum. Now here's your host, Max Sklar.

Max: Welcome, everyone. Welcome, you have reached another Local Maximum. And also welcome back to me — I don't think I've taken a week off from the show in the last six years, but I did now. Much-needed catch-up time. Actually, looking back, I've kept the show going in the past even during incredibly tough family emergencies and illnesses, and even surgery.

Fortunately, the first delay came because I had a lot of good things going on, and I needed some extra time — and also a lot of time to think about where to take the podcast. I have quite a few good ideas on where to take the podcast and all of the content that we've created over the years. As you'll see in this episode, because we have such a large library of content going into the past here on the Local Maximum, that improves our ability to comment on current events.

So let's try to figure out how to make the best use of that. I'm going to have to talk to Aaron about that. And hopefully a brand new format soon — not a total overhaul, but there are a few new things I want to add to the show. Very exciting.

All right. Now, there is an obvious news story this week, and we need to cover it. Of course, this is OpenAI's CEO, Sam Altman, being removed by the OpenAI board over the weekend. Why did this happen? Weren't things going really well at OpenAI, with GPT? There's been lots of discussion and speculation on that. So before we get into it, let's rewind a bit to understand this story.

About a year ago, as most of you know, OpenAI took the world by storm by publicly releasing ChatGPT, and later, its API to developers. This has been on our radar here on the Local Maximum for a long time — way back in Episode 134, in the summer of 2020, we covered OpenAI's launch of the — at the time — revolutionary GPT-3.

Nowadays, you kind of click back to GPT-3.5 on OpenAI and you're like, ah, this one again? But anyway — GPT-3. Incredible back then. It was buoyed by the effectiveness of the transformer neural network architecture, which came out of research at Google, as well as OpenAI's long time horizon.

You know, I think they've been at it for nearly a decade, with investments from Microsoft and Elon Musk going into OpenAI. So at the time, this was summer 2020 — we had a lot more going on than just this in the summer of 2020, I'll tell you that. But this was one of those positive stories, maybe under the radar, while all that crazy stuff was going down. At the time, the Wall Street Journal had a headline on GPT-3 that I think captures the moment well: “AI Breaks the Writing Barrier.” Which it did. Now it could write well enough to be well understood and not write crazy, weird junk. I mean, sometimes it does that too. But you know what I mean.

Alright, let's fast-forward a little bit to Episode 220. We covered Elon Musk's potential takeover of Twitter — at the time it was potential; we now know that he did in fact take it over later that same year, 2022, and this year, in 2023, renamed it X. In that same episode, I mentioned that Musk also had a stake in OpenAI, and I read Andrew Gelman’s blog post grappling with the implications of GPT-3.

Now, at some point Aaron and I talked about Musk's falling out with OpenAI — I think that was Episode 255, I'm not sure. But Musk did have a falling out with OpenAI over, I would say, differences of opinion on how AI should be developed. I remember Musk being a bit concerned over the potential downsides of this type of technology. And he's criticized OpenAI for being more like closed AI, in that the models aren't open source.

I mean, that's kind of the narrative that Musk works around, which is like: hey, we have Twitter, now we have X, we're going to open source the algorithm — I'm not sure if they did that. He wanted to open source the algorithm for OpenAI. The management said, that's not quite gonna work for us.

And so there is this kind of tension over how AI should be developed, and there's a lot of arguing back and forth that takes place in the industry, obviously — Musk being very concerned about what we call AI alignment and the future of AI. He has a very big, outsized voice in this, for obvious reasons. So yeah, we'll get to that in a bit.

So, Episode 255, that was the big one. Because that was the one where Aaron and I talked about the rise of ChatGPT, and that incredible product launch that occurred just a year ago. Since then, I and many of my software colleagues have been using it for work, increasing our productivity quite a bit.

You know, the other day I had to build an HTML page with JavaScript in it. And I've been doing this type of thing since probably 2005 — so nearly 20 years — but it is a pain in the tookies, I'll tell you that much.

Outsiders ask, like, are you an expert in these technologies, HTML and JavaScript? Well, "expert" is relative. If it's something that you only have to do every few months, then every time you come back to it, it is a slog to remember how it all works.

Sometimes you'd have to sit down and take a day to figure it all out. Now — I know how HTML works on the broad level, I know how JavaScript works on the broad level, I know how it all works on the broad, broad level, and I know how to ask OpenAI and ChatGPT to write these pages for me, instantly. It doesn't make the dumb mistakes that I make, and whatever it gets wrong, I can easily fix — given my knowledge, I can find the errors very quickly. And this is the kind of work that I really, really hated doing back in the day, mucking around on the front end and the display. And we've totally automated that. So that's great.

And what's also great is I can just ask it to write some CSS to style my pages right off the bat, so I don't even have to think about that. I don't do anything. CSS is notoriously annoying to use, I'll put it that way.

So, yes, we've been increasing our productivity quite a bit. I've even made use of the API — I've been using the OpenAI API for several different reasons. I have one project now with the OpenAI API that I'm pretty excited about, and I'm excited to share it with all of you out there, hopefully in the coming months.
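To make that workflow concrete — and this is just a minimal sketch of the kind of call I'm describing, not my actual project; the model name and prompt here are placeholders I made up — a request to the OpenAI API from Python looks roughly like this:

```python
# A minimal sketch of calling the OpenAI API from Python to generate a styled
# HTML page, as described above. Assumes the `openai` package is installed and
# an API key is set in the OPENAI_API_KEY environment variable; the model name
# and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat model would do
    messages=[
        {"role": "system", "content": "You are a front-end web developer."},
        {"role": "user", "content": (
            "Write a single self-contained HTML page with inline CSS and "
            "a button that shows an alert when clicked."
        )},
    ],
)

# The generated page comes back as plain text; save it and open in a browser.
with open("page.html", "w") as f:
    f.write(response.choices[0].message.content)
```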

So here we have OpenAI doing so well in the marketplace, coming up with a once-in-a-generation-level product — and one, by the way, that required true sophistication and perseverance. This is not a luck play, by any means.

This is not some company where it was the obvious thing to do, where lots of companies were doing it and the one that wins simply executes well — where you'd think, okay, a lot of different groups could have done that. So this is not luck. This required a lot of vision and a long time horizon. So it might seem strange for the board to oust the CEO, Sam Altman.

I've known of Sam Altman for a long time. He was actually one of our competitors at Foursquare when he had an app called Loopt back in the day. I think Foursquare did a bit better than Loopt. But then he went to Y Combinator, and apparently got into starting all these new companies, including OpenAI — great job there. But it's strange that they would oust that CEO, and it's even stranger that there's such a rebellion going on among the employees of the company to bring him back.

I've been through CEO turnovers; you hear about CEO turnovers all the time. The famous one that this has been compared to is Steve Jobs being kicked out of Apple — but the Apple employees didn't start rebelling. I mean, some were disappointed.

Sometimes employees are disappointed and unhappy. Honestly, I think the Apple employees were like, yeah, we kind of get it. But you know, I have never heard of this — I've never heard of an organization having such an organized appeal to the board from its employees.

There have been a few times, working at companies, when I've thought of messaging a board member; I've never actually done it. I wonder if that's something you should do, or whether you should keep certain things to yourself. It's an interesting question. But anyway, why is this going on? It's still playing out. We're still going to see what the fallout is, because a lot of people have quit. So what happens if researchers quit? Do they go to other companies and bring the technology there, or are they going to be brought back in some kind of deal? Let's look at the latest from the Wall Street Journal to get some of the facts here. The Wall Street Journal reports, in the article “OpenAI Leadership Hangs in the Balance”: “Two days after Sam Altman was ousted from OpenAI he was back in the company's office trying to negotiate his return. The former chief executive officer entered with a guest badge on Sunday and posted on X, ‘First and last time I ever wear one of these.’”

Unless they go back for a second negotiation, I guess. It continues: “The leadership of the company that created the hit AI chatbot ChatGPT remained unclear Sunday, as investors and many employees pushed over the weekend to restore Altman. He has been engineering a counter-coup to retake control of one of Silicon Valley's most valuable and high-profile startups. Altman's camp has succeeded in bringing the board that fired him to the negotiating table, and proposed a series of high-profile tech executives to potentially helm a new board that would be more aligned to his business vision.”

We can skip ahead, because we don't need to read the names of the executives. One of them is Sheryl Sandberg, which is kind of interesting, going back to her Facebook days. But you know, that might not happen. It's all speculation.

“Microsoft's executives also pushed for oversight in a new corporate structure, including a potential board observer seat that would give it more visibility into the company's governance. Any greater role on the board could be a regulatory concern. Microsoft has kept its ownership stake in OpenAI below 50% in order to avoid raising the attention of regulators. Among all the investors, Microsoft might be the most deeply intertwined in the fate of OpenAI, and the startup's turmoil has been a liability. Beyond being OpenAI's largest backer, Microsoft has reoriented its business around the startup's AI software. Shares in Microsoft fell after the news of Altman's firing.”

Skipping again: “Two days after the board fired Altman, different explanations persisted for the initial firing. The board said Friday that it pushed out the CEO after it concluded he hadn't been candid with the company's directors. It didn't elaborate. People close to Altman said the ouster had more to do with disputes around the safety of the company's artificial intelligence efforts and a power struggle with one co-founder and board member, Ilya Sutskever.”

I hope I'm pronouncing that correctly, Sutskever. “On Sunday, a person familiar with the board stood by the board's statement citing Altman's lack of candor. This person said there was no single precipitating incident but rather a mounting loss of trust over communications with Altman that led it to remove him as CEO. The person declined to offer examples now.”

Obviously, the board is entitled to keep its reasons to itself. But without this disclosure, it's difficult for us in the public to evaluate them. “With his firing from OpenAI, Altman quickly got the upper hand in terms of public messaging. The board didn't use a communications or a law firm in its dealings, people familiar with the board said, expecting that the OpenAI team would help them. But Altman had loyalty from investors and employees. The board ended up isolated as social media became filled with shock and support for Altman. His largest backers, including Microsoft and Thrive Capital, immediately on Friday began pressing for Altman's position to be restored. Microsoft CEO Satya Nadella began working with Altman that evening on his next step, people familiar with Altman said.”

So this is really strange — and interesting, because we read before that Microsoft doesn't own a majority stake in OpenAI, but it owns quite a bit. And there are other groups involved as well, like Thrive Capital. I'm guessing all of them together probably own a majority.

But apparently, that doesn't give them much influence on the governing board — the board of directors. In fact, I don't believe they have anyone on the board right now; they don't have any appointed board members. This is an interesting bit of disagreement between the owners of the company and the directors of the company.

We'll talk a little bit about what the corporate governance structure is. We'll talk about constitutions and how to align constitutions. There is a really, really weird constitution — a really weird corporate structure — here, and it's causing this kind of rift.

So anyway, continuing in the Wall Street Journal article: “Before Friday's dustup, the board consisted of six people, including Altman. Then it abruptly removed Greg Brockman — OpenAI's president and a close friend of Altman's — from the board, and voted to oust Altman. None of the four remaining board members were affiliated with the company's big investors. It isn't clear whether the vote was unanimous.”

However, let's do the math, folks. If the board was six people and they wanted to oust Brockman, you'd probably need at least four votes to do it — a majority of six is four. I'm speculating, but that's the standard majority rule.

So I'm assuming that Altman would not have approved of that; it must have been those other four who voted to do it. And I'm guessing that those same four also voted to remove Altman — unless one of them flipped and it was really three to two. But it seems unlikely that someone would go along with the first move, removing Brockman from the board, while being against the second.
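Just to spell that vote math out — and this is purely my back-of-the-envelope reasoning, since the board's actual voting rules haven't been disclosed — here it is in a few lines of Python:

```python
# Back-of-the-envelope board math. Assumes a simple-majority rule,
# which is my assumption -- the actual voting threshold hasn't been disclosed.
board_size = 6
majority = board_size // 2 + 1   # 4 of 6 votes needed

# Altman and Brockman presumably vote against their own removals,
# leaving exactly four other members.
others = board_size - 2

# For the ousters to pass, all four of them must have voted yes.
assert others == majority == 4
print(f"{majority} votes needed; {others} members besides Altman and Brockman")
```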

Now remember, Ilya Sutskever, chief scientist at OpenAI, is in fact one of those board members. So is this a particular concern about Sam Altman? Is it a power struggle over the future of AI? Is it a concern about the direction of AI development and its so-called alignment? Alignment concerns range from worries about faulty AI business models all the way to a full-on AI doomsday scenario. Now, we have some commentary on this that I'm going to share. The first bit of commentary is from Toby Ord; he is the author of The Precipice and seems to be one of the people concerned about this “AI alignment.”

He writes the following: “Most coverage of the firing of Sam Altman from OpenAI is treating it as a corporate board firing a high performing CEO at the peak of their success. The reaction is shock and disbelief. But this misunderstands the nature of the board and their legal duties.”

Now remember Elon Musk's falling out with OpenAI, and how the way it works now is different from the way it was set up — both the tech and the structure. So we'll get into the weeds now with Ord. It's pretty clear that when Elon Musk was investing in it, it was probably more weighted toward the nonprofit side. And now, it's kind of unclear. Let's hear more about this.

OpenAI was founded as a nonprofit. When it restructured to include a new for-profit arm, that arm was created to be at the service of the nonprofit's mission and controlled by the nonprofit board. This is very unusual, but the upshots are laid out clearly on OpenAI's website. This is actually quite fascinating: OpenAI is a for-profit — with for-profit investors like Microsoft — fully controlled by a nonprofit. And this is the board! The board is the board of the nonprofit!

Now, there are also employees and directors on the board — it has Altman and other people on it — but talking about the board as a whole: the board is controlled by the nonprofit, which doesn't necessarily serve the interests of the for-profit investors, or at most ensures that OpenAI stays within some defined bounds.

Perhaps ideological bounds, or something like that, defined by the nonprofit. By the way, it gets confusing, because the nonprofit is OpenAI Inc., a 501(c)(3) public charity, and the for-profit is OpenAI LLC — see the difference?

That kind of reminds me of certain companies creating lots of shell companies. I don't think that's what they're trying to do here, but still. Even though I'm sure you could draw it out so that a regular person could understand it, it still makes things quite opaque. It mucks with the Schelling point; it mucks with the proper purpose of these organizations. I think this kind of setup makes the company way more difficult to govern, as we're seeing.

Now, the investors who invest in OpenAI know this going in, I suppose. They know that OpenAI's board is controlled by this nonprofit. They might not have another option — I mean, Microsoft might think, well, OpenAI is our ticket here, so what else are they going to do? Still, I wonder if the team there that is so concerned about AI alignment thought through the misalignment of interests that this corporate governance structure might lead to. They're so worried about aligning AI, they can't even align their own corporation. Is that a problem? What do you think?

All right, back to Toby. He writes, “Most of the hard power of a board comes from the ability to fire the CEO. The CEO has executive control of the organization, and the board doesn't, but the board has the power to fire a CEO and find a replacement. Knowing this, a CEO will usually comply with pertinent board requests and not hide mission critical information from them. Very few people know for sure what happened in this case, but my best guess is that when the board members said he was not consistently candid in his communications with the board, hindering its ability to exercise responsibilities, they meant that he repeatedly withheld information that interfered with their legal obligations to ensure safe development of AGI — Artificial General Intelligence.”

Superintelligence, or on the road to superintelligence, I suppose. “If so, they may have faced a legal and perhaps moral obligation to replace him as CEO. But why would they have done it so abruptly with no notice to Altman, or to their main investor, Microsoft? My best guess is that they knew that despite having no board seat, Microsoft would apply great pressure to save him by threatening to withhold their crucial cloud compute. If so, an attempt at a more orderly exit would be blocked, and the board would fail in their responsibility.”

So what is the board's responsibility? It's very strange — a for-profit company where the board's responsibility is not to the investors. That's pretty crazy.

But yeah, if this is correct, that's why they would do it so abruptly — it's a clear power play. It also sounds like the board itself is not candid in its communications, because it's trying to sneak around its investors, perhaps acting against the investors' best interests — which happens from time to time.

He also writes: “Microsoft appears to have been even more set on rescuing Sam than the board might have guessed, being willing to force him back into OpenAI even after being expelled. And even if doing so would imply that the nonprofit which is ostensibly governing OpenAI has actually been powerless for some time, with true control being in Microsoft's hands.”

Toby is clear that this is all speculation, of course. In fact, there's a lot of speculation out there, some of it fun to think about: maybe OpenAI's research team has found general intelligence and someone is hiding something big; maybe GPT-5 was created and is taking over the helm from here — no longer any need for a CEO, we now have a robo-CEO.

Or, more likely, it's just some engineers hand-wringing over what we've created, à la Oppenheimer. Which is interesting, because I think this type of soul-searching needs to happen at the social media and ad companies that don't seem to give a crap about where their tech leads — even though OpenAI has some of the answers to these problems.

They are showing far more freakout-ery than is probably warranted at this time. As we covered in Episode 213, in February of 2022, Ilya Sutskever — who was one of the board members who almost certainly supported this move to remove Altman, and probably engineered it — wrote that he thought their computations were at least a little bit conscious. Very, very controversial.

In fact, this weekend, Elon Musk tweeted, “I am very worried. Ilya has a good moral compass and does not seek power.” But let me do that in Musk's voice. I can't do it in his accent, but I'll do it in his intonation. “I am very worried. Ilya has a good moral compass and does not seek power. He would not take such drastic action unless he felt it was absolutely necessary.”

So that's what Elon Musk says. In Episode 230, in the summer of 2022, we looked at how a Google engineer was put on leave for claiming that their chatbots were conscious. And of course, we've covered all the drama at Google around AI ethics and how that group seems to be made up of ideological activists.

So, all these people — and I don't mean to put them all in the same bucket here — but the reactions of engineers and employees and leaders to this technology have really run the gamut; there's a lot of diversity there.

So I should also provide this next quote. Let's think about what the point of that observation really is. I think the point is that because there are so many perspectives on this technology, that's another source of disagreement — another source of conflict that could come up. So I am now going to provide this quote from Perry Metzger, who offers some anecdotal evidence of irrational fears overtaking those in the industry.

He writes, “AI is one of the most wonderful things in human history. It is amazing to me how many people have convinced themselves that it's a dangerous tragedy.” He also tells us that humans have an extremely strong capacity to talk ourselves into apocalyptic nonsense. Perry quotes an article from The Atlantic on Twitter that seems to paint Ilya in a certain light. I'll read it.

It says, “Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader. Three employees who worked with him told us his constant enthusiastic refrain was ‘Feel the AGI,’ a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, Sutskever led employees in a chant: ‘Feel the AGI, feel the AGI, feel the AGI.’”

Okay, okay, let's not speculate that far with OpenAI. I think it seems just as likely — probably more likely — that there was a difference in communication style and expectations between the board and the CEO. It's not uncommon for that to happen. But we'll find out more as this story progresses.

And the antics described above from Ilya: well, they look strange. It kind of is what it looks like, and it kind of isn't — it's pretty standard behavior in the industry. I mean, just look at Steve Ballmer from a couple of decades ago, going “developers, developers, developers!”

So yeah, it looks cultish. But that's the way it goes sometimes. So it's unclear which ideas are leading this move. But as you can see, and hopefully as you've learned today, there are a lot of moving pieces, which makes the story of this developing industry unlike anything else we've ever seen. There's not a very good analogy to another industry where the opinions of the engineers working on the technology, in terms of its proper use, diverge so wildly, to the point where they're…

Well, I mean, now that I put it that way, I do look back at Oppenheimer. But the issue there was that they were developing this technology and had some disagreements on strategy — and then, of course, misgivings about the politics. It wasn't a case of, “Oh, if we invent the science, where does this lead? What's the purpose?” with everybody going all ideological on it. I really think this is kind of unprecedented.

I don't know. Do you agree? Let me know, localmaxradio@gmail.com or maximum.locals.com. As for us, here on Local Maximum, I hope to have Aaron back on soon to talk about this topic and more. I have some great guests coming up, including a fantastic, fascinating discussion that I did recently with a doctor/author on the fragmented state of data in the medical industry.

I think even if the talk were just about that topic, it would be important. But the real draw of that discussion was the practical advice and the real stories about patients and being a patient, so I got a lot out of it. I hope you enjoyed today's episode. I look forward to covering this story as it plays out. Have a great week, everyone.

That's the show. To support the Local Maximum, sign up for exclusive content at our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.
