
Episode 149 - Chaos at Google, Woke AI, Ethics, and Ultimatums

Discrimination is still present in our society. Social movements have been pushing back against systems that have mistreated and harmed people. It doesn't stop there: in some cases, AI also shows bias, and this is where AI ethics comes in.

In today's episode, we discuss the chaos at Google surrounding AI ethics researcher Timnit Gebru's departure from the company. We delve into her research paper on the costs and dangers of large language models and how those models can affect our communities. We also unpack AI biases and how they occur. Things are changing fast, and organizations and companies should be held to a higher standard to serve society better.

Tune in to the episode to learn more about the issues of AI ethics!

Here are three reasons why you should listen to the full episode:

  1. Learn about the controversial departure of Timnit Gebru from Google.

  2. Find out how AI shows race and gender bias.

  3. Discover workplace dynamics in tech corporations like Google.

Resources

Related Episodes

  • Episode 146 with Tai-Danae Bradley on Language Modeling

  • Episode 139 on Coinbase’s decision to become less political and its invitation for employees to leave

  • Episode 135 on natural language understanding vs. modeling

  • Episode 134 on GPT-3, the large language model from OpenAI

  • Episode 76 on the political climate at Google

  • Episode 27 on Big Algorithm

  • Episode 4 on building a simple language model in Python

Episode Highlights

Who Is Timnit Gebru?

  • Timnit Gebru worked on Google's AI systems, particularly on the systems' ethical implications and development.

  • Her work focuses on finding racial, gender, and cultural biases.

  • Timnit said that she was fired for trying to publish a paper on Google’s large-scale language models.

  • Google, for its part, said it asked her to withdraw her name from the paper because it did not meet their standards.

  • She then gave them an ultimatum: meet certain conditions, or she would leave the company.

The Public Uproar That Followed

  • This issue caused a huge public uproar that pushed the company to conduct an investigation.

  • Since there is no clear picture of the whole situation, various narratives have surfaced.

  • Some say that Google is trying to silence Timnit for being outspoken and critical.

  • Timnit had attempted to sue her previous employer in the past.

  • Listen to the full episode to hear more about the narratives surrounding this issue.

Internal Issues and Chaos

  • Timnit says she has been the victim of micro- and macro-aggressions.

  • Jeff Dean, current lead of Google AI, states that Timnit’s paper talked mainly about the problems of language models without mentioning Google’s mitigation actions.

  • He also says that one of Timnit’s conditions was to reveal the identities of her reviewers.

  • Timnit’s departure has led to several supporters signing a petition and threatening to leave the company.

Timnit’s Research on Language Models

  • The paper is titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

  • AI might see racist, sexist, and abusive language as ordinary language.

  • Twitter users once taught Microsoft's AI chatbot to be racist.

  • Data sets are important because some sets will have location, racial, socio-economic, or business sector biases; a minimal language-model sketch illustrating this follows the list.
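
To make the "stochastic parrot" idea above concrete, here is a minimal sketch of a bigram language model in Python. It is not code from the paper or the episode; the toy corpus and function names are invented for illustration. The model simply counts which words follow which in its training text and samples from those counts, so whatever phrasings or biases appear in the data are reproduced as "normal" language.

    import random
    from collections import defaultdict, Counter

    def train_bigram_model(corpus):
        """Count, for each word, how often each possible next word follows it."""
        counts = defaultdict(Counter)
        for sentence in corpus:
            tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
            for current, nxt in zip(tokens, tokens[1:]):
                counts[current][nxt] += 1
        return counts

    def generate(counts, max_words=20):
        """Sample a sentence by repeatedly picking a likely next word."""
        word, output = "<s>", []
        for _ in range(max_words):
            followers = counts[word]
            if not followers:
                break
            word = random.choices(list(followers), weights=followers.values())[0]
            if word == "</s>":
                break
            output.append(word)
        return " ".join(output)

    # Toy corpus: the model can only ever "say" things shaped like its training data,
    # which is exactly why biased or abusive training text is a problem at scale.
    corpus = [
        "the red fire truck drove past",
        "the purple fire truck drove past",
        "the red fire truck is loud",
    ]
    model = train_bigram_model(corpus)
    print(generate(model))

Real language models like BERT or GPT-3 are vastly more sophisticated, but the auditing problem discussed in the episode is the same: whatever is in the training text is what the model treats as ordinary.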

MIT Tech Review on the Research Paper

  • The research paper looked into the environmental and financial costs of large language models.

  • The paper also argues that social change requires changing our vocabulary over time to become politically correct.

  • Beyond filtering out abusive language, the paper's framing points to the power of computer-generated language to shape society.

  • It isn't easy to audit large data sets when you don't know what you're teaching the model.

  • Listen to the full episode for a nuanced discussion about Google and Timnit’s paper!

Nature of AI Bias

  • AI bias is still a problem today.

  • There is a question of whether this is due to issues with the algorithm or with the training data set.

  • There is no such thing as a neutral algorithm — life is inherently biased in many ways.

  • Solving these issues requires prioritization; systems that harm people should be addressed first.

  • Listen to the full episode for specific examples of AI bias and for the non-racial sense in which algorithms must "discriminate."

Workplace Dynamics and Ultimatums

  • Max asks whether an ultimatum amounts to a resignation.

  • Ultimatums can be open to interpretation depending on who reads them.

  • For example, Donna Dubinsky’s ultimatum at Apple led to her counterproposal being heard rather than her departure.

  • Listen to the full episode for an in-depth discussion!

Organization Structures

  • Big organizations tend to get political, even Google. 

  • Max shares why employees may not trust the leadership there.

  • He notes how leaders often seem to hide behind legality.

Approach to AI Ethics

  • Ethics is important to understand, especially where bias comes from and what it is.

  • Ethics should inform design decisions.

  • AI ethics includes data privacy.

  • AI bias and privacy testing departments should be more widespread.

5 Powerful Quotes from This Episode

“General AI has to actually understand what it's doing, whereas you can make a narrow AI that can fake it really well, but it doesn't necessarily have an understanding of what it's doing. That makes it a little bit more fragile.”

“The curation of our online experience — do we want a pure, unfiltered feed? Or do we want some sort of algorithm . . . that's normalizing and adjusting that to a ‘better depiction of reality’.”

“The value in these types of algorithms is being able to tell things apart and extract information from what would otherwise be noise.”

“You want to try to understand where bias comes from and what it is, rather than trying to be flashy and activist — I think you need to have people in those positions that kind of consider all sides not take extreme positions.” 

“When you're designing a piece of software — you want to have a security person in the room to inform your design decisions.”

Enjoy the Podcast?

Are you hungry to learn more about AI ethics? Do you want to expand your perspective further? Subscribe to this podcast to learn more about AI, technology, and society.

Leave us a review! If you loved this episode, we want to hear from you! Help us reach more audiences to bring them fresh perspectives on society and technology.

Do you want more people to understand the complexity of AI ethics? You can do it by simply sharing the takeaways you've learned from this episode on social media!

You can tune in to the show on Apple Podcasts, SoundCloud, and Stitcher. If you want to get in touch, visit the website, join our community on Locals, or find me on Twitter.

To expanding perspectives,

Max

Transcript

Max Sklar: You're listening to the Local Maximum Episode 149. 

Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar. 

Max Sklar: You reached another Local Maximum. Welcome to the show. As always, we have Aaron joining me. Aaron, before we close out 2020, how about a good old fashioned controversy? 

Aaron: Yeah, we're gonna go with a real softball this week, it sounds like. 

Max: Well, so this is a—this is one of those internet wormholes, not wormholes. There's another type of hole, that's a... 

Aaron: Rabbit hole? 

Max: ...rabbit hole. It's a rabbit hole. Yeah, that's the animal that you want to watch out for. So... 

Aaron: That's all I need but definitely larger. 

Max: ...a lot of podcasts and YouTube channels are talking about this, but I tried to do research, tried to find all the angles to it. So we'll try to be fair, while also giving my opinion. So, we'll see how it goes. This is about the departure of AI ethics researcher Timnit Gebru, which has caused a huge uproar in the AI community, but also has implications for what Google is doing with all of our input data, where the culture is going, where the media is going, where—workplace issues. So it's really helpful, I think, to try to untangle this. And the first thing that we need to go over is what exactly—what actually happened, because even that is a little tough to break down. So notice I said Timnit’s departure from Google; even the characterization of that as being fired or resigning is in dispute. So already, it's sort of a whole big controversy. So it's one of those stories that you have to be very careful about covering. 

Aaron: Indeed, yeah. Well, it sounds like we're going to start with just the facts, ma'am. And then we'll depart a little bit from there and get into some more analysis. But I think that... 

Max: Yeah. 

Aaron: ...the good news is that the raw facts are not hugely in dispute. It's the nuance around them, that becomes much more complicated or have I missed something there?

Max: Well, it depends what you mean by raw facts. But there are some facts in dispute, for sure. And there are some things we don't know. So first, we're going to talk about what exactly happened. And I tried to look into: what do we know? And what don't we know? Because there are a lot of insinuations going on, and that's why people care. And then the second thing I want to go over is what Timnit Gebru was working on in her AI ethics research at Google, some of the work that she's done in the past, and what exactly was the ethics paper that she unsuccessfully was trying to get published at Google that led to this? 

Aaron: And that, sir, may be one of the things I'm most excited about, because I think you potentially bring a very interesting viewpoint on this as somebody who is tied into AI and machine learning—a viewpoint that is not really getting a lot of coverage out there. 

Max: Well, thank you. All right, yeah, no, I'm excited about that too. Third, we're gonna look at some examples of AI being biased in, like, towards certain races and genders. And which ones of these are legitimate? How big of a problem is it? So we'll have that discussion. No controversy there. And then fourth, fourth, we're gonna say like, what does this incident say about workplace dynamics? 

From the information we have, when is it acceptable to give your employer an ultimatum? When is it inadvisable? And also, like, why are these companies sometimes taking the wrong approach to ethics? What should AI ethics organizations be doing? Because they could get very politicized, so we can talk about potential solutions there. So the fourth one is more lessons- and solutions-oriented, which I want to be—but you won't find that on Twitter, I can assure you, and maybe not on all of Reddit either. So that isn't a lot, Aaron. It's pretty straightforward. I think we'll be done in, like, a 20 to 30 minute episode. 

Aaron: Oh, yeah. Yeah. Quick one and dire.

Max: All right. I'm ready to go. All right. So let's just start with some of the facts. Timnit Gebru, was an AI ethics researcher at Google. And if you're wondering about the name, she's from Ethiopia, she is fairly accomplished. Some of the articles and commentary kind of give the impression that she was directly building the AI systems. And it does seem like she's qualified to do that, but her focus is on examining the ethical implications of those systems. Maybe that's part of developing them, you could say.

Her work focuses on finding racial, gender, and cultural bias in AI systems and machine learning systems, as it were, and in statistical models. She's also a manager at Google and has a team reporting to her. So I think those are maybe not disputable facts—disputed facts—but I tried. Okay. So recently, I think it was either before Thanksgiving or around Thanksgiving, she said she was fired for trying to publish a paper on Google's large-scale language models. 

Now, language models try to predict which sentences and words are more likely than others, and so they can generate language; they sort of mimic human language. We've spoken about language models a lot on the Local Maximum in the past—specifically, in Episode 134, I talked about GPT-3, which is, like, the most famous one right now, from the OpenAI lab. And then Google has one now; it's called “B-E-R-T,” BERT. 
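
As a small illustration of the kind of prediction Max is describing—not something from the episode—the sketch below asks a pretrained BERT model to rank likely words for a blank. It assumes the third-party Hugging Face transformers package is installed and will download the publicly available bert-base-uncased model on first run.

    from transformers import pipeline

    # Load a fill-in-the-blank pipeline backed by BERT (a masked language model).
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # The model scores candidate words by how likely they are given its training data,
    # which is also where dataset biases can surface.
    for guess in fill_mask("The fire truck was painted [MASK]."):
        print(guess["token_str"], round(guess["score"], 3))

A model like this has no notion of truth or intent; it simply reflects the statistics of the text it was trained on, which is what the rest of the discussion turns on.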

Aaron: Does OpenAI have a formal relationship with Google? Or is it a completely separate entity?

Max: I believe it's separate. 

Aaron: Okay. 

Max: But I could be, yeah, and I even built a language model for Episode 4 when I was cracking your code, if you remember that. 

Aaron: Oh yeah, that’s something way back.

Max: Yeah. So, right, so, okay, so that's what it was about. Google officials said that they accepted her resignation from Google—she says that she didn't resign—and they gave the impression that they asked her to withdraw her name from the paper because it didn't meet their standards. And then she gave them an ultimatum to do certain things, or otherwise she'd leave. Specifically, they said that she wanted them to reveal the names of the people reviewing the paper. 

Now, it's important to understand: we have some of the communications—people have posted some of the communications, specifically her posts to, like, an internal message board—but we don't have the paper, we only have things written about the paper. And we don't have the email that we're talking about, where she purportedly made that ultimatum. Or either...

Aaron: So both sides, it seems like, have been leaking information. But presumably, you would only leak information, which is beneficial to your painting of the story and we don't have the full unredacted or unedited version of that exchange.

Max: Right. So we don't have the email that she got. We have exchanges; we don't have that email. Important to understand. So first of all, why is this news? Somebody gets fired from a company—even if somebody gets fired from a company unfairly, that is not news; it happens all the time. 

A company firing someone who's high up in the organization happens all the time. Right? So this is—but it has really caused a public uproar in the company, and there have been dozens of news articles written about it. And honestly, even, like, big things that I think are going on don't have all these articles written about them. And, in fact, the CEO—I was about to say, shoot, I was about to say Satya Nadella. But that's... 

Aaron: Is it Sundar Pichai? 

Max: ...Sundar Pichai, oh, my God. Now I'm gonna be accused. 

Aaron: We're mixing up all the tech CEOs.

Max: Oh, no. Well, yes, but I could be accused of an ethnic mix-up there. But anyway, so, okay, so the CEO actually had to respond. So first, some of the people at Google had to respond, and then the CEO had to respond to this and called for an investigation. So that suggests something big is happening. And obviously there are big issues here about race and gender and wokeness, and all that stuff. It's—this is gonna be a fun one. So usually, if an employee gives an ultimatum to the company—so she, I think they both agree, said the terms: “let—do this, or let me work on an end date.” Now, they clearly did not let her work on an end date. They just said, “you're done.” So, the language that Timnit purportedly used in the email was “do this or let me work on an end date.” 

Usually, if that happens, you'll talk to the people about it if you can't reach those demands, because I think all of us have said things in frustration. And a lot of employers... I think in, even in most cases, if it comes out of nowhere, in particular, will make some attempt to patch things up. In the normal circumstance, I think. Do you agree with that? Like, that's one of the things that I've noticed is... if someone says I flipped out at work, I don't think most people would be like, “what? I would never do that.”

Aaron: Yeah, I think how the company deals with this is more a "what they should" than a "what they will," because I do have a personal experience where a co-worker of mine who was above me in the company structure basically came in at some point and said, "I've had it—if XYZ isn't done, then I'm giving my two weeks' notice." And in fact, actually, they did this while they were out on vacation or a business trip or something. 

Max: Yeah.

Aaron: But they sent that in and said, "I'm happy to come back in and we'll work on the transition for another two weeks or a month. But if you can't do XYZ, then I'm out." And the company's reaction, or at least someone in the management chain's reaction, was: okay, your email has been turned off, your access to the facility... 

Max: That’s what happened here. 

Aaron: ...and to the network has been turned off. 

Max: Yeah. 

Aaron: And don't bother coming in for your last two weeks, we'll send you your paycheck. 

Max: Yeah. Facts. 

Aaron: I don't think that's necessarily the best way to handle that kind of situation. But it's definitely something that happens. 

Max: Right. So the immediate acceptance here, when they're like, “thank you for your resignation” suggests that there's more than meets the eye that’s going on here. Maybe in your case, too, I don't know. 

So anyway, the pro-Timnit group, the people who are up in arms in support of her—their narrative is saying that she was trying to publish something critical of Google and they were trying to silence her, or they didn't like her outspokenness as someone trying to get more racial and gender diversity at Google. And then there were some insinuations that they fired their only black female researcher on the team. Sometimes it's like, well, they're so blatantly racist that they did that because they couldn't stand having even one. I think that direct one is only, like, some crazy people on Twitter, but "she was speaking out in favor of more diversity, and she was too outspoken, and it rubbed them the wrong way, and they got rid of her" is definitely one of the main narratives that I'm picking up here. 

On the other side, it appears that Timnit has attempted to sue her employer in the past, in kind of her long message on that message board that came out. And she even mentions that, which if I tried to sue my employer two years ago, and then put it aside, said, “Let bygones be bygones.” I wouldn't bring it up in like, I wouldn't bring it up all the time, as I'm working there. That's kind of weird. 

Aaron: Well, she—I guess it depends where in the timeline, she brought that up, because if this was after she'd actually been... 

Max: No, before because I can see after the fact.

Aaron: Okay, definitely bring it up, it's like, “I know, they're gonna use this to try and paint me as a bad egg. Let me get out in front of it.” But if she was... 

Max: Yeah.

Aaron: ...putting that on the table is almost a negotiating tactic that raises some other questions. 

Max: Well, it wasn't a negotiating tactic. It was once—so she wrote on the internal message board about all of the problems with Google that she was having. So, some people say she was fired for that rant. I was avoiding calling it a rant, but I'll call it a rant now. And again, that rant, I can guarantee you, is not her best work. I don't know her, and I don't want to, like, judge someone based on one rant, but—I read it. Look, it was not an advisable thing to send out. I'll tell you that. So, anyway.

Aaron: Legal counsel definitely would not have advised posting that. 

Max: Yeah. Well, there's more of that. So basically, some have suggested she was just very difficult to work with and they were just looking for a reason to be able to say she resigned voluntarily. So, she wrote that long thing on the Google Brain Women and Allies forum, and a lot of people don't understand what that is. That's kind of a strange name. But let's break it down. Google Brain is the name of the AI group. That makes sense. 

Aaron: Okay. 

Max: Google, Google Docs, Google Brains, right? 

Aaron: Is Google Brain a product? Or is this like... 

Max: It's, I think it's an internal team.

Aaron: ...a separate company under the Alphabet umbrella? 

Max: No, no, it's within Google, it’s a team in Google. Yeah. So Google brain and Google brain women is like a... 

Aaron: A networking group? 

Max: ...like women. Yeah, networking group. And women and allies are just—allies means that—I guess men could hang out in there if they support.

Aaron: But it's focused on... 

Max: Women's issues.

Aaron: ...women's issues in the workplace.

Max: Yeah, sure, sure. Okay. So she's basically saying in that long thing that diversity initiatives are all glass and mirrors, essentially—which, for most organizations, is true—and she kind of starts out saying that she's been the victim of "various micro and macro aggressions, and harassments" without citing them, told people to stop cooperating with company initiatives, and also talked about being "sick and tired of being constantly dehumanized." So, it's a lot of very rough content, where you don't actually get the point—you can tell someone's upset, but it's not the way I would have advised her to write it. Let's put it that way. 

So now that she's gone, it seems like there's an extremely well organized PR campaign on behalf of Timnit Gebru. And I've noticed many AI-generated texts and videos on YouTube reading things in an AI voice supporting her, which is really weird. I almost think that maybe some people on the AI team had programmed the AIs to put all of that stuff on Google. So, that's kind of an interesting point on that. 

So, okay, so Jeff Dean, who is someone in her chain of command, so to speak, he's a lead AI person at Google. I've actually spoken to him once. He's very well, like, known at Google. I heard like, they have memes going around about him. Like the, what's the meme with Chuck Norris? Like he's like the Chuck Norris memes of Google of AI. 

So anyway, he wrote an email to employees that she didn't give the reviewers of her paper enough of a chance to review her paper and that it didn't make the—meet the standards. Because even though it was talking about problems with the language models, both in terms of energy costs and social sense—cultural sensitivity concerns, it didn't cite the work that Google has done to mitigate those concerns. 

So the way he's saying it, Google doesn't want to censor her. They just want their—she's working at Google; they want their point of view represented as well. Which, if that's true, is reasonable. So they said that one of Timnit's conditions for continuing was knowing the identities of who gave feedback on the paper, which they couldn't do because it was anonymous. Now, I could point out that Timnit denies that that is what she required. So we don't have the email. And so she claims...

Aaron: And this is where we get into the, “He said, she said.”

Max: Right. Right. So she said “No, I never was. First of all, it was never anonymous—it was never supposed to be anonymous. And I just want more transparency.” So that's what she's saying on Twitter. Jeff Dean's email doesn't look like it was written by a real person. So probably, not Jeff Dean, it looks like it's written by lawyers. I suppose lawyers are real people, too. Okay. 

Aaron: Not when they're acting in their professional capacity. 

Max: Yeah, exactly. It's written by committee. And they use terms like "We respect her decision to resign"—like, yeah, we know, we know what happened. But I know it's because, without that language, they open themselves up to a lawsuit. So it's hard to take that statement—I can understand why people don't take that statement seriously. Her supporters also say that, while they can cite specific rules that could, like, cause them to retract the paper, there are essentially so many rules that no one follows, and they're usually waived, so they've kind of arbitrarily decided to come up with minutiae to knock down this one, if that makes sense. So that's what they're saying has happened. 

So again, I don't really have a good sense of how to adjudicate this. I'm just trying to give all the different sides of the argument. So she has a lot of supporters in the company signing petitions, and they're all threatening to leave—it was like the walkaway campaign, I think it's called. The CEO, Sundar Pichai, has called for an investigation. And he's been criticized for not going far enough. Timnit called his response dehumanizing because he spoke of her departure and not of her firing. So that is my attempt at putting together all of the facts. How well do you think you understand—not necessarily what happened, but what the layout of the facts is here? 

Aaron: I think I've got the big picture. The—knowing a little bit more about how exactly this review process was supposed to work, is intriguing. But I think it's a little bit beside the point getting down to the nitty gritty... 

Max: Yeah.

Aaron: ...that's not something that we're going to resolve on this episode. 

Max: No, honestly, all of the academic, like, paper acceptances—it's all such a pain in the neck. And honestly, it's, like, disrespectful; you kind of expect to be disrespected in that process. You're sort of naive if you don't. But anyway, I'm not naive.

Aaron: I mean, I've only worked on, in recent memory one, one academic paper, well, at my current company, and mostly it had to be reviewed by marketing before they would allow me to put my name on it, but I wasn't so concerned about what marketing had to say, given the technical level of paper.

Max: Right, right, right. All right. So part two: what was Timnit Gebru working on? Let's dive deep into that. You can always tell the spin—her supporters called her a renowned top AI researcher, which are very... 

Aaron: renowned, mean top or subjective? 

Max: Yes, yes. But they're very complimentary terms. And they could be justified. Her work on racial and gender bias in AI does get a lot of attention. But it's also that stuff gets a lot more attention anyway, because of the subject matter. And because of the cultural zeitgeist. She has won some competitions, some best papers, and she's a high rank at Google for sure. In 2018, in a paper at Google, she found that facial recognition works on—Google's facial recognition, works on white people much better than on black people because there are a lot more white people in the training set. So I mean, I always hate when people than, not hate, I always like, get annoyed when people say, “Oh, that means that the AI is racist.” It's like, well, what does that—I probably “racially biased” is a better term. But because, well, it's hard. 

Aaron: It's not exactly garbage in, garbage out. But there's a certain element of that to it. 

Max: Yeah, there's a certain focus of where it's better; it's gonna be better on the things that it's been taught, obviously. So Google has no problem with this research. They let her put it out just fine. Of course, the language model research goes more to the heart of what Google does, because their search algorithm uses the kind of language model research that she has in the current paper. So what is the current paper? We have the title. The title is "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" And there's a key term in there. The key term is parrots—in other words, like the animal that squawks and repeats whatever you say back to you, because that's what these language models do. 

And so one of the aspects of the paper is the energy costs to train these things; I kind of want to skip over that for time. Because the real crux of it, the real social concern, is that in the training data—so what these large language models do is they take all of the English text, say on the internet, and they use that to build the best possible model they can of the English language and what people are likely to write and what people are not likely to write. Not just grammar and vocabulary, but also, like, the incidence of phrases and words and how they fit together.

Sort of like what we spoke about a few episodes ago with Tai-Danae Bradley, where it was like, okay, you could have a red fire truck and a purple fire truck; just one is way more likely, even though they're both correct. So that's sort of statistically what it's doing. 

So, the real crux of it is that in this training data, there will be racist, sexist, and abusive language, and then the AI is going to repeat that as normal language. This has happened—I think the Microsoft bot that they put on Twitter went out and became horribly racist in, like, five minutes. Do you remember that? 

Aaron: I do remember hearing about that. And actually... 

Max: Yeah. 

Aaron: ...this reminds me of a recent episode of 99% Invisible I listened to. They were talking about the Enron emails, which I wasn't aware of, and how they were released into the public domain—or maybe not released into the public domain, but publicly released during the investigation. And because they were so publicly available as a database, that corpus was used for a lot of the development of early spam filters and language research and some early machine learning stuff along those lines. 

But there were concerns about, well, what's the sample size? It's people working in an oil and gas energy firm in Houston, so there's a location bias, a racial and socioeconomic bias, and a business sector bias. So, it's great as a tool...

Max: A topic bias. 

Aaron: ...but it doesn't necessarily represent everyone effectively. And so, the way that it—because it was kind of an early block in there, it is establishing some norms early on that are potentially still with us today. So there...

Max: So, one thing... 

Aaron: ...there’s real concerns there... 

Max: Yeah.

Aaron: ...and this kind of builds on I think some of those same ideas. 

Max: Right. Oh, by the way, this is from the MIT Technology Review—that's where we're getting this information, because they looked at the paper and were reviewing it; I couldn't read the paper myself. So, I'm kind of getting info from them. So, one thing that the article says—not in the paper—is that an AI model taught to view racist language as normal is obviously bad. And actually, I want to point out that that's not actually true at all. There are lots of such models. I mean, I even built one at Foursquare trained on offensive language tips. We use it to filter out sort of abusive tips. 

Aaron: Yeah. 

Max: In Foursquare. 

Aaron: I guess the work... 

Max: I think it shows the bias of the people writing, where they're like, anything that's taught to be racist is obviously bad—but in some cases, there are uses for those models.

Aaron: "As normal" is doing a lot of work in that statement there, and how exactly that's interpreted. Yeah.

Max: Yeah. I think it's... 

Aaron: So, I would say, inherently bad and useless, are probably inaccurate. 

Max: Okay. 

Aaron: The "as normal" maybe gets them around that, but I think that's a valid point—that simply excising this from the model doesn't solve your problems; that would just be creating a potential blind spot for this kind of thing. 

Max: Yeah. Okay. So that's one thing about the article; there are three more, and the second one is really the one that I have a strong dislike of. The second aspect of the paper is the theory that social change requires us to change our vocabulary over time to be more politically correct. And it sounds like Timnit and her team want the AI to use the most current language, the most woke language, if you will. And there's no way around what the MIT article is saying here: it's not just about filtering out abusive language. 

It’s also an aspect of wanting to wield vast power over computer generated language from AI to shape the direction of society. And this is one part where I'm really not down with this. I mean, this is, you're gonna encourage Google to wield that power? That's just that's kind of horrifying to me. And it's amazing to me that people don't see it.

Aaron: Not to go too far down that divergent rabbit hole here. But it's hard to come up with a clear right answer there. Because either they do advocate for it, and they do go down that path, in which case, they're doing one thing, or by choosing not to do it, they're making another statement. And so there's—it's not an easy thing where you can say, “Well, I just wash my hands of it. We're not gonna deal with that” because that in itself is functionally taking a side in these issues, at least from the perspective of many of the people to whom this is important.

Max: Yeah, so I'm not sure I get what you're saying, though, like, they are taking a side. They're just like, “Hey, this is the right way to talk about something. And I want to kind of dictate down the line, how people should be using language.” And then, inject that directly into the AI, it almost sounds like. 

Aaron: Yeah, I guess the choosing to do something is definitely an action here. But also, choosing not to do something potentially opens them up to similar, if maybe, slightly different emphasis criticisms there. 

Max: Sure, sure. But this is sort of the like, permanent social revolution, ideology type thing. Okay, let's get into the third and fourth things which I have some sympathy for. So the third thing I mentioned is it's hard to audit these large data sets for, what are you teaching the model? Because there's so much data. 

Here, I totally agree. That's a big problem. We had a big episode on it called Big Algorithm. 

On Episode 27 we said we want big algorithm over big data and that we want to find algorithms that learn from data that is very well vetted, versus just trying to throw as much data at it as possible. So I agree with her on that. And the fourth is that the language models do not actually attempt to understand language, they just statistically mimic it, which is another one that we've been talking a lot about on the show. We've talked about it several times. So I think there was—what was the episode understanding “understanding”? I'm going to try to—now I'm going to pull up the archive immediately, because I didn't...

Aaron: Yeah, but I think that gets to... 

Max: ...134, yeah. 

Aaron: ...what are the key differences between—I forget what the term for it was—but a powerful narrow AI and a general AI. A general AI has to actually understand what it's doing, whereas you can make a narrow AI that can fake it really well, but it doesn't necessarily have an understanding of what it's doing. And so that makes it a little bit more fragile, maybe, or brittle. 

Max: Susceptible to error. Yeah, and right now these statistical models do exactly that. So this would be an interesting paper—I suspect it has several great points, and it has one thing that's ideologically driven that's crazy. But yeah, I would want to publish it. So why would Google want to suppress this? That's a good question, even though it goes to the heart of Google search. It's just my view, but a publication of this paper would not, like, lead to calls for dismantling Google search. It could even lead to more research to, like, improve the models and allow Google to make even more money. 

So I don't really like the way it's kind of hinted at is like she was going after the—their main moneymaker and they wanted it stopped, I just don't, I don't see that, I could be wrong, I just don't see that as a motive. 

So, anyway, that is what she was working on and that's what the topic is about. I did want to spend a little bit of time next talking about some of the racial bias in AI that I found, which does exist because of, largely because of the training sets that are used, and the lack of understanding. But any questions about this so far? 

Aaron: I'll just throw out, with your last point there, that the kind of cui bono—who benefits—question, of what Google actually gains by blocking this paper from publication, is a good one to be asking. But at the same time, I'm very hesitant to give Google the benefit of the doubt in acting. They've come a long way since their "do no evil" days. 

Max: Yeah.

Aaron: So, I cannot give them the benefit. This sounds like a case of bad actors on both sides. So I don't know who I want to root for.

Max: I've said that, I think—as much as I went into this story, I started disliking all sides of it more and more, I'm sorry to say. But, yes, let's—so there are probably some people saying, like—well, there's nobody who says that bias in AI is not a problem. But let's talk about some of the examples that we found. One of them is from this year, which is, like—those things where they take your temperature by pointing something at your head, so that they know whether or not you're allowed in because you don't have COVID? 

Aaron: Yeah, yes, we have to do that at daycare every day when we drop the kids off.

Max: Okay. Okay, so when Google sees a picture of that with a hand with one of those in it, it's called “hand holding a monocular.” A monocular is a binocular with only one side, it kind of looks like that. I can see why Google would think that. Now, make the hands black and it says hands with a gun. So...

Aaron: Yeah. 

Max: ...that's an issue. There's another issue, and this one is—so there's a question I want to ask: what is the actual harm being done there, if it's just some algorithm that people made but it's not being deployed? So, that's a fair question to ask. Unfortunately, it's not a question that AI ethics teams, I think, are asking enough, because there are going to be a lot of trial algorithms out there, just people testing stuff in research. And there is a big debate about: if you test something in research, and you try something out, and it has one of these horrible things, are you now a horrible person? 

Aaron: I guess the question I want to ask is: is this an issue with the algorithm? Or is it an issue with the training data set? And maybe it's not possible to separate those two... 

Max: Well, yeah, right. 

Aaron: ...they're inextricably bound.

Max: Well, yes. If you create something in the lab, so to speak, and you don't productionize it, is it a problem? And some of these people are saying, “Yes, yes, you can't do that because then someone else is going to come along and productionize it.” So, oh, well. There's—I don't agree with that, by the way. 

So, there's another one with GANs, the generative adversarial networks, where it, like, generates faces, and there's a version of it where it can generate faces from pixelated faces. So you could pixelate your face, and then if I give it a pixelated version of your face, it reconstructs your face. 

Problem is it turns everyone white because it’s seen so many white people and sort of, like, turns—there's one, I have a link here. And all these links will be on localmaxradio.com/149. And so there's one that turns Obama into a white guy. Another example, I found that you ever do a Google search on yourself, Aaron, and I've done it, I search Aaron Bell, and then it comes up, see Aaron's criminal record. Have you seen that? 

Aaron: Not recently, but yeah, I know what you're talking about. 

Max: Yeah. So apparently, if you have a name that is more commonly associated with African Americans, in Google search those "find so-and-so's criminal record" things are going to be ranked higher. And then of course, there's Gebru's initial work on facial recognition, which we talked about in the beginning. So, these are not necessarily generated by people being intentionally racist online and the model sucking that up. It's more just, like, what the training sets are and who the people doing the training sets are—it's kind of inevitable that people working on AI will not come up with every case and will kind of be biased towards who they are. I don't see a way around that.

Aaron: And some of this gets to a more basic question that I think we've talked about before, with the curation of our online experience, the curation of the data that we're exposed to. Do we want a pure, unfiltered feed that's based on almost a frequentist assessment of the information out there? Or do we want some sort of algorithm, whether it's Google or some other institution coming up with it, that's normalizing and adjusting that to, for lack of a better term, correct it—to be a more accurate, or maybe not even more accurate, but a "better depiction of reality"?

Max: Right. I mean, it's very tough to actually make everyone happy with one of these things. Obviously, the things I pointed out are obvious things that Google could then go ahead and fix and they do it, which is one benefit of having a group that tries to discover these things. But there is some inevitability, you don't get a neutral algorithm in life, so to speak, it's just not, it's just not possible. Life is inherently biased in many ways. So, again, trying to I feel like a...

Aaron: if you as a person or the system itself have no biases... 

Max: Yeah. 

Aaron: ...then it can't make any—well, I feel awkward using this term, but it can't make any discriminatory assessments. And discriminatory here doesn't mean racial discrimination; it means being able to look at multiple things and make a decision. Is it A or B?

Max: Yeah, yeah.

Aaron: And that's where the value is in these types of algorithms—being able to tell things apart and extract information from what would otherwise be noise. 

Max: Yeah, yeah. So I feel like an AI ethics team, when looking at this should really look at it like, “Hey, bias is inevitable. But we can take—make an attempt to be neutral by looking at these really bad situations and trying to prioritize them.” But I do find it concerning when you come out with—when you try to come out with outrages all the time. And it's like, Okay, you do have to prioritize, and you do have to ask for each one. Well, how bad is this? Who are you harming? For each particular one is very important. The criminal record thing is a good example, like, you could actually point to who you might be harming in that situation. Whereas turning Obama, white, I mean, maybe it's, like…

Aaron: I'd say that it's less a harm done than it is a symptom of a fault in the approach. 

Max: Correct. Yeah, correct. Okay. So let's now turn, if you want to talk about if you want to kind of back out a little bit. Let's turn to workplace dynamics, and what's kind of going on here, shall we? So the first question is, is an ultimatum a resignation? Have you thought about this? Right? 

Aaron: Yeah. And I don't know where it comes down legally. But yeah, it's very open to interpretation. The way I would read it: if you say, "if you don't do X, then I will no longer be able to continue working here," then when they say, "well, we're not going to do X," you've already offered your resignation and they can say, "we accept it." Perhaps, legally, it may be required that there needs to be an affirmative step where the employee then says, "Okay, now I've resigned." And if they don't, then, you know, things kind of go back into neutral. 

Max: Yeah, it could be like the employer has to say I won't do x, and then the employee has to say, and therefore I enact my threat. 

Aaron: The Hollywood version of that is, is that and what I'm picturing in my brain is a Cabinet Officer or Cabinet Secretary coming up to the president and handing them an envelope and saying this is my resignation. “I can't in good conscience continue to serve in your administration, if you do XYZ.” And then either the president says, “Thank you we appreciate your service, leave your badge on the desk on your way out,” or they say, “I'm tearing this up, you can't possibly resign. We're going to change our stance on x and y,” which is not exactly what happened here because I don't think the verbal or the...

Max: Or, “We're not changing our stance, but I need you to stay.”

Aaron: Yeah, well, yeah. “I don't accept your resignation, you're being held here against your will.” I don't think that that's quite parallel to what happened here because even if she did say in her email that I would be forced to resign or then that I would no longer be able to continue my work. That's not the same as handing an actual letter of resignation signed and dated.

Max: Yeah, yeah, sure. 

Aaron: So, I think there's maybe some cultural gray area there. But that can only really be cleared up by a California employment lawyer. I'm assuming that she worked in their California offices. 

Max: Yes, yeah, I—well, I'm assuming that too. I don't know for sure, but I think it is. Okay. So there are examples when people have handed in their resignation and it was absolutely the right thing to do. There was actually a case that we did in business school that was very interesting and that I think is relevant here—which is, I mean, kind of the opposite of what's happening—which is Donna Dubinsky, who's someone I met and who is very amazing. And what she did in 1985 was—she was at Apple. I hope I don't botch this, because I learned about this 10 years ago. And obviously, the story is—whenever you get a Harvard business case, there's already flourish in there. 

So, but my understanding of it is that she was working at Apple, and it was her job to deal with distribution, and of the products. And then Steve Jobs was like the former founder, he was not in her line of—he was not in her department, her line, but he came in anyway and said, “We're not having warehouses. We're doing X, Y, and Z.” And I think it's eventually she basically tried to resign and say, “Look you have the former founder coming in and telling us what to do, and it's not the right thing. And I'll stay if you let us come up with a counter proposal.” 

And amazingly, they said, "Oh, shoot, she's right." And so they let her do that, and that was a gamble that paid off. But I'm sure that when she did it, she had to be very careful about how she communicated with people—especially if you go against Steve Jobs. How scary would that have been? 

Aaron: Oh, yeah, because he's definitely a—take note, like, he seems like absolutely the type of person who would call you on your BS on your bluff and say, “Okay, goodbye.”

Max: Right. 

Aaron: If he was not convinced by your argument. 

Max: But he was no longer CEO. 

Aaron: I see that. 

Max: So... 

Aaron: Got you.

Max: Right. So the CEO said, wait a minute, we hired someone to do a job, and now the former founder is stepping in, and they're resigning even though they're going to be very good at this job and they have a very good proposal—we'd better keep them. So that's always a very, kind of, aspirational, wow-that-really-happened type situation. But yeah, I think the difference is, first of all, you need to have an extreme amount of emotional intelligence to do it right. And you need to be a very good writer and communicator. 

Aaron: Yeah. And not to mention, you need to have a realistic conception of your value to the company. 

Max: Yes.

Aaron: And I would say that, if this is a step you're willing to take, you need to be prepared for them to call your bluff on it—for it to not be a bluff. If you're not actually willing to walk, then you probably shouldn't be taking this step. It's not the movies. Not every Hail Mary pass comes through. 

Max: Yeah, there's a generational—it almost seems like there's a generational difference here. Writing a long rant might feel good, and you might think you're justified in it, but it's ultimately not going to get you what you want. So, that's something to consider. 

Another lesson here is, I think, how much of a mess the Google organization is. Most big organizations end up being this way. But, wow, the young, hip, googly, smart-people Google is no longer there. It's all lawyers and politics now. It's a very political place. And what I see from the things that I've been reading is there are a lot of people there ascribing bad motives to their opponents when they're trying to resolve disputes — a very bad sign when you're trying to run an organization. 

Aaron: And they're not the only ones… I think there's a fair amount of this going on at Amazon, at Facebook. I'm sure Microsoft has had this for decades now. Where you've got a generation of folks who are not necessarily on board with what the old suits are doing. And it's creating more tension, not less, as things continue to develop. 

Max: Yeah, I think a lot of the employees don't trust the leadership there. It's kind of like, they feel like they're the old boys club. And so there's a lot of tension there. It's also it's hard not to mention James Damore, on the other side, who was fired for that memo that some people said was sexist, probably about three, four years ago, if I remember correctly. 

And so he was kind of—I mean, I hate to use the term, but, like, he's kind of a nobody in the company. So it's definitely not like the leadership was with his ideology. But it definitely created a lot of division. And remember, he was kind of encouraged to write his thoughts. So they sort of seem to have this—and this is something we all use, by the way, and it affects the way we think about the world, because it gives us our results. 

And when you find out all the people inside just can't stand each other—it's getting worse and worse. So I think more of these issues are going to come up: more James Damores, more Timnit Gebrus. I know neither of them would like to be associated with the other one; I don't want to make some, like, yin and yang... but they both have this situation of being fired by Google amid a big controversy. 

Aaron: They can join the ex-Googlers employment lawsuit support group, together. 

Max: Yeah, another observation: the leadership is just completely soulless. They hide behind legalese. And so what is their focus? What's their ideology? It's neither one of those two people's. It's really kind of like they're politicians. They kind of breed true believers in their ranks and then they cynically use it to maintain their power and position. That's kind of how I see it. So, whoever rises to the top of these organizations are the people who are better at that. And I think Timnit kind of was a true believer, and she's almost a victim here.

Aaron: Yeah, I can't blame them for hiding behind the legalese here. Because...

Max: Yeah. 

Aaron: ...certainly the C-suite leadership has an obligation to do that. Because one misstep or misquote here exposes them to huge liability as a company. Not to mention the PR...

Max: I mean, and it shows where we are in society too. 

Aaron: Yeah, they could probably deal with this better than they have. But... 

Max: Oh, yeah. 

Aaron: ...to not have the lawyers in the room to protect them from themselves would be foolish here. I think the big takeaway here is that Coinbase is coming out of this sounding like a brilliant savant—this is exactly the type of situation I think they were trying to avoid with their maneuver earlier this year, by giving an out to people so that this type of clash doesn't happen. And we're still waiting to see what the long-term shakeout on something like that actually is. But... 

Max: Right—remember, you saw they had a whole hit piece on them in the New York Times saying how they're a racist company, and some former people said this and this and this. So I mean, that's not over for Coinbase, obviously; we'll see. They could have anticipated that and said, "All right, I don't know if it's true or not, but let's move on from this now that we've done it." 

But in some ways, I think Timnit is kind of a victim here, because she was kind of sold a bag of goods by Google—that this is what our company is about—which is something they could never possibly be. And then she's disappointed when it turns out that they totally are not that. 

Aaron: Yeah, I can't find your comment from earlier about it. But the bit where she was stating—I think it was in an internal memo—oh, yeah, it was the telling people to stop cooperating with company initiatives. And that, here it is, "diversity initiatives are all glass and mirrors," essentially. I think maybe…

Max: Those are my words not hers. Yeah. 

Aaron: ...that she was foolish to have ever thought otherwise. But clearly she came into it thinking that was not the case, and became disillusioned, and I can't blame her, upon seeing the real writing on the wall there, for getting upset. But I also can't blame Google for not wanting somebody internally who's potentially spreading that kind of toxic divisiveness internally that... 

Max: You mean like what she was doing in the rant? 

Aaron: Yeah, that's a nightmare for HR, and if she didn't already have a target on her back, that certainly put it there. 

Max: Yeah, but I'm just saying it does take, I was about to say take two to tango. But I hate using those...

Aaron: Takes two to tango?

Max: ...cliches. No, but like, I don't feel like—it's not like she joined Coinbase and then got all upset that they weren't doing what they were supposed to do in terms of the diversity initiatives. I'm sure Google sold her that this is exactly what we're doing and we promise that this is how we're going to do it. And whenever you say, as a company, this is what we're going to do and you probably—and you’re like, I feel like they mislead their employees a little bit. 

Aaron: Yeah. 

Max: With their kind of brainwash, they have their own brainwashing is kind of what I'm saying. I don't think anyone who's worked at Google would dispute that. But you could—you tell me. 

Aaron: They probably have it to a greater degree than many companies. 

Max: Yeah, yeah, I would think so. Okay, so, what other notes do we have here? What would be a good approach to AI ethics? I don't really have good insight into how AI ethics organizations work; I only know from this particular example. I think some of the things they bring up, though, seem reasonable and are things that should be brought up if you're going to build AI. It's almost like a part of testing, right? 

You want to make sure that you won't, but you want to try to understand where bias comes from and what it is, rather than trying to be flashy and activist. And that's a very hard rope to stand on. It's very hard to do this properly; I think you need to have people in those positions who kind of consider all sides and don't take extreme positions. But the people who are attracted to AI ethics might be the more activist type, so it might be tough to separate them.

Aaron: Yeah, right. At face value, that seems like a reasonable conclusion to draw. I mean, it's much like when you're designing a piece of software that's going to be used by internal or external users—you want to have a security person in the room to inform your design decisions. It probably makes sense to have someone thinking about these ethics-related topics, so that you're not coming in at the 11th hour and trying to fix a problem; you're designing with these things in mind. 

Max: And there are a bunch of areas in ethics, I feel like—and it could just be her work—but I feel like they're not considering all of AI ethics; they're just looking for racial and gender bias, and in some cases cultural bias. But don't you feel like there should be more to AI ethics than that? I mean, it's also only certain—like, Facebook came out and said recently that hate speech toward white males is now going to be allowed, or at least not considered important to... 

Aaron: It's not being assessed or dealt with in the same fashion. 

Max: Right. But I feel like this bias stuff is not the only ethics. There are also data privacy concerns. 

Aaron: It's certainly the most, maybe it's the most controversial, but it's the one that's getting the most attention. 

Max: I guess she dealt with energy costs as well, although I see that as more of a cost issue. But anyway, yeah, there are definitely... I feel like there are areas that should get more attention that aren't—I haven't thought about laying out what those are. Privacy would be a big one. 

Aaron: Yeah, for sure. 

Max: Privacy and data security would be a big one, I think. 

Aaron: There's one thing I'll add to the topic of having an ethics department, and I don't know where I originally saw this. So I apologize to whoever I'm stealing it from without attribution. But a company having an ethics department is in general a bad look, because it gives the perception of “Ooh, and in this room, this is our ethics department. And I'm now going to close the door because they do their ethics in there and we don't want that contaminating the rest of the company.” You know that they're the only ones that are allowed to think about ethics. 

Max: Yeah. 

Aaron: It's a semantic thing but it tickles my funny button. 

Max: No, it does sort of suggest that you have a little bit of a problem. I almost wish there would be, like, AI bias testing and AI privacy testing departments—that would almost sound better. But yeah, when we start our own company, we can do it that way. All right. So I think we got through this in 55 minutes. That's pretty good. 

Aaron: Yeah, right on the money there. 

Max: Alright, so I want to thank everybody. Have you been on the Locals site, maximum.locals.com? 

Aaron: I have. There have been some interesting exchanges there. A little—a couple of sneak previews of upcoming episodes a few days early. Not necessarily the episode itself, but a hint of what's going to be dropping soon. So I think people would have appreciated seeing that.

Max: Yeah. I feel a lot more free to post whatever I want on there because you don't have all of Google—all of Twitter and stuff that can respond. Because usually when I put something in, somebody responds on Twitter, it's not someone who I care to interact with. These are my people. 

So please join us at maximum.locals.com to support the show. You could become a member, or you could just join for free and watch, or you can participate, and you could be a supporter—it's only $4 a month. Not a huge part of your budget, and you're really supporting the show, because my costs are still not covered. I don't have a whole lot of costs, but I was worried. I am glad and very, very thankful to those of you who have been supporting the show. I am pleasantly surprised that people are actually chipping in. 

Aaron: It's good to know, we're not just speaking into the void. 

Max: Yeah, yeah. So I'm trying to get to 100 members there. And I think we can get there pretty soon, maybe early next year. I could do a lot more on Locals once we have 100 members; it makes sense. I think Locals, as a whole, is trying to get the creators to get a bunch of people on there first, and trying to incentivize local communities to spring up. So yeah, definitely be interactive at maximum.locals.com, and check out localmaxradio.com/149 for the show notes. And yeah, two shows left. 

I think next week, I just—there are a few articles that I wanted to get to today, but I wanted to focus on this. But there are a few articles about emerging technology that maybe I'll just go over as a solo show next week just to update you on some of the maybe more exciting things that have happened in 2020 that kind of happened under our noses. And then I'm hoping Aaron after that you'll join me for the last show of the year where we'll do a year look back of 2020 which will be one for the ages, I expect. 

Aaron: I'm really afraid that all this anxiousness to get 2020 over with and get into 2021 is going to result in an epic disappointment when we realize on January 1st that we're dealing with all the same problems that we were on December 31st. But I'm trying not to let that depressive attitude drag me down in the holiday season here.

Max: Okay, okay. Sounds good. All right. If there aren't any—If I hear no objection, any last words, then I'll just close out the program. What do you think?

Aaron: I second the motion.

Max: Okay, have a great week, everyone.

That's the show. To support the Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.
