Episode 272 - Data Science History with Chris Wiggins and Matthew Jones

Chris Wiggins and Matthew Jones dive into their new book, "How Data Happened: A History from the Age of Reason to the Age of Algorithms," with Max. We learn that we can reach back into data history and find stories that even experienced scientists and engineers will find surprising today.

Chris Wiggins is an associate professor of applied mathematics at Columbia University and the Chief Data Scientist at The New York Times. Matthew is the James R. Barker Professor of Contemporary Civilization at Columbia University.

Matthew Jones and Chris Wiggins


Matthew Jones: Profile
Chris Wiggins: Profile | LinkedIn | Twitter

How Data Happened: A History from the Age of Reason to the Age of Algorithms


Links

Harvard Business Review: Data Scientist: The Sexiest Job of the 21st Century
Amazon: Chris Wiggins
The Local Maximum: Email

Related Episodes

Episode 6 - Facebook Data and the Election, Decentralization to the Rescue
Episode 265 - The Multi-Armed Bandit

Transcript

Max Sklar: You're listening to the Local Maximum episode 272.

Narration: Time to expand your perspective. Welcome to the Local Maximum. Now here's your host, Max Sklar.

Max Sklar: Welcome everyone, welcome! You have reached another Local Maximum. 

We talk a lot about data science here on the show. In fact, we started out as a podcast on Bayesian inference way back in episodes zero and one, five years ago, and it's still a podcast on Bayesian inference in many ways.

If you're like me, you've probably been to tons of conferences and meetups, or God forbid, the Zoom meetups and webinars, but I hope not too many of those because those can get old. Particularly in the New York City area, we used to go to tons of these things where we talk about data science. I went to NYU; we studied that. We're getting the story on how data works, but we're not getting the whole story.

In 2012, back when I was first starting my career at Foursquare actually, the Harvard Business Review called data science "the sexiest job of the 21st century." That same year, NYU started its official Data Science Master's. Now, I, of course, graduated from NYU in 2011, the year before. That was the Information Systems Master's, but they took a lot of the courses I was taking and put them together for that program. That was more or less the start of the data science slash machine learning engineer bull run.

Now I think this particular iteration of data science appears to be morphing into something else, the ChatGPT era. But believe it or not, there's this whole history of data before 2012. History did not start in 2012. In fact, the Mayan calendar ended in 2012.

My next guests are going to talk about this, and we're going to learn about it. They not only understand how we work with data today, inside and out (you'll see their credentials), but they've also researched the history of the field extensively, from the 18th century till today. I honestly wouldn't want this story from anyone else.

Chris Wiggins is the Chief Data Scientist at the New York Times and a professor of Applied Mathematics at Columbia University. Chris is also someone you get to know if you work with data in New York City. I first met him, I believe, when I was at NYU, through the hackNY program, and later through various machine learning symposiums and meetups, and through work at Foursquare itself since then.

Matthew Jones is the James R. Barker Professor of Contemporary Civilization at Columbia. The book is called How Data Happened: A History from the Age of Reason to the Age of Algorithms.

Chris and Matt, you've reached the Local Maximum. Welcome to the show.

C: Thanks for having us, Max. 

M: Yeah, we're thrilled to be here with you.

Max: So the book is called How Data Happened: A History from the Age of Reason to the Age of Algorithms. It's so new, it's not even out yet. But I think it's going to be out by the time this podcast goes out, so I'll have a copy of it. But I did get an advance PDF, which I really appreciated.

When I read the preface and part of the first chapter, I thought, I have to have this book, because it combines history, and teaching, and practice, all in one book. I've read books on each of the three separately, but I really love that combination, so I was very excited to see it in this book. And also, you mention people I've met in person, which is also pretty cool.

So let's start with what's the motivation behind this book and why did you decide to write it now?

C: The book really grew out of a class we've been teaching since 2017. And as for the class in turn, why now? Well, we felt like over the last 10 years or so, people have been trying to make sense of how data has become so pervasively a part of their lives.

We live in an age where data-empowered algorithms really shape everyone's personal and professional and political realities. I think many people are trying to take stock of how it got that way. Even more so now that the headlines are moving so fast. Even in the last three months, it seems like every two hours there's some new headline that involves data, or massive datasets, or a brand new product or brand new algorithm. I think for a lot of people, it's hard to keep up.

In our experience, one way to get perspective is to have a historical perspective: to see what it is about the present day that's new, and what's not so new. We think that historical context can help people make sense of how data happened and where it's headed.

M: Yeah, we want to get across both the excitement of the introduction of data into different domains, whether it's economics, politics, the examination of human intelligence, or other sorts of things, but also the way that there are real questions about how those are applied. It isn't just that the technology develops by itself and then gets applied, and then one has to mop up afterward; rather, there are major considerations all along the way.

Our history opens up a set of questions about how it is that we got the way we are, whether we want to continue on those paths or deviate from them, and where we might want to transform them more dramatically.

Max: Yeah, and your book goes all the way back; you start in the late 1700s. What was going on there, what shift provided a good starting point for you guys?

M: So there are two things that really come together by the end of the 18th century. On the one hand, you have this massive new prestige of physics. The new physics that had been elaborated was able to deal with things, both planetary bodies and other sorts of things, and people became excited: could that be applied to the moral and political sciences as well? At the same time, there was a new enthusiasm for using numbers to understand people and the state. The US Constitution includes the census, right? The movement gets going to collect data.

We start there because you have these two movements that come together, first kind of tentatively, and then very robustly by the 20th century: the urge to analyze data using statistical tools, and the beginning of a real snowballing of the collection of data. So when those come together robustly by the 20th century, you begin to have the kind of phenomena that we're interested in, punctuated above all by the introduction of electronic computers around the time of World War II.

Max: I want to jump ahead here and talk about this collaboration more because you, Matthew, you're a historian, is that correct? 

M: Yeah, that's right.

Max: I was wondering, you have a data scientist and a historian working together to write this book. Did you learn anything, and I guess this is more for Chris, about the research and telling of history by working on this book? In particular, what do scientists and engineers not understand about the work of a historian? Maybe I can get both of your perspectives on that a little bit?

C: Well, I was already a fan of history, but I certainly learned a lot from Matt from teaching this class. That includes how to construct a class like this, a class that's about texts and ideas, as opposed to... I've been teaching a lot of math classes where I walk up to the chalkboard, do the calculation, and then talk about the calculation. It's very different constructing a class that's built around ideas and intellectual transitions.

I guess another thing is, I was always interested in history. My research has always been very multidisciplinary, which means I never really know the full history of any of the fields I've been working in. I've worked in physics, biology, mathematics, and computer science, applied to complex systems from the natural world. So often, I'm entering a field in the middle of its stream, and there are a lot of things that seem surprisingly novel, or even just word choices that must have some interesting history.

If you've encountered the word regression, you've got to wonder: why is this called regression? All of these things have a history about the communities that formed those techniques and what their interests were, and a chance to think about how things could have gone differently, which is often useful for helping us envision possible futures.

Max: But why is it called regression? I don't even know, I actually have not thought about that.

Chris: Well, this is a great story. Once you get to the appropriate chapter, you will learn how, at the end of the 19th century, people started putting data to work to understand the greatness of their states, or in the case of the late Victorian Empire, the greatness of the Victorian Empire and its perceived decline. There was great social concern about the rise in poverty. People were looking for causes, including possibly the rise of immigration. There are many things in the decline of empire that may resonate for people in the present day.

Particularly, we looked at a gentleman scientist named Sir Francis Galton, who's the person who gave us the word "regression," the word "correlation," and the word "eugenics," which was his main problem, his main project, to try to improve society.

So he was looking at how great certain families are. He was a cousin of Charles Darwin. And so he spent a lot of time thinking about the greatness of different families, and effectively looking at: why is it that I am so great, as are so many of my family members? So he looked quite a bit for something he could use as a proxy for greatness, which was height.

So in the very first regression paper (and the students in class actually do this first regression), he looks at the height of children as a function of the height of the parents. There are two parents, so he had to take the average, and if you make that little scatterplot and then fit it to a straight line, you're doing the first ever linear regression. He calls it regression because you get this phenomenon where the really tall parents don't have kids that are quite as tall. They regress towards the mid-height of the population in question.

That sort of regression, where even very great parents might have children who are slightly less great than they are, that sort of concern, which again fed into the perceived decline of Victorian greatness, was what was driving Galton, and that drove the original usage of the word regression.
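For readers who want to see this phenomenon concretely, here is a minimal sketch in Python. The heights are synthetic, not Galton's actual data, and the constants (mean height, heritability, noise) are illustrative assumptions; the point is simply that the least-squares slope comes out below 1, which is the regression toward the mean that Galton observed:

```python
import random

random.seed(0)

# Synthetic illustration (not Galton's data): a child's height partially
# inherits the parents' deviation from the population mean, plus noise.
MEAN, HERITABILITY, NOISE = 68.0, 0.65, 2.0
midparent = [random.gauss(MEAN, 2.5) for _ in range(1000)]
child = [MEAN + HERITABILITY * (m - MEAN) + random.gauss(0, NOISE)
         for m in midparent]

# Ordinary least squares for a single predictor:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
n = len(midparent)
mx = sum(midparent) / n
my = sum(child) / n
cov = sum((x - mx) * (y - my) for x, y in zip(midparent, child))
var = sum((x - mx) ** 2 for x in midparent)
slope = cov / var
intercept = my - slope * mx

print(f"slope = {slope:.2f}")  # below 1: children of tall parents
print(f"intercept = {intercept:.2f}")  # sit closer to the mean
```

Because the child's height only partially tracks the parents' deviation from the mean, the fitted slope lands near the heritability constant rather than 1, so the prediction for very tall parents falls back toward the population average.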

Max: Interesting, it's almost like regression toward the mean.

Chris: That's the context. So that's the original regression. And for a century now, we've been using that form of supervised learning and just calling it regression, regression, regression. We've completely lost its original meaning.

Similarly, we've lost the original meaning of the word “statistics” itself. One of the fun things about starting in the late 18th century is the introduction of the word “statistics” into the English language, which when it first enters the English language has nothing to do with data or numbers. It's about trying to run the state, a government. 

Originally, the word has nothing to do with data or numbers, and almost immediately there's a fight between people for whom making sense of the state is something you do qualitatively, by understanding the greatness of its leaders, and other people who said, well, we could probably use numbers and little tables where every row is a country and the columns might be how many animals, how large a population, how many square miles, or what have you. For the first camp, that kind of quantitative statistics was called vulgar statistics, while the higher statistics was the artful, qualitative statistics.

It's a great place to start because looking at the change in the way we use words gives us a view into the way we change ideas and values, what we think is true, and why we think things are true. If we take a multi-century timescale, you can really see how those things pivot in time. Same with artificial intelligence, machine learning, and data science: part of the project of the book is to take those terms, which are now becoming intermingled and used in a fuzzy way, and tease out where these terms came from, and how the way we use them today is vastly different, how each has changed quite a bit in a way that's traceable from its original interest.

Max: Yeah, it sounds like it can almost take us out of the narrow worldview of our time. And even if we don't share the worldview that they have in the past, you can almost see things in a different light by understanding the origins of these terms and the origins of these ideas.

Chris: Yes, as James Grohmann said, “History makes the present strange.” So our goal in using history is not to constrain our reader, but rather to liberate our reader and help the reader see how things could be different.

Max: There's so much we could get into. I was just flipping through it and I saw so many great topics in the development of science in your book. I saw, I think, the central limit theorem, Bayesian inference, information theory, codebreaking, neural networks, the origins of all of that.

Do any of these topics stand out as the most surprising to you? Were these stories that you had known already and then just kind of decided to write down for people? Or was there anything that you looked into for this book, where you were like, wow, this is totally not how I expected this to have originated?

Chris: For me, pretty much everything before 1980 was new. I didn't really know it, so I learned a lot in writing the book. Maybe arguably before 1990, and even to the present day. One of the exercises in the book is to take a pair of lectures by Pat Langley about machine learning; one of the lectures is from the early 80s and the other is from 2010. I was already working on machine learning in 2010 and trying to explain the term machine learning to other people.

It's fun to look at two different essays written by the same person over multiple decades and see how that person writes about the changing nature of that term. Machine learning is a new enough field that what it meant in 1984, in the hands of one person, is clearly different from what it meant in 2010, even from the same person.

But I would say, about pretty much everything before 1980, I learned a lot. Certainly about the history of computation, digital computation, and World War II, and then how mathematical statistics became an academic field. Stories like the story of regression, the drifting etymology of the word statistics, and that sort of philology of how that word changed, along with pretty much every other technical term of art in the book. I learned a lot.

Matt: One thing I think both of us got pretty excited about was the extent to which, long before machine learning had its moment of prestige, when AI was largely symbolic AI and statistics was very mathematical, there was, in the military and the US intelligence community, this real drive to connect big-scale computers and large data stores with computational statistical techniques. Not for the purpose of producing truths about nature, but for producing actionable information.

These early documents from the 40s, the 50s, and the 60s echo the attitudes and approaches that people had in machine learning in the 90s and thereafter, because that was the ecology in which that kind of thinking could be supported and thrive. We found many fascinating ways in which things that had been quite secret and kept very close by the military and the intelligence community seeped out in various ways, some of them quite deliberate, some of them less so.

Max: Right. And if we talk about the 1940s, I believe you have a whole chapter on... was it called Data at War? That sounds like one of the interesting turning points, where you have the development of the computer as well as World War II happening at the same time.

Chris: Absolutely. I hadn't really known that story before, about how much of the origin of... honestly, I think there's an intellectual origin that leads up to present-day data science, born of that particular context of making sense of streams of messy, important, real-world data. There are many ways that data was useful in World War II, but the particular way it was used for code-breaking, and how that directly led to computation itself, was a story that I did not know.

Max: It looks like you organized the book into three parts. Just from eyeballing it, it looks like the first two are historical and the third covers some contemporary issues; maybe the second part is World War II onward. How did you decide to break it up like that?

Chris: Good question. I mean, I guess when I was growing up, I was sort of tired of hearing about how important World War II is, but the older I get, the more I realize, wow, World War II really was pretty important. It just had such a massive impact on the way the country organized resources, as well as norms.

So the way it organized resources around science I just hadn't appreciated. I grew up funded by the National Science Foundation and the National Institutes of Health, but I did not understand that those things must have had a birthday. There must have been a time when the National Science Foundation didn't exist. What was it like before that? And what was somebody thinking when they made that? 

Similarly, with digital computation, there must have been a time when people did not have digital machines. What were they made for? What came before? What was special-purpose hardware for making sense of streams of messy data before 1939? That is a story which I now know, which I definitely did not know before. So I learned a lot.

That's why we chose that as a topic for our chapters. We felt it was a real pivotal change, and it really kicks off part two, in the class and in the book, helping us understand what it means to make sense of the world through data on a computer.

Max: I want to get into some contemporary issues eventually; there have been recurring discussions of them on this podcast, which is now celebrating its fifth year. But before we go on to that, Matt, do you have anything to add about the organization of the historical content of the book, given your work?

Matt: Yeah. The early part of the book is very much about the development of a new scientific approach, the statistical approach to science: p-values, statistical significance, and whatnot.

We really wanted to have a pivot to when this becomes a much more industrial process, where you have actionable intelligence or business intelligence that's really quite detached in some ways from some of those older goods. It takes on a different character and requires larger infrastructures for collecting, recording, and then analyzing data. It becomes much more of an engineering task. And with that engineering task comes, of course, a much greater reach, and much greater concerns about issues like privacy and legality, before pivoting to the moment where it's not just an engineering task but is ever more ubiquitous, everywhere in our situation.

We really envisioned it along those lines. Take, for example, the codebreaking that happened at Bletchley Park: on the one hand it's famous figures like Alan Turing, but it's also teams, largely of women, carrying out large-scale computation. Our moment is one where there isn't just some lone genius coming up with the algorithm; that work is being done on a really industrial scale. That's central to understanding what we can do and some of the problems around it.

Max: Right, and it sounds like you guys cover not just the political implications, but the business implications, the cultural implications, all of that kind of together. I love this kind of interdisciplinary approach here.

Chris: We really had to get into the business implications of data because there was such an amazing moment in the 1970s around privacy when people were concerned about overreach of power by the state when the state was seen as having a lot of data. And that is such a useful foil for present-day conversations about the power of tech companies, which have all the data. 

I think it's really instructive to compare the norms and the laws and the narrative of the 1970s about the way people reacted to too much power and too much data in one hand, namely, the state, and how that's different from, and yet has some of the same fears as, too much power in the hands of one place, namely, private corporations.

Max: It's private corporations, and it's still the state too.

Chris: In some ways, though, the state hasn't been able to keep up. There are NSA documents we've looked at for the class where cryptanalysts were clear that there was a time when they were outstripping private industry, and then a time when private industry was outstripping the NSA, according to these documents.

Max: Wow. At what point was that? I mean, I could take a guess, but I'll hear from you. 

Chris: Matt what would you say? 1990s?

Matt: Well, in terms of computational hardware, what is being produced externally actually converges by the 70s. The NSA is really at the heart of a lot of developments around computers like the Cray and other kinds of things. But there's a sense that private industry is going ahead, and particularly is going out on its own.

In analytic technologies, they'd been sponsoring this kind of stuff since the Cold War period. But it takes off in academia and then in the large data-driven corporations by the 90s, in ways that the agencies appear only able to hope to catch up with.

Max: I would have expected, wrongly, that this was a product of the rise of social media more recently. Whereas this whole idea of entities controlling so much data goes way further back than most of us talk about, or hear about, in the media and in conversations.

Matt: Yeah, at the end of the 1960s, there were both concerns about government databases that would rule us all and the beginning of a set of concerns about consumer databases and the consolidation of credit reporting. So it goes back quite a ways. As it turns out, in the 1970s the US chose to highly regulate government collection of data and not really regulate most corporate collection of data, except in a few sectors, a choice that perseveres to this day.

Max: All right. So I wrote a couple of notes on some of the contemporary issues that are mentioned toward the end of the book that we've also been discussing on the podcast every week. We have discussions, arguments, whatever. Lots of people come on the show to talk about this stuff. I wanted to see what you found. 

You mention in this book, first of all, the rise of AI ethics groups at organizations, particularly large big-tech organizations, specifically Google. I've covered this while noting the incredible amount of drama and personality conflict these departments tend to generate. So the question I wrote was: what do we need to understand and think about when people call for AI ethics or data ethics?

Chris: Good question. I mean, one question is: what's in it for them, so to speak? What is it that's motivating companies to actually do this? Is it retaining and recruiting great talent that cares about it? Is it fear of regulation? Is it fear of losing customers from simply bad brand management?

There are a lot of things you could imagine leading a company to think very seriously about ethics. Then the next thing is, well, what are they going to do about it? Because for a lot of companies, it's very easy to appoint one person who's in charge of ethics, and then you can say, "Okay, well, that person is in charge of ethics now because we hired somebody for it." Or to state a set of principles, which, if they don't somehow couple to the process, don't actually change anything. In rare examples, there are companies that have either staffed up a whole group or found meaningful ways to integrate that ethical commitment into practice.

Part of what we were able to look at in very recent years is the collapse and flameout of that effort. From roughly 2016 through 2021, there seems to have been a huge rise in interest in ethics. A lot of it seems to have collapsed right around 2021. It will be a question for future historians to answer why there wasn't a sustained interest over, let's say, the last two years, with the energy there was from 2016 through roughly 2021.

Max: There was interest in it. Maybe I could ask a more skeptical question. We probably don't have the answer yet, but did it even do any good that there was interest in it, particularly from corporations, in the period you mentioned, the late 2010s?

Chris: A statistician might answer that we will never know the counterfactual universe that would have existed had none of these places given attention to ethics. Certainly, it had a big impact on academic work. Within academic computer science, fairness, which is one particular subtopic within ethics, has really blossomed over the last seven years as a field of academic and research interest.

Max: Interesting, interesting. That reminds me of how... I heard a pitch by a startup a few weeks ago, and the beginning of the pitch, to me directly, was them telling me how ethical they were and how seriously they took ethics. They told me this before they actually told me what they did. That made me very skeptical, because I thought, why do I need to be worried about your ethics if I don't know what you do yet? Anyway, that was just a personal experience, and a personal bias, maybe.

We also talk about the use of data for advertising, persuading, and manipulation. It's nothing new, but the scale of it today is much larger. Do you have any takeaways on that? Is it possible for us as people to become more difficult to manipulate by data efforts? You hear about the efforts to manipulate people through politics, through buying, through consumer behavior. When you're on one side of it, it's like, yes, we can use this to achieve our goals. Is there any discussion on the other side of it: how do I know that this use of my data is not causing me to do something that's not in my interest?

Chris: The first question was, is it possible for us to become more immune to persuasion, right? 

Max: Yeah. 

Chris: I'd certainly like to believe so. A slightly related phenomenon: immediately after the 2016 election, when there was renewed attention by many to fake news, I was talking to somebody who used to be heavily involved in British media, and he said, "Well, in the United Kingdom, people weren't so influenced by duplicitous news sources that look like real news, because they'd already been soaking in duplicitous media for centuries, decades, let's say." They were sort of immune to the idea that you have things that are not so plausible but are published in what graphically looks like a reputable news source.

I definitely think the norms can change, right? Technology changes, and norms change thereafter. And then, on a longer timescale, laws change to catch up to people's norms, which have changed in response to technologies. Maybe I'm naive, but I do have hope that people will learn how to use these technologies responsibly, including to use these technologies critically.

Max: I have hope too. If I remember correctly, in 2015, 2016 there were a lot of websites that were not actual newspapers, but you'd see them in your feed, and they'd be totally made-up stories. That still kind of happens, doesn't it? But it happens in a different way. People react to it.

Chris: It is a socio-technical system, so it's really also related to people's identity. I think a lot of technologists immediately responded to the narrative around fake news by saying, "Okay, well, we'll make algorithms that will be able to tell you what is actually true and what is not." And a couple of years later, it's not clear that everybody really cares whether it's true. Arguably, a lot of the way people react to persuasion and to shared social communications doesn't seem to be constrained by truthiness. It doesn't seem to be the central thing that determines how people engage with and share content.

Max: Yeah, yeah. I also have concerns about the idea that we're going to have algorithms that... Okay, you can maybe validate the source of something, you can statistically tell me something, but an actual news item, or a fact that's put out on Twitter, is something that has to be debated. I don't think an AI, at least today, can tell us whether it's true or not, if it ever could.

Chris: Understood. 

Max: And finally, this is the fun one, because last week I talked about the AI doomsday scenario on the podcast. I actually got some pushback from some of my listeners because I was very skeptical reading about the AI doomsday people. I told a friend the other day, come on, the doomsday cult is one of the oldest tricks in the book. But there are people telling me, look, there are a lot of smart people who think we're in big trouble. Keep reading, Max. So how did you choose to cover this topic?

Chris: I don't know, I think we are kind of on the same side as you; we feel like there are a lot of real, present dangers today. Rather than spending our time speculating on potential Terminator scenarios, why not look at the things that are really pretty messed up right now: exacerbating inequalities and disempowering people who are already disempowered.

I have no doubt that there are extremely successful people who say: don't worry about the problems being caused right now, particularly those around inequality; worry about the Terminator coming around the corner. It's possible that the only actual threat to those people is a Terminator, because they're already on top of everything, so they're doing just fine.

I think if you look at society right now, there are all sorts of opportunities to inspect ways that algorithms are not always acting in consumers' interests. To put it mildly, I think those real concerns of the present day are much more at the forefront than speculating on future Terminators.

Max: Looks like we've hit our 30 minutes but I want a chance to summarize everything. Now maybe, Matt and Chris, you could give us your final thoughts about our discussion today, and tell us where we can learn more about the book and about what you guys do.

Chris: I'd say as a final thought, as I said earlier, I think that the breakneck pace of innovation in data and data-empowered algorithms is difficult to contextualize. I think that history gives an extremely useful context for understanding where it came from, where it's going, and how we all have a role in shaping it.

As far as finding out more, books are available pretty much wherever books are sold these days. Matt and I are still here at Columbia doing what we do.

Matt: I would say one takeaway is that if anyone is telling you that technology is only going to go one direction and everything has to change because of that technology, they're probably literally selling you a bill of goods. There are lots of ways that technological systems can be developed. We can be excited about that and also shape it in ways that comport with our notions of the kinds of societies we want to live in.

And the book, as Chris says, is going to be available from all of your favorite booksellers, and an audiobook version will also be available at launch. So for those of you who prefer to use your ears rather than your eyes, we welcome listeners as well as readers.

Max: Nice. Did you guys read the audiobook? Or did you get someone else to do it?

Chris: No, there are things we're good at and there are things we are less good at. A professional was used for reading the book, not us.

Max: Okay, okay. 

A couple of related episodes that I want to call out. More recently, in episode 265 of the Local Maximum, I talked about the multi-armed bandit, and I actually reviewed one of the papers, Chris, that you'd sent out before on the contextual multi-armed bandit, so you might be interested in that.

Then I'm kind of scared to go back to this one, but there's episode six on Facebook data and the election. That was the episode that came out during the Cambridge Analytica story. I don't know, what I was saying about it at the time is almost in the purview of historians now, but that's another one to check out.

All right. All of this will be available on the show notes page, and I'll come on afterward because we'll know what episode this is. I really appreciate you guys taking your time. This discussion and the book have got me thinking about these issues in a new and broader way than I had in the past. So I appreciate that, and I think you'll bring those perspectives to the listeners too. Thank you very much.

Matt: And thank you for having us. 

Chris: Yeah thanks. 

Max: Alright, once again, the book, I've got it right here. It's called How Data Happened: A History from the Age of Reason to the Age of Algorithms. Look it up. I just got it from Amazon today. It's really easy to read, and it's really accessible for anyone, even if you don't approach this stuff from a very technical level. If you feel like you sometimes don't want to read a book that's purely technical, but you're really into history, I think this is the book for you.

All right, next week, if the timing is right, I'm going to talk more about how my life in Connecticut is going among other things. Maybe Aaron will interview me. Have a great week, everyone. 

Narrator: That's the show. To support the Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.

Episode 273 - Stop Making AI Boring

Episode 271 - Semiconductors, Lithography, and Moore's Law with Adam Kane of ASML