
Episode 163 - Flying Cars, Causality, and Digital Art Storage Concerns

Today's March News Update includes:

  • NFTs, concerns about how they are being stored, Filecoin

  • Jetson's Act in New Hampshire to legalize flying cars

  • Talk about causality and its trickiness in AI

Links

Twitter: @Jonty on how NFTs are references
Million Dollar Homepage: Shows how URLs go bad given enough time
Allum Bokhari: The Need to Rein in Big Tech, with a solution similar to the one we proposed several years ago

Cnet: New Hampshire to Allow Flying Cars
Bloomberg: Flying cars could have a Future in New Hampshire

BDTechTalks.com: Why Machine Learning Struggles with Causality
Also Linked: Article on the Book of Why and Judea Pearl’s Work
Paper: Toward Causal Representation Learning

Related Episodes

Episode 31 with Shirin Mojarad on Causality
Episode 9 on Fixing Facebook

Transcript

Max Sklar: You're listening to The Local Maximum Episode 163. 

Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar. 

Max Sklar: Welcome everyone, you have reached another Local Maximum. And this week, I'm just going to interrupt my marathon of interviews. I did like a ton of interviews that one week, remember, but hey, we need to do a March update. So I got Aaron on here. How are you doing, Aaron?

Aaron Bell: I'm doing well. Yeah, we got to squeeze it in before March is over.

Max: Yeah. Right. Right. And it goes so fast with, I guess that's like, one out of every four episodes, sometimes five, but usually four. So if I record five interviews, it's hard to fit it in there. I feel like I should have a math episode or something coming up. I know people like those, although we do have a bunch of technical stuff today. So that'll be good. But first, did you happen to listen to the previous two interviews on The Local Maximum? 

Aaron: Yes, I did.

Max: Great. All right. So yeah, I thought it was really a lot of fun having Assaf Lev on last week. In particular, it was, you know, cool to hear from the actual CEO of Locals that we’re on. And, you know, I urge people to check that out. What do you think anything you took away from that? 

Aaron: Yeah, it was interesting to get a little bit of a peek behind the curtain for where they were coming from and what direction they're heading in next. Though, those of our listeners who are on Locals will have already seen that I poked you with a question, because at the end he mentioned a feature they're working on, something that's going to be coming soon, that a lot of people might be interested in. And it turns out that's something that you and I had talked about a week or two before that, completely independently. And so I had to get you to check my memory and say, did we talk about that after you did the interview or beforehand? So.

Max: Yeah, yeah. I was pleasantly surprised that a lot of the product and tech discussions that are going on at a place like Locals are similar to what's going on in the industry. I guess that's not too surprising. But it's always different when you have, I don't want to, I hope it's not a bad term, but like a frontman like Dave Rubin. It's strange, because at Foursquare, you know, we had a product CEO in Dennis, whereas when you have someone like Dave Rubin out there, it's sort of like, okay, well, where are the engineers? Where are the product thinkers? And I know that Dave Rubin's probably thinking a lot about the product too, but it's kind of cool that you have this guy Assaf, who's thinking about Dave almost as, or not almost, but as the customer.

Aaron: There's a saying in tech that, you know, you should build the product that you would want to use, and that sounds like very much the direction that Rubin is coming from. And maybe he's less equipped to answer the question of, okay, so how do we build it? And that's why he's brought in people on the technical side to support that side of the equation.

Max: Yeah, yeah. Okay, what else did we do? Oh, yeah. The episode 161...

Aaron: Was that the tech censorship?

Max: Yeah. Oh, wait a minute. Yes, yes, yes. And that is with Sam, I forgot his last name. Sorry, Sam. Sam Jacobs. Yeah, that one... actually it sparked a whole bunch of conversation on Locals that I kind of have to dive into. Hopefully I will before this episode goes out. But that was a little bit polarizing, like I thought it might be. But, you know, an important voice, an important point of view, I think.

Aaron: I remember, again, you know, peek behind the curtain here, that we had some discussions about how do we present this, because there was some concern that it might be a little too controversial on some things, and...

Max: Well, yeah, I've listened to his podcast, Resistance Library, and it's actually quite good. Some of the history stuff is quite good. So I don't want to present this as like, oh, "This crazy guy came on the show." No, no, he's actually very good. But, you know, personally, I feel like I would need a little more convincing on using government intervention to regulate tech companies into giving people free speech. I know we're going to talk about this a little later today. But that's something that he's very much against, very much arguing against. And it's something that I'm not quite there on yet. Or, you know, I feel like I have some concerns. Let's put it that way.

Aaron: Yeah, it's definitely a fraught topic that causes some cognitive dissonance on my own side.

Max: Yeah, yeah. All right. So we have a few topics today. First, we're going to talk about some of these non-fungible tokens, are they a scam? And we're going to talk a little bit more about, you know, another article about reining in big tech, what a surprise. That's what the 160s are... the 160s have now become all about NFTs and how big tech has just become the evil machine. That's the whole thing. We're gonna talk about self—no, no, no, not self-driving cars, flying cars, holy crap. And then we'll end with causality. How's that sound?

Aaron: We'll see if we can squeeze it all in. 

Max: Yeah, we'll see if we can squeeze it all in. Okay. So first of all, I want to point your attention to this. The tweet is a rare, good tweet storm by @jonty, Jonty Wareham, not that it's rare from him, but just rare in general. I actually hadn't heard of the tweeter before, but he's in the UK with engineering experience. And he bothered to look into, when you buy this digital art, these non-fungible tokens, as they call it, what are you actually buying? Now I asked that question first… And so at first, it's like, hey, it's kind of like a certificate of authenticity, like, I own the token from the original creator of that artwork. But I was concerned that you're going to have a hard time charging for that artwork, because it's freely available online. So why would people pay you royalties? Although maybe there's a way to get that to work. But this tweet, which I'll post on localmaxradio.com/163, shows something even worse about some of these, where, for some of this artwork that you're buying, it's not like the actual bitmap for the artwork is on the blockchain, that would be very inefficient. It's usually some kind of hash. And so first of all, some of them refer to URLs. Now, that's very bad. Aaron, can you tell me why that's really bad? Sorry to put you on the spot.

Aaron: I’m assuming that has something to do with, have you ever gotten a 404? Have you ever tried to go to a URL and it is broken? Because something that was once there no longer is?

Max: Right? Right. And that actually happens all the time. Have you ever heard of the million dollar web page? This just came to my mind, the Million Dollar Homepage. So this is actually something that was done. This is something from our generation. It was done in like, I don't know, 2004 or 2005, by someone who had just graduated college. And he was like, I'm gonna make a web page, I have 1000 pixels by 1000 pixels.

Aaron: Oh, I do remember hearing about that.

Max: Basically, every pixel is going to be sold for $1, and I'm gonna make a million dollars. And he sold it out. And by the way, when you buy a pixel, you could color it, and you could link it to a URL, you can send it somewhere else on the internet. Now, the interesting thing is that nowadays the website is still up, you know, he promised them it'll be up forever. But a lot of these websites no longer exist. So if you click around on the Million Dollar Homepage, I mean, let's see, a million.

Aaron: I see, each pixel is also a link to something.

Max: Right, right, right. So if you go to milliondollarhomepage.com, okay, I can see the picture. First of all, a lot of this artwork looks very, like, early 2000s, which is pretty interesting. So let's say here's one, "Increase your sales today," silver member three... okay, I'm going to click it. "Site cannot be reached." You know, what a surprise. So most of these do not exist anymore, which is pretty interesting. I mean, fortunately, that guy made a million dollars off of it. But it goes to show. And fortunately, you know, the Foursquare URLs are all there from 11 years ago. But it just goes to show that a lot of URLs, they're not permanent. So watch out for that. Now, some of them don't refer to URLs, they refer to IPFS. That's actually the better way to do it. It has this crazy name, IPFS. It's called the InterPlanetary File System. I don't know why it's called interplanetary, like, I don't know if there's another planet involved. But have you heard of this, Aaron?
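
A quick way to spot-check whether an old link still resolves, in the spirit of clicking around the Million Dollar Homepage. This is a minimal Python sketch; the URL is just a stand-in, and note that many dead sites fail at the DNS level rather than returning a 404:

import urllib.request, urllib.error

def link_status(url: str) -> str:
    # Try to fetch the page and report what happened.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return f"{url} -> HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        return f"{url} -> HTTP {e.code}"          # e.g. 404 for a page that is gone
    except (urllib.error.URLError, OSError) as e:
        return f"{url} -> unreachable ({e})"      # DNS failure, refused connection, etc.

print(link_status("http://example.com/"))          # stand-in URL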

Aaron: I've heard that name before. But I did not recognize the acronym when we were talking about this previously.

Max: So, okay, so yeah, so it uses something called content addressing. So that means that every file that you have can be put through, think of it like a machine, like a function, that takes all the data from that file and boils it down to a shorter string. Maybe not that short, but it boils down to a string that you could put in the URL bar, a bunch of letters, numbers and symbols, or whatever, that you wouldn't make heads or tails of. But each file that could exist has a more or less unique string, because it's a cryptographic hash. Obviously, you might be like, "Well, how could every single possible file, even the ones that are almost infinite in size, or huge in size, have its own hash?" Well, there are collisions, but the idea is the resulting string is long enough that there won't be collisions for quadrillions of years.

Aaron: So when you initially said that it was going to shorten the URL, I was thinking like a bit.ly link. But there's a lot more going on under the hood here, it sounds like.

Max: Right. So if you want to take an image and shorten the URL, essentially using this content addressing scheme, and then you do it again, with the same image, then you get the same string back. If you take that image, and you change a single pixel, and you put it through content addressing, you get a completely different address.
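
A minimal sketch of that content-addressing idea: the same bytes always hash to the same string, and changing a single bit changes it completely. This uses a plain SHA-256 digest for illustration; real IPFS addresses are multihash-based CIDs with chunking on top, and the file name here is hypothetical:

import hashlib

def content_address(data: bytes) -> str:
    # Same input bytes -> same address, every time.
    return hashlib.sha256(data).hexdigest()

with open("artwork.png", "rb") as f:       # hypothetical image file
    original = f.read()

print(content_address(original))

# Flip one bit in the first byte and the address is completely different.
tampered = bytes([original[0] ^ 1]) + original[1:]
print(content_address(tampered))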

Aaron: So it's not dissimilar to a CRC.

Max: CRC, that's one you're gonna have to explain to me.

Aaron: Oh, gosh, you put me on the spot. I don't remember what the acronym stands for. But basically, I know at least one place it's used is in Ethernet frames, and I'm sure it's used in a lot of other places. But it's a way of having a short sequence of bits, so the content of the message can be compressed, you know, it goes into a function and it puts out the short sequence. And if that short sequence doesn't match the content, then you know that something has become corrupted. And like you were saying before, there's some collision space, you could have more than one input that generates the same output. But it's very unlikely that if you have, for example, a bit flip, or, you know, one or two minor changes, that it will generate the same exact output. So it's an error detection scheme.
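
A small sketch of the error-detection idea Aaron describes, using CRC-32 (the same family of check used in Ethernet frames) from Python's zlib; the payload is made up:

import zlib

frame = b"Hello, Local Maximum!"            # made-up payload
checksum = zlib.crc32(frame)                # short check value sent along with the data

# Simulate a single bit flip in transit.
corrupted = bytes([frame[0] ^ 0b00000001]) + frame[1:]

print(hex(checksum))
print(hex(zlib.crc32(corrupted)))           # almost certainly different
print(zlib.crc32(corrupted) == checksum)    # False -> corruption detected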

Max: Right, right. So yeah, a bunch of systems use IPFS. There is a cryptocurrency called Filecoin, in which I own a small investment. And this is a cryptocurrency where you can actually use Filecoin to pay the blockchain, I guess, pay the miners, or the people who run nodes for that blockchain, to store files for a specified period of time. So I could be like, I have a file, I want you to host this for 100 years. I pay you in Filecoin, and then it happens. And I thought, oh, that's interesting. And they use IPFS to host those files. But anyone could use IPFS. It's open, right? Because it makes sense, like you could host anything. If an image is hosted on there, there's only one place it could be hosted. And there's just no guarantee that it will be hosted, if that makes sense. Like, if I take a picture of myself, it's not hosted on IPFS, so I have to host it, but only that picture could be hosted at that address. It's almost like every possible picture you could take of yourself already has an address. Every photo you could take with your iPhone already has an address in IPFS waiting to be used for it, which is, if I–

Aaron: If I had two copies of this photo that are stored in two different places on the internet, would IPFS point to both of those, or would it...?

Max: Oh, no. So IPFS wouldn't point to a place on the internet. It would just, basically, you would call the photo's hash on IPFS, and the photo would come out, if it's hosted. And then the person hosting it can prove, hey, this photo hashes down to this hash. So I can prove to you that yes, this photo is what's here.

Aaron: So you can retrieve the real photo, but it could be in multiple places simultaneously.

Max: Oh, sure. You can have multiple people hosting the same file in IPFS. Yeah. So okay. IPFS seems like it's a better way to store these pieces of digital art. Now there are a few pitfalls. First of all, you want to make sure that the IPFS file that stores your digital art is not itself a URL. Like, it could be, here's the IPFS address, and then you go there, and it's actually not an image, it's just a URL to somewhere else. So there's the original problem again. Secondly, it does not guarantee that anyone is hosting it there. So if you own this digital art, you should also keep a copy of it, in case nobody else hosts it, or nobody else copies it. So there's one problem of anyone can copy it. But then there's another problem of there's no guarantee anyone will copy it or host it. So if you own the artwork, you better control several copies of it. Because it's basically like, you know, if you have the certificate of authenticity for a painting, and then you lose the painting, it's no good. Likewise, if you lose the certificate of authenticity, it's no good.
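
Following on from those pitfalls, here is a hedged sketch of the "keep your own copy and make sure it still matches the token" idea. The digest and file name are placeholders, and a real NFT would reference an IPFS CID rather than a raw SHA-256 hex digest:

import hashlib

def sha256_hex(path: str) -> str:
    # Hash a local file so it can be compared against the token's content address.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

token_digest = "<digest your token points at>"   # placeholder, not a real hash
local_copy = "my_artwork_backup.png"             # hypothetical backup file

if sha256_hex(local_copy) == token_digest:
    print("Backup still matches the token's content address.")
else:
    print("Backup missing or altered; the token may now point at content nobody holds.")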

Aaron: Makes me think of the Internet Archive that maybe you should make sure that there's a copy of it there since they seem to be somewhat reliable.

Max: Yeah, yeah. But even then, even then, yeah, there's no guarantee they'll be around.

Aaron: I see a lot of parallels here to gold. In that, you know, the most secure way of possessing gold is to literally have the gold bars in your own home, in your own safe. And then a step down from that would be, you have your gold bars in somebody else's safe, you know, in a safe deposit box at the bank. And then there's another step away, which is, I have a slip of paper that says it's redeemable for gold bars. And you can take another step, which says, I'm invested in a fund which supposedly owns some gold bars. And if you take enough steps away, it becomes much easier for someone in the intervening chain to go bankrupt, and all of a sudden you have a worthless piece of paper.

Max: It's the same thing with cryptocurrencies, too, and…

Aaron: I suppose, yeah. 

Max: By the way, you know, there are people who say you have to own gold in your safe, or you have to control your own private keys in cryptocurrencies. There are also reasons not to. And actually, I'm going to talk about that with a future guest, which I'm going to spoil now, it's Peter McCormack of the What Bitcoin Did podcast, in an upcoming episode. So yeah, but in this case, with digital art, it costs you almost nothing to keep a copy of the digital art, even if somebody else also has it. So it's a little bit different from gold.

Aaron: The big advantage to digital art is that it's almost counter to the fact that we're referring to it as non-fungible. You can have multiple copies of it stored in multiple places, and it does not decrease the security. It's not like you have to split your hoard of gold between multiple storage locations. You can have, in this metaphor, the same bar of gold in multiple different places without having to give anything up there.

Max: Yeah. Yes. The art itself, the file itself. Now, the private key that you have that proves that you own the art, that's another question, because if somebody else has that, then they could take your NFT. So sure, yeah. So that comes back to how you store your cryptocurrency; with NFTs it's the same thing.

Aaron: There's always a bottleneck somewhere.

Max: Yeah. All right. So I hope that makes sense to people. The next one is, oh, this is an article that you sent me. We've already talked a lot about so-called big tech. I don't know which one this is talking about. Oh, yeah. Twitter, Facebook, Google, YouTube. All the usual suspects. The last two episodes have been about taking on big tech, actually. So it's not surprising that we're still talking about it. So this article that you sent me is interesting. Well, tell me a little bit about it. It's interesting because it harkens back to something that I talked about in episode nine, and you recognized that, which, I'm very impressed.

Aaron: Yeah, yeah. So I was reading through this and actually, funnily enough. 

Max: So what's it called? 

Aaron: So I received this in a publication called, I believe it's pronounced 'Imprimis,' from Hillsdale College, which, don't ask me how I ended up on their mailing list, but I did. And it's not just a mailing list. I received a physical copy of this newsletter. So I was actually reading this on a glossy piece of paper. And it was, I think, literally in the last paragraph or two. The general discussion was, you know, something needs to be done, big tech is becoming too powerful, you know, they have control of the modern town square. And so we can't just write it off as, "Well, they're private companies, they can do private company stuff and we can't stop them."

Max: We've heard. It's called "Who Is in Control? The Need to Rein in Big Tech."

Aaron: Yeah, but when it got to the very end, one of the conclusions that the author drew, or possible solutions that he proposed, was very reminiscent of something that you had discussed in a much earlier episode. And so I thought, oh, it almost sounds like he listened to an episode of your podcast and took the words right out of your mouth.

Max: Where are my royalties? We don’t have the NFTs, see, that's why. So all right, why don't you read the key quote there?

Aaron: Sure. Yeah. So it's, "Our ultimate goal should be a marketplace in which third-party companies would be free to design filters that could be plugged into services like Twitter, Facebook, Google, and YouTube. In other words, we would have two separate categories of companies, those that host content, and those that create filters to sort through that content. In a marketplace like that, users would have the maximum level of choice in determining their online experiences. At the same time, big tech would lose its power to manipulate our thoughts and behavior, and to ban legal content, which is just a more extreme form of filtering, from the web."

Max: Right? Okay. So essentially it would be like, yeah, if you are one of these platforms, your only goal should be to sift legal from illegal content, not to be censoring the legal content, which is sort of what Locals does, and leave the filtering of the legal content to others. Which you have to do, because, you know, just having all the content together leads to a lot of crap, which we see on Twitter. It's amazing, as much as they censor and filter and remove and shadow ban, most of the comments are still just very low calorie, or high calorie but empty calories. That's what I want to say.

Aaron: Low nutritional value.

Max: Yes, low information content, or perhaps negative information content. But that was sort of my idea. Back in 2018, I said, hey, look, if you want to subscribe to, like, the New York Times filter, you could do that. And maybe it'll be run by the New York Times, and I'd have a good idea of what I'm gonna get from that. Or I can maybe go to a filter that's a little too permissive, and I get kind of everything. Or maybe I just, you know, find some other group that seems to be really good at it, and seems to promote the things that make me think or catch my interest, and then I'll do that one. And so yeah, there could be sort of a competition among the filters. And then they're kind of federated, you know, they compete, and maybe each filter company can work across every service. You know, if I'm a filter company, maybe I'll make one for Twitter, I'll make one for Facebook, I'll make one for Google or for YouTube, etc, etc. And so that competition would give people choice. I think it's a very interesting idea. And perhaps it could be implemented on something like Mastodon, I'm not sure. But yeah, I think that would make a lot of people a lot happier.

Aaron: I can't remember if I mentioned this way, way back in episode nine, but it makes me think of Slashdot, which I haven't been on in years. But...

Max: Me neither.

Aaron: They used to have a feature where, you know, there's the standard feed, but you can flip a little switch and it turns on the firehose, which is basically the unfiltered feed. So you get to see things that, you know, haven't been moderated and haven't met a threshold for, you know, a certain number of upvotes. And, you know, you can opt into that. And that's a simplified version, I think, of the kind of concept you're talking about here, in that you wouldn't necessarily want the choice to be a binary, you know, the filtered feed or no filter. But the ability to turn off the filter means that you could, in theory, then apply an alternate filter on top of it.
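
To make the host-versus-filter split concrete, here is a toy sketch of the marketplace idea: the platform only supplies (legal) content, and interchangeable third-party filters decide what a given user sees, with a Slashdot-style firehose as the no-filter option. All names and fields below are made up for illustration:

from typing import Callable, Dict, List

Post = Dict[str, object]                    # e.g. {"author": ..., "text": ..., "likes": ...}
Filter = Callable[[List[Post]], List[Post]]

def strict_filter(posts: List[Post]) -> List[Post]:
    # A hypothetical third-party filter: drop low-engagement and spammy posts.
    return [p for p in posts if p["likes"] >= 10 and "spam" not in str(p["text"]).lower()]

def firehose(posts: List[Post]) -> List[Post]:
    # The unfiltered feed, like Slashdot's firehose.
    return posts

def build_feed(host_posts: List[Post], chosen_filter: Filter) -> List[Post]:
    # The host only decides what is legal to carry; the user's chosen filter does the rest.
    return chosen_filter(host_posts)

posts = [
    {"author": "a", "text": "Buy now!!! spam", "likes": 3},
    {"author": "b", "text": "Interesting paper on causality", "likes": 42},
]
print(build_feed(posts, strict_filter))
print(build_feed(posts, firehose))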

Max: Right, right. There's this other quote here, and I think this is now quoting from the article: "Even more troubling are the invisible things that these companies do. Consider quality ratings. Every big tech platform has some version of this, though some of them use different names." And I thought, yes, I've worked on a quality rating at Foursquare. Actually worked on a few of them, for Foursquare tips and reviews. But fortunately, as I've said in previous episodes, the Foursquare data set is not as contentious. There's just a lot of kind of useless reviews there. I mean, we filter out people who look like they were banging on the keyboard, and people who just say things that don't make any sense, and things like the spam, people saying, call this number. But even in Foursquare tips, we've had discussions about this back in the day, when we were focusing on the consumer product, and there were definitely gray areas and disagreement as to what makes a good tip. And then, of course, quality rating the actual places was another interesting one.

Aaron: I don't want to dig too deep into the Foursquare algorithm here. But do these quality ratings apply in a, what's the proper term here, an individual manner to each review? Independent, that's what I'm looking for, an independent manner? Or do you build a profile of a user, and you identify, oh, this user is a low quality user, and so we're going to discount their reviews, because they have a history of not providing useful information?

Max: So we do take the author into account, but I don't think that it is... I don't think it's going to do quite what you say, like that.

Aaron: Not as harsh as it is, as I stated it?

Max: Yeah, exactly. So you take into account, for example, the language that the user tends to speak, or to type in. So that's when you want to show people things in their own language first.

Aaron: Okay, I know you've talked about the language model on a couple of episodes before?

Max: Yeah. And do we know if a user is low quality? I don't remember, but I'm pretty sure each tip gets its own unique rating. And then maybe a user can give it a slight bump, if I remember correctly, or possibly not. But I'm pretty sure the quality score of the tip is the main thing, and I think that the user is either not taken into account or barely taken into account. But then again, this was a different time. We could sit down and be like, okay, what are our users looking for? They want, you know, good reviews, accurate reviews, they want cool things to do, like fun things to do at a place, like things to order, or if you're at a park, what should you check out, or if you're at a museum, you know, what exhibit should I check out? And so it was like, yeah, that's what our users want. They want something in their own language, they want something that is not, like, completely broken English that you can't understand, you know. They want it reasonably well written, but maybe not too well written, so it's, like, too long, you know? So it's like, okay, this is what our users want, and this is how we're going to filter for it. Another one is recency, you know, things created more recently matter as well. So there was gray area, there were areas of disagreement, but it wasn't political. Whereas all of these, and what does political really mean? I'm not really sure. But I feel like with these, it's not the product teams. It's not in the hands of the product team, the engineers, the product managers, the designers and all of those people. It's more in the hands of the trust and safety commissions and the lawyers and the PR of these companies, rather than the people who care more about the product. That's what I mean.
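
A toy illustration of the kinds of signals Max lists above: language match, spam patterns, keyboard mashing, length, and recency. This is not Foursquare's actual model, just a hypothetical scoring function built from the signals mentioned, with made-up weights:

import re
from datetime import datetime, timezone

def tip_quality(text: str, created: datetime, tip_lang: str, reader_lang: str) -> float:
    # Toy heuristic score in roughly [0, 1]; every weight here is made up.
    score = 1.0
    if tip_lang != reader_lang:
        score *= 0.5                                 # prefer tips in the reader's language
    if re.search(r"call\s+\+?\d", text, re.IGNORECASE):
        score *= 0.1                                 # "call this number" style spam
    if len(text) < 10 or len(set(text.lower())) < 5:
        score *= 0.2                                 # too short, or keyboard mashing
    if len(text) > 400:
        score *= 0.7                                 # probably too long to be a useful tip
    age_days = (datetime.now(timezone.utc) - created).days
    score *= max(0.3, 1.0 - age_days / 3650)         # slowly favor recency
    return score

print(tip_quality("Get the lobster roll, skip the fries.",
                  datetime(2020, 6, 1, tzinfo=timezone.utc), "en", "en"))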

Aaron: Well, I'm sure that there's a long conversation that we could dive into off of, but not today.

Max: Yeah, yeah. I mean, I've been complaining about this for a long time. I mean, it could be that the tech industry just went down the path of ad-supported large social networks. And that worked for a certain period of time, grew to a very large size. And now that business model has kind of run its course. And now, as these companies optimize their profits more and more, it's sort of decreasing their effectiveness more and more, and we're sort of ready for the paradigm shift, as I've been talking about for a long time. So that's just another way to look at what we've already been talking about. So, yeah, okay, I'll finish on that note.

All right. So this next article, I was like, "No way this could be real." Because when I moved to New Hampshire, we did Episode 158, entitled "Live Free or Die." When I moved here to New Hampshire, I was like, "Yeah, everything's legal here in New Hampshire." But I didn't know that this was legal. In June of last year, New Hampshire became the first state to legalize flying cars on the road. So we now have flying cars in New Hampshire. Now, before I start getting excited, I'm like, "Oh, man, I haven't seen one yet." That's because they don't really exist yet. But they are legal, if you get your hands on one. Now, the law is called the, what's it called, the Jetsons bill, I guess it's called the Jetsons Law. And, this is the lame part of the law, you're not allowed to take off on the road. You actually have to drive to the airport to take off. But I have a feeling that some people will break the law on some of those long flat roads up in the country.

Aaron: Well, it depends on how you define airport, because there are, even in Massachusetts, a number of improvised runways that meet some sort of requirement for being a runway that are definitely not, you know, tower operated.

Max: Right? So these are essentially small airplanes that you could drive on the roads. Right? I mean, like, so you can't drive an airplane on the road. Right? So how—

Aaron: Not easily. Yeah.

Max: If you want to get an airplane from one airport to a nearby airport, you literally have to fly it.

Aaron: Right. And part of the, I guess, the opposite of a value proposition of owning a small private airplane is that, okay, yes, it can make travel from point A to point B, where point A and point B are two airports, a lot faster, because you can, you know, go as the crow flies, and for general aviation, small engine aircraft, you're generally looking at upwards of 100 miles an hour. So between those two things, you can go a lot faster than if you were to do that same drive in an automobile. The problem is, though, now you're at an airport in a new region, a new location. And unless the place you want to go is the, you know, rinky-dink burger joint at the airport, now you need to get a car. And that's got some hassles: either you've got to rent a car, or you need to have a friend come pick you up, or you need to keep multiple cars at airports across the country, at which point, why are you flying your own airplane? So it comes with some challenges to make it a practical method of getting around. And this, in theory, would bridge some of that gap, because you can drive from your garage to the airport, take off, land at another airport, and then drive to, you know, the place where you're having a meeting, or to the rental house that you're staying at for your vacation, or whatnot. So that was the concept there. And this is not something new, you know, this is an idea that's been kicking around for almost as long as airplanes have been a thing.

Max: Yeah.

Aaron: But as they pointed out in at least one of the articles that we talked about, there's kind of requirements pulling in opposite directions for what makes a safe vehicle on the road and what makes a safe vehicle in the air.

Max: All those people on the road in Massachusetts, do you trust them? In the sky? I don't even trust them on the roads.

Aaron: Well at this point, one of these flying cars would require a pilot's license anyway. So yeah, ostensibly, you wouldn't have any less trust in these people than you would in the people who are already flying around in the air.

Max: Come on, “Live Free or Die”.

Aaron: What would worry me, if I owned one of these, is some crazy person dinging my car, you know, getting in a fender bender. It's bad enough when it's my, I don't know, $30,000, you know, regular, medium-nice car. It's another thing when it's my $300,000 aircraft, and now that you've dinged it, I don't know if it can fly again. And so I'm gonna have to bring it in for an inspection that may involve tens of thousands of dollars of repair to get back to being safe and airworthy. So that's one of the biggest hassles, not just being safe on the road, for you, but you're probably not going fast enough.

Max: You’re probably not gonna parallel park in the city.

Aaron: Yeah, that's gonna be something you're gonna want to garage.

Max: Yeah, if you can afford the flying car, you can pay for the garage. So there's a company that's actually working on these, right? It's called, shoot, what's it called? We have it down here.

Aaron: So there are a couple of them. There's Terrafugia, right, which is based out of Massachusetts. And actually, they got bought out, or acquired, by a Chinese company a few years back. And it sounds like they're making a pivot away from this space. They have a prototype in the self-driving car market space, but it sounds like they're pivoting more towards the urban air mobility market. But there's a company out of the Netherlands, and I think a company out of either Washington or Oregon, who's doing stuff in the similar flying car space.

Max: Yeah, I mean, it's interesting to think about, but I don't think we're gonna see people flying their cars anytime soon. Probably not until long after their cars are self-driving, which we'll talk about in a minute.

Aaron: Yeah, I think the air taxi, urban air mobility thing is much more likely to catch on in a big way before we start seeing a significant number of flying cars that are consumer owned.

Max: Right, right. So you're saying they might be like air taxis or something like that.

Aaron: Yeah, I don't think we're gonna see the kind of adoption pattern that we saw with things like Tesla and electric cars happening with flying cars anytime soon. Yeah.

Max: Yeah, that's, I mean, what's scarier? A flying car driving past, or a driverless car driving past? What would make you more nervous?

Aaron: Well, the big next step, once they get these air taxis approved, which do vertical takeoff and landing, is there are at least a couple of companies designing them that aren't even building them with plans for a pilot.

Max: So are these kind of like helicopters then or something else? 

Aaron: Yeah, there's a lot of tilt rotor design involved in it. But the design intention is that these are going to be vertical takeoff and landing, so you don't have to have a long runway.

Max: Like a helicopter?

Aaron: In that sense, yeah. It doesn't have the one giant rotor that a helicopter has; they generally have multiple small rotors. But the flight profile is much more similar. And some of them actually have rotating engines, so that they can take off like a helicopter and then transition to flying like a regular aircraft.

Max: I see, I see, I see. Okay, cool. Um, yeah, anything else on this before we move on? This is our first flying car episode. That's exciting. Now, sometimes this thing comes up in the news. It's come up in the news, you know, years ago, but this one hit close to home, and I was like, whoa.

Aaron: Yeah, I think the way to view this is that it's not dissimilar to what we've seen in some of the Western States, where they've created special zones and special rules and laws that allow or encourage the self-driving car development work to be done there. So by New Hampshire passing this law, it opens the window to make this a more viable product for the people who are willing to invest in it. 

Max: Gotcha. 

Aaron: And we'll see where that goes. Maybe they'll be the first of many, or maybe this will remain a weird thing that just New Hampshire does. 

Max: Okay. Yeah. Yeah. All right. So finally, I'm also going to, all the articles will be linked on the show notes page, localmaxradio.com/163. The final one is going to link to this, a pretty good, actually a very good article on an academic paper. And I thought the article did a very good job of kind of explaining this. This is from TechTalks: "Why Machine Learning Struggles with Causality" by Ben Dickson, March 15, 2021. And, you know, I've talked about causality on the show before. I've worked on causal models. The last one I talked about, it was on episode 31 with Shirin Mojarad. Well, she was doing causality on products for education: do these things cause learning? I was doing it for ads: do these things cause buying, or in my case, going to a place?

So when it comes to these more complicated AI models, causality is—actually it causes a big problem for them. And it's sort of why, it's sort of where they fall short now even on the deep learning models. So if I could just quote—and it's something that is very much needed to get a grasp on for something like self driving cars where the car needs to know, “Well, if I do this, then X, Y, and Z will happen because this is how physics works. Or “this animal, or this person, or this vehicle in front of me, will respond this way, if I do that, or they'll respond this other way, if this other vehicle does something.” So it's sort of important to make sure that it knows all that. Because if it doesn't, if it's just kind of a series of instructions, even if it knows these, learns these instructions from millions, millions of examples, having these causal models in its model, I guess, having these causality results in its model will allow it to navigate through these kind of one-off situations a lot better. And when it comes to driving or anything in real life, all situations are essentially one-off, you've never been in the same exact situation twice, only similar situations.

Yeah, so let me just read a few things. Lack of causal understanding makes it very hard to make predictions and deal with novel situations. It's very good at, like, sussing out the exceptions to the rule. Because, you know, machine learning models are very good at coming up with the rules, but they're not always very good at finding, like, what are the rare exceptions. But if you know how the universe works, it's easy to figure out that exception. I'm sort of, you know, trying to figure out an example. It's like, well, objects fall, but if it's a bird, it's probably not just gonna fall the same way as a tennis ball does, you know, for example.

So okay, when I first read it, it reminded me of Judea Pearl's work, The Book of Why, on causal language, which I read. And it says here in the article, the paper contains references to the work done by Judea Pearl, a Turing Award-winning scientist best known for his work on causal inference. Pearl is a vocal critic of pure deep learning methods. Meanwhile, Yoshua Bengio, one of the co-authors of the paper, and another Turing Award winner, is one of the pioneers of deep learning. You need fewer examples when AI has an effective understanding of causality. And the simple example they give in the article is, you know, they're building models that first attempt to identify the causal structure. And my simple explanation, actually, is: when I was doing something like attribution, or when you're doing experiments, you already have the causality in mind. Like, mine was, ads cause buying. Or smoking causes cancer, etc, etc, right?

But when it comes to these large AI models, you don't know what causes what. So your hypothesis space, in the Bayesian sense, is, you know, what causes what? And you're trying to figure that out. So it builds these causal models first, and it sort of tries to factor them out. Okay, if I know what causes what, then I could kind of factor that out in my training set. And then, like, if I know gravity causes falling, then I could kind of look at what are the exceptions to that, and try to learn that. But if I don't know that, then I'm kind of all over the place. So this might be a little wishy-washy, we don't have that much time. But that is a very active area of research in machine learning that is going on. And so that's sort of an area where, you know, when someone talks about deep learning, deep AI, it's often important to talk about where causality fits into this. Let me see if I can get the name of the paper before we head out. The paper is called... yeah, sorry. Go ahead.

Aaron: I was just saying, while you're looking for that, what's the takeaway we can put into practice here? Is it that we should be more skeptical when they talk about using AI and machine learning? Is there something that we can put into practice about causality in our daily lives here, or is this just a thing to watch?

Max: I think it's well, it is a thing to watch. But I think beyond that, I think that there's a connection between causality and intelligence and human intelligence that we should not take for granted. And so it's something that right now machines are still very bad at. They're going to get better in the future. And so what does that mean? Does that mean that we then approach human-level intelligence, which we don't yet have? But it's definitely—so for those of you who are like, you know, engineers and who study this stuff, it's definitely an active area of research. For some of you who would just like to follow the tech news, which is probably most of you on this, who are listening to the show, it's sort of something to ask yourself when you're reading these articles. And to sort of think about, “Hey, when it comes to my Google Photos making a mistake, or when it comes to self-driving cars, making a mistake, it's probably something in the—it's probably a causal deficiency.” I'll put it that way.

Aaron: So that makes me very curious to see that—because humans, we absolutely have areas where our causal intuition or causal reasoning falls apart.

Max: Yes, but we're very good at it. 

Aaron: I wonder how much, as artificial intelligence develops a better understanding of causality, will they adopt our—the kind of our same contours there? Or will they have distinct areas at which they are better at determining causality than we are, and areas at which they are inferior to human intuition there?

Max: That's very interesting. So we have a very… Our visual cortex is very complex, we have a very neat handle on causality when it comes to like, physics here on Earth, and images and–

Aaron: The classic, you know, throwing and catching a ball. We don't do math to do that. No, but we have, you know, we build intuitive models based on our observations that allow us to complete the equation there.

Max: Right, but one area of research, of application for this, where humans maybe are not so good, is the more scientific side, either the pure sciences, or I can think of the medical sciences. You get into something like physics, something like relativity, quantum physics, really theoretical physics, where, you know, we don't know exactly how causality works on a small scale. Or something like medicine, where we have experiments that are pretty decent, but, you know, we're still kind of shooting in the dark over exactly what causes what.

Aaron: Right. I mean, then there's the classic, you know, correlation is not causation. Right, we sometimes struggle to separate the wheat from the chaff there.

Max: Yeah. So this article says that during the coronavirus pandemic, many machine learning systems began to fail, because they'd been trained on statistical regularities instead of causal relations. And as life patterns changed, the accuracy of the model dropped. So, you know, measuring, okay, if the virus comes into a particular area, how do the people react? You know, which is very hard for a mathematical model.

Aaron: That absolutely makes sense. That's a classic, you know, we have a somewhat brittle model. And, you know, if you change it, it can adapt to that. But if you have a dramatic change, like all of a sudden everybody's in lockdown, that's not moving the goalposts, but it's redefining the scenario so dramatically that previous assumptions aren't going to hold up necessarily.

Max: As I read this, though, I think that's not just a causality problem. It's also a complex systems problem, where you're talking about human response, and each person is unimaginably complex. And then you have groups of people, which are even more complex than that. So yeah, I think it's not just the causality that is causing these models to get it wrong, but it was an interesting example. So yeah, let me see.

Aaron: Did you find the name of the paper? 

Max: No, no, I had it up just a second ago, and I'm like, how come? You know, it should have been pretty simple.

Aaron: We can put it in the post.

Max: Oh, here it is. Here it is, okay. It's called Towards Causal Representation Learning. Yeah. So it's basically, the idea is, first you figure out what the causal model looks like, what causes what? And then you train the model in terms of like, well, how do they cause the difference? How does A cause B? First the question is, does A cause B? Or does B cause A, or does C cause A? Which then also causes B. So first you figure that stuff out. And then you train what those variables do to the different other variables. So can you use that in life? I mean, I feel like maybe you can.
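
To make the "does A cause B, does B cause A, or does C cause both" point concrete, here is a toy simulation: when a hidden C drives both A and B, they look strongly correlated in observational data, but intervening on A does nothing to B. The numbers and structure are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Ground truth (hidden from the learner): C causes both A and B.
C = rng.normal(size=n)
A = 2.0 * C + rng.normal(size=n)
B = -1.0 * C + rng.normal(size=n)

# Observationally, A and B look strongly related.
print(np.corrcoef(A, B)[0, 1])          # around -0.6

# Intervene: set A by fiat (do(A)), breaking its tie to C. B doesn't budge.
A_do = 5.0 * rng.normal(size=n)
B_new = -1.0 * C + rng.normal(size=n)
print(np.corrcoef(A_do, B_new)[0, 1])   # around 0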

Aaron: We can try.

Max: Yeah. All right, we'll try to look for a specific example. As we use Bayesian inference and machine learning examples in different news stories, maybe next time we'll try to figure out what our potential causal models might look like. That might be an interesting exercise.

Aaron: Very good.

Max: All right, cool. I think we're ready to call it a day.

Aaron: Sounds good.

Max: That was our March news update. Have a great week, everyone. 

That's the show. To support The Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app. Also, check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great week.
