
Episode 135 - AI’s Potential in Natural Language Understanding & Human Memory

While there is room for all kinds of algorithms, artificial intelligence could push the boundaries of natural language understanding. Today, this technology can be used to curate a video archive based on the objects and spoken words we want to find, instead of timestamps. Tomorrow, it could help with expanding human memory. Indeed, the possibilities that AI presents are endless.

In this episode, Max and Aaron continue the discussion on the difference between statistical models and the knowledge needed to understand language. They also get into AI’s potential for extending our memory and what this could mean for our society. Lastly, they debate whether the work-from-home trend will stay or not.

Tune in if you want to learn more about natural language understanding and memory.

Sponsor: Manning Publishing


Let’s talk about #Rust! Join the Live@Manning Conference on September 15, 2020, from 11:30 AM to 5:30 PM EST. In one Rust-full day, go from ways to learn it to where and how to use it. With applications in game dev, aerospace, and beyond, get firsthand insights right from the pincers of expert Rustaceans.

Here are three reasons why you should listen to the full episode:

  1. Discover the difference between statistical models and natural language understanding.

  2. Learn more about the potential uses of AI technology to extend our memory.

  3. Get a glimpse of why the future of work may be in-person.

Resources

Movie Mentions

Related Episodes

Episode Highlights

Statistical Models vs. Natural Language Understanding

  • There likely is a difference between doing statistical models and natural language understanding, but the dividing line is unclear.

  • The difference between the two may be best understood by first differentiating semantics and understanding.

On Semantics

  • Semantics is about the relationship between words, ideas, or concepts. It entails choosing the right word for a specific context or audience.

  • People can take into account who they are speaking with by creating mental models and adjusting them as new information comes along.

  • For example, the use of “literally” in a figurative sense shows how languages evolve over time. As a result, the dictionary adjusts its definition.

What It Truly Means to Understand Language

  • People appear to have an intuitive ability to understand words beyond relationships.

  • Data models could be created to understand certain patterns people know to use.

  • Max thinks that an object may be able to understand language if it has data, even if it does not experience understanding as humans do.

  • It is not yet certain if machine understanding can reach the level of understanding that humans have.

Using AI to Archive Memories

  • Max and Aaron test Google Photos’ search function and discuss how it can identify objects with some degree of success.

  • The Google Cloud tool, Video Intelligence API, enables developers to analyze videos with machine learning.

  • Dale Markowitz used this to make her family archive searchable based on what was said or seen in the video.

AI for Extending Memory

  • This may be a step toward archiving and providing an extension to a person’s memories.

  • Human memory is notoriously bad. Having a way to recall everything you have experienced in higher fidelity would be helpful.

  • This idea could be taken even further by connecting it to a universal memory or global database.

  • Listen to the full episode to hear Max and Aaron further discuss the implications of using AI for extended memory!

The Future of Work In-Person

  • The op-ed by Helaine Olen raises some points about why telecommuting is not the future.

  • In-person work setups have many advantages compared to telecommuting.

  • A compromise between work-from-home and in-person work may be more feasible.

  • Listen to the full episode for more insights on working from home and the future of work!

5 Powerful Quotes from This Episode

“That's the way that humans work too. Anytime you meet a new person, you've got to use the limited knowledge—or perhaps zero knowledge—you have of them to build a mental model of them.”

“I think you could say that an inanimate object has understanding if it has the data, even if it's not experiencing understanding like we do.”

“Envision that not only are you constantly taking in images, but all that's getting stored into a database somewhere. And so you can have, at your fingertips, figuratively, everything you’ve ever seen.”

“The more you recall a particular event, the less accurate that recollection becomes over time. So having a higher-fidelity version of that that you can kind of tap into could be very, very useful.”

“I think this is a very good opportunity over the next year . . . where you could come up with kind of creative and new solutions for home/work balance.”

Enjoy the Podcast?

Are you hungry to learn more about AI’s potential in natural language understanding and expanding our memories? Subscribe to this podcast to learn more about AI, technology, and society.

Leave us a review! If you loved this episode, we want to hear from you! Help us reach more audiences to bring them fresh perspectives on society and technology.

Do you want more people to understand AI? You can help us reach them by sharing the takeaways you've learned from this episode on social media! 

You can tune in to the show on Apple Podcasts, Soundcloud, and Stitcher. If you want to get in touch, visit the website, or find me on Twitter.

To expanding perspectives,

Max

Transcript

Max Sklar: You're listening to The Local Maximum Episode 135. 

Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar.

You've reached another Local Maximum. Welcome, everyone. Welcome to the show. Last week, I spoke about the difference between modeling and understanding. Today, I'm going to continue that discussion with Aaron—do we actually understand all the words that we use? And what does that even mean to understand these words? We're going to talk about that. Also, can universal recording and AI tech serve to extend our memory? Can we tap into some universal memory? Well, this show is really going way out there. And then finally, we're going to talk about—is the future of work actually in-person? That's a more concrete one, I guess, to wrap it up. All coming up in this episode. 

But first, I want to tell you about an event—a virtual event because it's still 2020—but an event nonetheless, being put together by Manning Publishing, a sponsor of the show, as you know. Check out the Live@Manning Rust Conference on September 15th, 11:30 to 5:30 pm—I believe that's Eastern Standard Time—on Twitch. Be inspired by the elegant and powerful Rust language. This is your chance to hear from Rust speakers and Manning's network of experts. Discover industry trends and unique technical advice from Rustaceans. That's what they call themselves, people who program in Rust, Rustaceans. I think I'm getting that right. I know I haven't heard about Rust before, so I might pop in to see what the language is all about. You can get more information on the show notes page. Today’s show notes page at localmaxradio.com/135, and we'll link you over to Manning Publishing and the Rust Conference.

Alright. Now, for today's fascinating conversation. Aaron, welcome to the show.

Aaron: It's good to be back.

Max: So we haven't done a news update for a while here. We did topology last time, which was a lot of fun. So, well, first of all, how are you doing? People are coming back from Labor Day this year, and I think—I can't wait ‘til we do our year-in-review episode. 

Aaron: I don't know. I don't know. What year? 

Max: Yeah, the 2020 year.

Aaron: Yeah. I think if offered the choice, there are plenty of people who would skip directly to 2021, you know, do not collect $200, do not pass go. Probably some people that would skip directly to 2022 at this point, since they're not so confident about 2021.

Max: Do you ever listen to some of the past episodes of the Local Maximum to see what we were talking about?

Aaron: Usually, not. Occasionally, I will go back. Usually I look at show notes before I'm gonna re-listen, but…

Max: I feel like we’ll find something if we check the Archive.

Aaron: A taste of the before times could be refreshing right now. Either refreshing or very depressing. One or the other.

Max: Yeah. Okay. So one of the things that I started talking about last time, which is very, I don't know how to put this, it's very hard to wrap your head around, which is like the difference between just doing a statistical model, and people say, “Oh, it's just a statistical model. It doesn't really understand what's going on,” and then actually doing, understanding—natural language understanding. And then, of course, I think there's an open question of whether there really is a difference. I kind of fall on the side in that there probably is, but I feel like there are some problems with that.

Aaron: That was gonna be my immediate reaction. Are we talking about something that's a binary? Or is this really a sliding scale or spectrum? And you get to a certain point, and most people will say that, yes, you've transitioned from one to the other, but there's not a clear dividing line or tipping point there.

Max: Okay. Well, obviously, this is a topic that's very interesting to me because I feel like I would like to...oh, well, there's a role for each type of algorithm that you implement. And you know, there's a role for just a plain statistical model, you know, a regression, try to figure out the relationship between variables, and then actually build some data system that understands the underlying words or phrases or concepts that are happening. Very different types of intelligence. And so to try to wrap our head around this, I wanted to, first of all, see if we could suss out a difference between semantics and understanding. Because the way I define the understanding in the last episode might be similar to the way semantics is defined. Let's see if we can figure out if there is a difference. 

So here's the definition of semantics from Wikipedia. And I don't even know if I've ever seen a definition of semantics, even though I know I've used the word a ton of times. Again, it might be one of those things where we think we understand words, but we don't always. So anyway, semantics from Wikipedia: “Semantics is the linguistic and philosophical study of the meaning in language, programming languages, formal logic, and semiotics.” I don't even know what semiotics is—that's not in the Wikipedia article. That's just my own side. “It is concerned with the relationship between signifiers—like words, phrases, signs, and symbols—and what they stand for in reality, their denotation.”

So I think all of that is a lot of fancy language for the relationship between words, and you can kind of branch out from words a little bit. It's not just words, but like ideas and concepts because words themselves, you know, sometimes words can have many different meanings. And then, sometimes, there are concepts that you need multiple words to actually get the concept across. You know, when you look at different languages, sometimes they will—German might have a word for this, but the English concept, you have to kind of explain a lot more. I mean, sometimes the German, they have a word, and then in order to explain the meaning of the word, you have to go on like two paragraphs in English. But, yeah.

Aaron: So semantics, when used colloquially...colloquially—that's a tongue twister—but the use that, at least I'm most familiar with, and I think a lot of our listeners might be, is when someone uses a word for something, and another person says, “Well, actually,” and provides perhaps a more precise term for what they're really talking about. And they say, “Well, don't give me that garbage. That's just semantics.”

Max: Right. 

Aaron: A bad example, perhaps being when someone says, “You know, we live in a democracy,” and somebody says, “Well, actually, we live in a democratic republic.”And the response is, “No, no, no, that's just semantics. You know what I meant, and you're being a jerk about it trying to prove me wrong when we shouldn't be arguing about that difference. We're really arguing about something in the bigger picture.”

Max: Right. It is a problem.

Aaron: That’s not exactly what we're talking about here, but there's a kernel of it there. In that, semantics is choosing just the right—it’s a crystal clear understanding of something. When a fuzzier understanding might be out there but doesn't necessarily serve as effectively.

Max: Right. Well, the democracy–democratic republic is always an interesting one because I've been corrected on that. But when I say democracy, I'm talking about the democratic institutions, the democratic nature of our government. I'm making some comments about that. I'm not making a comment about, “Oh, we should be a pure democracy,” or something like that, but some people do, and then you have to say, “Here's why we're not pure democracy.”

Aaron: The context matters very much. 

Max: Yeah, yeah, yeah.

Aaron: And that's what determines—you can make an objective statement that you know, A is right and B is wrong. But the relevance of that degree of separation for the current topic under discussion varies dramatically, depending on the topic at hand.

Max: It also, so now that I think about it, it also matters—it's not just the semantics of the word itself—it also matters of, you know, the speaker that you are hearing from. I mean, could you really build a truly intelligent, could you really be intelligent and intelligently understand language, if you don't take into account who is speaking?

Aaron: It would certainly be more difficult.

Max: Right? I mean, and I feel like humans, we do a really good job, even without thinking about it, of just slipping back. “Okay, the person speaking, this is their background. This is the context that they're speaking in. There are a few things that maybe,” you know, “that are my perspective, that's not their perspective, that I'm going to put aside for a little while.” And you think, “Oh. Okay, a smart person does that.” No, no, no. I think little kids do that. I think they know, like, when there's…

Aaron: Oh, absolutely. 

Max: The parent A versus the parent B. I should say mom and dad, but I guess that was more politically correct. Parent A, parent B, parent C, you know,

Aaron: Well, grandparents and other guardians can enter into that negotiation paradigm as well, so.

Max: Yeah, yeah. They might be like, “Oh, well. You know, that parent B doesn't know that parent A gave me a cookie. And so, when parent B gave me a cookie, then my second cookie, knowing that parent A didn't give me one,” you know, the kid’s gonna be like, “Oh, wow. I haven't had a cookie today.” Or at least, you know, not say something to...no, yeah, people do that. So it's, when you talk about the...

Aaron: I think that’s the difference between, and maybe this is a poor example, but I'm envisioning a chess-playing AI. 

Max: Yeah.

Aaron: It can learn the moves and all sorts of statistical analysis and predictions, but when it “sits down” to play against a player, it doesn't necessarily know who that player is. So it's  going to use its generic chest solving algorithm. It's not going to say, “Oh, this is Garry Kasparov. I know the last 6000 games he's played, and so I can build a special model specific to him.” It's not going to beat—it doesn't necessarily have the ability to do that. But if it did, that would arguably give it an advantage.

Max: Right. And it could also do certain things where, “I don't know who this player is, but I've seen from the first few moves that this fits the profile of these other players like Garry Kasparov.” Maybe it is Garry Kasparov. And you can almost do that with language too. You could be like, “Hey.”

Aaron: Yeah, you could certainly profile.

Max: Yeah, yeah, yeah.

Aaron: I mean, that's the assumption that, I mean, that's the way that humans work, too, anytime you meet a new person. You've got to use the limited knowledge or perhaps zero knowledge you have of them to build a heuristic, to build a mental model of them. And, you know, hopefully, you're continually updating that based on incoming information to improve your ability to predict their actions...

Max: So here's my takeaway. If I were to build a chatbot, it would constantly be analyzing language that you use to chat with it and sort of fit you into a profile that differentiates you from other people. And it could do things simply as changing some of the vocabulary that it uses or, you know, changing some of the ways that it describes stuff or some of the ways that it interprets your meaning. And so that's interesting, but it's also kind of scary that people definitely interpret words and phrases in very different ways even though we have a dictionary, which, you know, is the dictionary THE meaning of the word? Or is that just how most people tend to use the word? It’s sort of a fuzzy area.
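The adaptive chatbot Max sketches here can be mocked up in a few lines. This is a minimal, hypothetical stdlib Python sketch (the class and method names are invented for the example, not any real chatbot framework): track each user's word frequencies and mirror the vocabulary they actually use.

```python
import re
from collections import Counter

class UserProfile:
    """Hypothetical running vocabulary profile for one chat user."""

    def __init__(self):
        self.word_counts = Counter()

    def observe(self, message):
        # Refine the profile as each message comes in, the same way
        # Aaron describes humans updating their mental models.
        self.word_counts.update(re.findall(r"[a-z']+", message.lower()))

    def prefers(self, word_a, word_b):
        # Mirror whichever synonym this user actually uses more often.
        return word_a if self.word_counts[word_a] >= self.word_counts[word_b] else word_b

profile = UserProfile()
profile.observe("The movie was great, literally great.")
profile.observe("That film... I mean, the movie was fun.")
print(profile.prefers("movie", "film"))  # prints "movie"
```

A real system would layer much more on top (syntax, embeddings, per-user language models), but the core loop is the one described above: observe, update the profile, adapt the output.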

Aaron: It's a push-pull there because, and maybe since we kicked off talking about semantics, this might be entirely on topic…

Max: Okay.

Aaron: But there's the whole thing when people misuse a word, and people who are strident about language will say you're doing it wrong, and you know, point in the dictionary, and say, “This is the correct meaning of the word. This is the literal meaning of ‘literally.’” But if people misuse it for long enough, then the dictionary adds an additional definition that says “literally” means “figuratively.”

Max: Right.

Aaron: Because people have been using it that way, and the dictionary isn't an ironclad handed down on tablets from the mount. It is a reflection of how language is used.

Max: I read something, maybe this was just an internet comment on that, but where, you take a word like “literally” and then eventually it starts to mean “figuratively” for some people, and then you invent another word. Anytime you invent a word that has the meaning of the original meaning of “literally,” it ends up meaning figuratively after a certain number of years. So you can't really have that. No, okay. I think, have you ever seen The Nutty Professor with Eddie Murphy? Not the one from the 50s, but the one from like the 90s.

Aaron: I've seen clips, but I don't think I've seen the whole thing.

Max: Well, there was a...I'm trying to remember this from like 30 years ago ‘cause I think I saw this in the theater. And no, I remember there was a scene when he's like—I don't remember exactly what has happened so the 30-year memory might be way off—but when he's like the thin guy who's kind of a jerk. He's like, “I'm gonna kill you,” and then he goes, “No, I'm literally going to take your tie,” you know, “wrap it around your neck and cut off your blood supply and literally kill you.” And for some reason, back in the 90s, or maybe I was just younger, and I didn't hear the figurative term for “literally,” I felt like that packed more punch because “literally” hadn't yet been used as figuratively that much. So when he said “literally,” you actually got the image of what he was talking about. And it was, I don’t know why it’s...

Aaron: Yeah, it’s emphasizing how not figurative your previous statement was, and that you may have misunderstood, and I'm making this crystal clear.

Max: Right. You’re right.

Aaron: And you could still use that. It comes back to it, all depends on context. Delivered in the right context, it could still have that impact. But you certainly couldn't do that in writing anymore.

Max: Yeah. So, okay, so I just want to end with this question is like, do we really understand words in the way that we think we do? So, I mean, I use the example of chair, and I hate that example because there's always, for some reason, I feel like philosophers always use chairs. I have no—I've never taken a philosophy class. No, that’s not true. I have taken a class before.

Aaron: That's probably Plato's fault.

Max: Yeah, I know. Alright. So I have taken one philosophy class, but it wasn't like philosophy. In what, like I don't know the basics. I'm just taking this from kind of a machine learning perspective and trying to work backwards here. But do we really understand the meaning of words other than just its relationships to other words? I feel like the difference, I don't know, we have like the history of examples we've seen of that word used in the past, and we kind of know the effect it has on other people. So we see kind of the effect of using that word. You know, that it has the, think of...well, you might have more insight into this because you have little children, and you could see how they learn new words. But we don't have a dictionary in our head; we have more of an intuition that's very hard to describe.

Aaron: Yeah. I was joking before a little bit when I said, “Oh, it's probably Plato's fault.” But if you're talking about the fundamental chairness of an object that goes directly back to Platonic ideals, that you know, is there something about a word that fundamentally, that kind of precedes our perceptions? And that gets into a purely philosophical discussion? I don't know that there's anything we can add to the discussion in terms of understanding the fundamental chairness of a chair and how that applies to…

Max: No, and I'm not good at...and, you know, well, maybe it’s better.

Aaron: Perhaps AI or everyday actions.

Max: Right. Well, we're going to talk about image recognition in a second. And there are certain patterns that we get, there are certain things that we know how to use, and so, I don't know if we could build data models that understand that. Does it truly understand it? Or could it be like, no, it really doesn't know what it's doing. If there's no subjectivity there, if there's nothing in—I think that you could have, I think you could say that an inanimate object has understanding if it has the data, even if it's not experiencing understanding like we do.

Aaron: Well, difficult to ride this analogy all the way into the ground. But I mean, to what extent do we really have an understanding there, and if we have any doubts about our understanding, how can we not put machine understanding on that same sliding scale? 

Max: Right.

Aaron: And, you know, I can't necessarily say with confidence that there's no way it could reach an equivalent level of understanding to that which we have.

Max: Right, right. So, yeah, the one thing where I like, I can't accept the sort of the view that like everyone is a machine, is that like we have a subjective experience of the world that inanimate objects or computers don't necessarily experience what they're doing. And so where does that experience come from, you know, when people say, “Oh, it's an emergent property to me. I'm just here,” and yet you don't really know.

Aaron: I'm pretty sure we've had that discussion. 

Max: Yeah.

Aaron: That emergent property is code for “we don't understand.”

Max: Yeah. I mean, not always, but I think we discussed that last time. Alright. So here's an article now, and we were talking about like slow news week. It's 2020; it's never a slow news week. There's just nothing we want to talk about. Right? Would you say that's correct?

Aaron: Yeah, I'd say there's plenty we don't want to talk about.

Max: Yeah. Well, we'll get to it soon. So here's an article that I've had in the backlog for quite a while, and it's called, you know, “30 years of family videos.” And basically, what this guy did—I think this guy's a Google engineer—who's it by—Dale Markowitz. He's an applied AI engineer, Cloud AI for Google. And he talked about how he took his family video archive—which is something that I've been working with recently—and made it kind of searchable and applied some of the AI tools to it, right.

As you know, like if you put your photos in Google Photos—and there are other systems that do this. I don't know if Apple does; I don't think Apple does it. Actually, I think Apple encrypts things enough, so they can't even do all the processing on their end. I could be wrong. But Google stores the photos, I think. So basically, Google Photos allows you to search your photos by objects. So if I go to photos.google.com, and I go into my photos—give me something to search for that I may have taken a picture of.

Aaron: Cars.

Max: Okay. I'm sure I've taken a picture of a car. Let's see what comes up. Search button. Car. Should be pretty easy, huh? Alright. So, oh, last night's picture in Times Square. I have—every other picture of mine has a car in it. Oh, here's a picture…

Aaron: I guess you are in the city. It's hard to.

Max: Yeah.

Aaron: Hard to get a shot with no car in it. 

Max: Here’s a picture without a car. There's a street, but there's no car. Oh, it was a construction site of things that are covered on the street that looked like cars, but I was worried there were like bricks in there, and then if the, you know, if the civil unrest comes back. Anyway, I don't know why I'm taking a picture of that. Well, here's me in Staten Island. Yeah. Okay. Good. So there's tons of cars here. You know, beach is a good one. All sorts of, let's say like—what if I put in “drink”? Is there like a...?

Aaron: Oh, I'm sure.

Max: I guess we’ll do...Oh, yeah, yeah. Here’s me...

Aaron: You can probably even tell cocktails specifically.

Max: Oh, okay. Well, yeah. One of them is a picture of, in my old building, the vending machine. Let me do cocktails. Let's wait for this to come up. Oh, yeah. Cocktails from the other night. My girlfriend's birthday, we were out. Oh, this is when I was in Ukraine. It's not a cocktail; it's just my breakfast but with an orange juice.

Aaron: Well, I searched for “beer,” which Google is pretty good about, but maybe a dozen photos in it has a photo of my wife and me and our newborn baby getting ready to go home from the hospital. There's no beer in that photo.

Max: I went for beer. It is Dr. Brown's Cream Soda. But anyway, let's get back to it. So it seems to understand objects, and Yann LeCun showed me this back in 2010 when I took machine learning with him, and he showed me the system—not me personally—he showed the whole class a system that could basically identify objects. So this guy asked, “What about video?” And so he figured it out. I'm going to read from the article.

“For this, I turned to the Video Intelligence API, a Google Cloud tool that lets developers analyze videos with machine learning. It allows you to replicate many of the features found in the Google Photos app—like tagging objects in images and recognizing on-screen text—and a whole lot more. For example, the API’s shot change detection feature automatically finds the timestamps in videos where a scene changes.” I actually had this in the system I was using, but it was heuristic. Actually, that works fine, just heuristic basically, you know, I had for that. Anyway.
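The “just heuristic” shot-change detection Max mentions can be as simple as thresholding the frame-to-frame pixel difference. A minimal stdlib Python sketch, under two assumptions: frames arrive as flat lists of intensities in [0, 1], and the cutoff is hand-tuned (real detectors typically compare color histograms and adapt the threshold per source):

```python
def shot_changes(frames, threshold=0.5):
    """Flag indices where consecutive frames differ sharply.

    A "frame" here is just a flat list of pixel intensities in [0, 1];
    the threshold is an assumption, tuned per source footage.
    """
    cuts = []
    for i in range(1, len(frames)):
        # Mean absolute per-pixel difference from the previous frame.
        diff = sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i])) / len(frames[i])
        if diff > threshold:
            cuts.append(i)
    return cuts

# Two near-identical dark frames, then a jump to a bright scene:
frames = [[0.1, 0.1, 0.1], [0.12, 0.1, 0.11], [0.9, 0.95, 0.9]]
print(shot_changes(frames))  # prints [2]
```

The API's learned shot-change detector is far more robust, but a cut this crude already splits home-video tapes into usable chunks, which is all the heuristic version needs to do.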

“This allowed me to split those long videos into smaller chunks. Using the label detection feature, I could search for all sorts of different events, like “bridal shower,” “wedding,” “bat and ball games” and “baby.” By searching “performance,” I was able to finally find one of my life’s proudest accomplishments on tape—a starring role singing “It’s Not Easy Being Green” in my kindergarten’s production of the Sesame Street musical. 

“The Video Intelligence API’s real ‘killer feature’ for me was its ability to do audio transcription. By transcribing my videos, I was able to query clips by what people said in them. I could search for specific names (‘Scott,’ ‘Dale,’ ‘grandma’), proper nouns (‘Chuck E Cheese’, ‘Pokemon’), and for unique phrases. By searching ‘first steps,’ I found a clip of my dad saying, ‘Here she comes… plunk. That’s the first time she’s taken major steps’”—so it didn’t actually say “first steps,” but I kind of get the meaning there—“alongside a video of me managing to just barely waddle along.”

“In the end, machine learning helped me build exactly the kind of archive I wanted—one that let me search my family videos by memories, not timestamps.”
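The “search by memories, not timestamps” idea boils down to an inverted index over whatever labels and transcript text the analysis produces. Here is a stdlib Python sketch with made-up segment data; in the actual article the labels and transcripts come from the Video Intelligence API's label detection and transcription features, and the segment structure below is an assumption for illustration:

```python
import re
from collections import defaultdict

# Hypothetical output of a label-detection + transcription pass over one tape;
# "start" is the segment's start time in seconds.
segments = [
    {"start": 0,   "labels": ["baby"],        "transcript": "here she comes, plunk"},
    {"start": 95,  "labels": ["performance"], "transcript": "it's not easy being green"},
    {"start": 210, "labels": ["wedding"],     "transcript": "congratulations to you both"},
]

# Map every label and transcript word to the segments it appears in.
index = defaultdict(set)
for seg in segments:
    for term in seg["labels"] + re.findall(r"[a-z']+", seg["transcript"].lower()):
        index[term].add(seg["start"])

def search(query):
    """Return start times of segments matching every word in the query."""
    terms = query.lower().split()
    hits = set.intersection(*(index.get(t, set()) for t in terms))
    return sorted(hits)

print(search("performance"))  # prints [95]
print(search("she comes"))    # prints [0]
```

Once the expensive per-video analysis is done, the lookup itself is just set intersections, which is why the archive feels instantly searchable.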

Okay. So, very interesting. I should point out, is this—I may have misgendered this person. Let's see. Because I feel like he said, oh, yeah, “Here she comes… plunk.” The first name was Dale, so I thought it was a man. Sorry. But no, so this is interesting. So this person is making an archive of, okay, their family videos, but is this like a step into archiving our life, and can this be used for some memory extensions? Now, when someone says the word “cocktail”, I might have a picture of a cocktail in my mind, but maybe if we have some kind of archived, you know, global database, I could immediately see what the meaning of that term is or what other people might have in their mind. 

So, I've just been thinking about like where is this leading to if it’s taken to the nth degree? And you know, what's interesting about this in just keeping track of your own memories? 

Aaron: Well, yeah. That's where I would, that's where my brain went first. Especially—and this may be a path we potentially could have taken but never did—but I'm envisioning a world where something like Google Glass caught on a lot faster, a lot earlier.

Max: Yeah. Well, that stuff's gonna be coming back next year. We're gonna be, we’ll talk about it. 

Aaron: Well, so envision that not only are you constantly taking in images, but all that's getting stored into a database somewhere. And so you can have, at your fingertips, figuratively, everything you've ever seen. And so, when you try and recall an event, you can pull up that event and a dozen that were similar to it and extract kind of the key features from it in a way that's much higher fidelity and much more easily accessed than, at least for most of us, our functional working memories.

Max: Right. So I didn't see this movie. So well, first of all, there was a Black Mirror about that, where everything got recorded. There's also a 2004 film, The Final Cut, with Robin Williams. And the idea is, you know, yeah, after you die, they have a whole video of your life, but—I don't know what the idea is; I didn't see the movie—but my impression is they take out some of the parts that maybe we all have, that maybe we don't want to be in the final cut of our life, you know. So, that's kind of an interesting concept.

Aaron: Yeah, I mean, it's just an extension of the, you know, writing an obituary. You probably want somebody who has your best interests in mind to write that rather than an impartial reporter. You know, maybe the AI is not going to paint the most favorable light—paint you in the most favorable light. It might be more honest, but that's not necessarily what you want to be remembered for.

Max: Yeah. Okay, and I'll take this to the nth degree. Let's say, we could use this as memory extension to remember everything that we’ve ever experienced and recall them, but what if you then hook it up into like the world's memory? So you can remember if...

Aaron: It would be, say, it’s a double-edged sword.

Max: Yeah.

Aaron: Because, on the one hand, human memory is notoriously bad. 

Max: Right.

Aaron: There's been a lot of work studying, you know, classically eyewitnesses to traffic incidents and the like. And people are just bad at remembering, observing and recalling things.

Max: I mean, it's interesting like remembering, just going back to old podcasts, trying to remember what I said—and sometimes, I'm completely right, and I have a crystal clear memory, and sometimes, I flipped it around so much that I'm like, “Wow, I didn't know that happened that way.” 

Aaron: Yeah.

Max: If I looked at old emails about how certain things went down, it's like, “Huh.”  

Aaron: And on top of that, there's the theory that every time you recall a memory, you're going back and writing over it. So the more you recall a particular event, the less accurate that recollection becomes over time. So, having a higher-fidelity version of that, that you can kind of tap into, could be very, very useful. On the other hand, if we're functionally moving this memory storage offline into some other repository, what's to prevent it from being tampered with? And if it's stored outside of your brain, or even inside your brain somehow but with a digital interface, why can't we begin to tamper with it? I would imagine it will start with things that we want tampered with.

You know, maybe that's the best way to help people suffering from debilitating trauma and PTSD get over it: we can go in and kind of edit those memories and clean them up a little bit. But once you've developed that technology, that opens some scary doors. And on what you said about extending it beyond the individual level to the group or global level...

Max: To recall or store all events or recall the meaning of words like, you know.

Aaron: There's a lot of brouhaha about people engaging in kind of revisionist history. But what if you could literally go back and revise not just the history in the history books but the memories of the people who were there, so that there's nothing to counter it, no original source material?

Max: So you start with…

Aaron: And we have that problem: often, whether we should or not, we tend to believe our memories, our recollections of something, over documented facts. And that goes even for events you didn't witness yourself, like "I remember being told that X, Y, Z happened." So when I read that, according to historians or people who were there, it was actually Z, Y, X, that's not the way I remember it, and it's difficult to overcome that bias.

Max: Yeah. So you start with trying to preserve family memories and the moments, and now we're in sort of a sci-fi scenario.

Aaron: It’s a slippery memory hole.

Max: Yeah. Yeah. Alright. So, that's really interesting. Let's finish up with this Washington Post article. This is going to be a little more concrete for those who think we're getting too abstract in this episode. It extends what we've talked about before, and what I talked about last time when I mentioned James Altucher's article about New York never coming back. The piece is "Telecommuting is not the future." It's an opinion article in The Washington Post by, let me get the person right, Helaine Olen. And so...

Aaron: I know we had some predictions related to this in our most recent tech retreat, but I can't remember exactly what spin we put on it.

Max: Again, probably better to go back to the Archive rather than trying to rely on our memory.

Aaron: Yeah, given the discussion we just had. Yes.

Max: Yeah, but I think we said some prescient stuff about it, not just in the last one in 2020 but also in 2018. So, you know, the overall question is, and I tend to agree with some of what she was saying, so I don't think we're necessarily going to disagree 180 degrees here. I think it's going to be more about the degree along a sliding scale: how good are people, really, at working alone? And a lot of this is just my current experience of working alone. Even if I'm working on something really cool, which I am sometimes, it just feels fake.

You know, are we going to want to get together again? Are people going to want to go back to the cities again? Once COVID is behind us, are people gonna want to hang out, go to the bars, go to parties, do all these socialization events? For that, you kind of need to be in a place with a lot more people. I feel like this whole idea of the city being done might be overstated. Even though the city government itself, and the kind of society we have here, has a lot of problems, I don't think the idea of the city is over. I think younger people are still going to want to flock to the city at some point.

Aaron: Yeah. Well, and this article wasn't specifically about cities, although it seems that cities are hardest hit with this. 

Max: Right. For telecommuting. Right, yeah. 

Aaron: But my take was that none of the drawbacks of telecommuting that the author cites are wrong. Some of them may be bad reasons to go back to work, but they are reasons for which people will go back to work, and I can't dispute that.

Max: So let's see. Let's go through them. What were they? Okay: "There are serendipitous benefits to in-person collaboration that no number of Zoom meetings or Slack channels can replicate," and "Certain companies (Yahoo and Bank of America come to mind) rescinded telecommuting privileges in the recent past, claiming the practice was detrimental to corporate teamwork." I think the idea is that there's a lot of context and a lot of unsaid communication you get when you're together that you don't get online, when you're not in person. It's kind of a pale shadow of that type of communication. And I see this too. One good thing about talking to each other from afar is that people try to be more succinct, but there's a lot of stuff where people assume you get their meaning, and you really don't, from just the text or the short conversations on Zoom or whatever.

Aaron: Yeah. So the phrase "detrimental to corporate teamwork" strikes me as just grossly corporate, in the worst way.

Max: Yeah. 

Aaron: But the bit that came before that, about the "serendipitous benefits to in-person collaboration," that's the Steve Jobs policy in a nutshell. My understanding is that he was quite involved in the actual architectural design of the Apple Campus, specifically to maximize that aspect. Maybe he was overvaluing it, but it's something he put a lot of weight on, and a lot of people will follow anything he dropped hints about, to extremes.

Max: Yeah.

Aaron: For good reason. 

Max: Well, here's another way to think about it. If you think working online is the same, think about trying to date exclusively online, never meeting up, and whether that would work. Just from a communication standpoint, it doesn't work to just sit there and chat.

Aaron: It's certainly a very different world. Yeah.

Max: Yeah. Okay. So she talks a little bit about recency bias here. That's where people say, "Well, we're at home now, so this must be the future." And actually, there was a meeting at work where we talked about recency bias (I think I can say this), where, at the beginning of this thing, people were like, "Oh, it's not that bad. What are you talking about?" And now people are like, "It's bad, and it's gonna be bad forever," which is the same problem from the other side: people take what's going on now and extend it out linearly without realizing there's an ebb and flow to things. Alright.

Aaron: So, yeah, as kind of an aside: recency bias is something to be aware of because it will taint your predictions of the future, your outlook. But all else being equal, looking at how things are now and how they were in the immediate past is a pretty good heuristic for what the immediate future is going to look like.

Max: Yeah, yeah. Sometimes it can be.

Aaron: So it's not complete garbage, but you want to be careful that you're not overweighting it. And now, six-plus months into this madness, it's very easy to overweight, because six months is a fairly long time. That's well past short-term memory, and it's all we've known in short-term memory, so.

Max: Right, right. Well, if you take…

Aaron: We have to reach back and remember the before times and kind of sample that a little bit to color your picture of the future.
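[Editor's aside: the "persistence, tempered by a sample of the before times" idea Aaron describes can be written down as a simple blended forecast. This is an illustrative sketch only; the weights and the numbers below are made up, not from the episode.]

```python
def blended_forecast(recent: float, long_run: float, weight_recent: float = 0.5) -> float:
    """Blend a persistence forecast (the recent value) with a long-run
    baseline, to avoid overweighting the recent past (recency bias).
    A pure persistence forecast is the special case weight_recent=1.0."""
    return weight_recent * recent + (1 - weight_recent) * long_run

# Illustrative numbers only: if ~50% of workers are remote now, but the
# pre-COVID long-run level was ~7%, an even blend lands between the two.
print(blended_forecast(50.0, 7.0))  # 28.5
```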

Max: Well, if you take Lindy's Law, that means this whole thing will last for another six months. Now, obviously, that's not always literally true. In this case, I think it's... well, most pandemics in the past have lasted, you know, a year plus. I think the Spanish Flu was something like that. So maybe that's about right.

Aaron: Yeah, and it depends how you define start and end, but…

Max: Yeah.

Aaron: I'd say, right now, that's probably a pretty good heuristic. 
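[Editor's aside: the Lindy rule Max applies has a standard probabilistic reading: if lifetimes follow a power law, expected remaining life grows with current age. A minimal sketch, with an assumed tail index; nothing here comes from the episode.]

```python
def lindy_remaining(age: float, alpha: float = 2.0) -> float:
    """Expected remaining lifetime under a Pareto (power-law) lifetime
    distribution with tail index alpha > 1: age / (alpha - 1).
    alpha = 2 recovers the classic Lindy rule of thumb: expect roughly
    as much future as past."""
    if alpha <= 1:
        raise ValueError("expected remaining lifetime diverges for alpha <= 1")
    return age / (alpha - 1)

# Six months in, the classic rule predicts about six more months.
print(lindy_remaining(6.0))  # 6.0
```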

Max: Yeah. There's the social end versus the actual viral end, which in New York has essentially happened already. But okay, let's talk about where you think this is going in the future. When the pendulum swings back, are we going to be at the same level of office work? We're probably going to be at a higher level of remote work than we were pre-COVID anyway; some people are just gonna stay remote.

And I think she even says in the article, at the end, that there might be some compromise. And this is what I wanted at the beginning, pre-pandemic. This is what I wanted in 2019: to work from home one or two days a week. Not only can I live with that; that's ideal. Because there are certain things you can do on your own, and certain things you need the collaboration for.

Aaron: Yeah. I think this will definitely shift the Overton window, or the swing of the pendulum, or give us a new anchor point, whatever terminology you want to use for it. I think she quoted some numbers, that like 7% of workers had the ability to work from home back in 2019. So I would not be at all surprised to see those numbers, when things reach a new equilibrium, be two or three times that, but probably not the ratios we're currently seeing, where, I don't know, 50% or 75% of people are working remotely.

Max: Yeah, yeah. 

Aaron: And the other thing is...

Max: Here's another problem. Imagine you're managing a team. You know how you can tell if someone's unhappy and might leave soon? You think you could tell that if you're remote? It's gonna be a lot harder. Just another thing that came to mind.

Aaron: Yeah, it certainly adds complications, and I think there's a balancing act here. I think management is going to tend to be more resistant to it, as they always have been. Some for good reasons, like what you just mentioned; some for bad reasons, because most managers have been brought up in a management style where the way they judge whether you're an effective employee is whether they see you at your desk looking busy.

Max: Yeah. 

Aaron: And for those who've come up in that mold, it's going to be very difficult to break out of it and actually base their assessment on results rather than presentation. But, you know, bits and pieces. I'd say the other piece to this is that we need to weigh costs and benefits. The author has laid out a number of downsides (costs) of this remote work, but nowhere does she talk about the things that are quite undesirable about working in an office.

Max: Right.

Aaron: You know, I used to have a job where I spent at least an hour each way commuting.

Max: That’s pretty bad.

Aaron: Five days a week. And that was pretty miserable for me. So if I'd had the option to make at least part or a significant portion of my time work from home for that, that would have dramatically reduced the pain point there. 

Max: Sometimes, especially now…

Aaron: And that's going to differ for various people.

Max: Yeah.

Aaron: It's not an across the board solution even for people doing the same job in the same place. 

Max: A lot of times I've found, you know, if you get enough sleep, you're like three times more productive. I mean, that's just a made-up number, but I'm saying it's amazing how much those commutes can weigh you down, and how those specific times you have to be in every day can weigh you down.

Aaron: Yeah.

Max: I guess if my job were literally on an assembly line, it might not matter how much sleep I got the night before, but that's not the way the world works now.

Aaron: Yeah. And the other piece she points out, which I can particularly sympathize with, is the dissolving barrier between your personal life and your work life, now that they coexist in the same space. We've been talking about that for years with email and smartphones giving us an invisible tether to our work. And now you can't say, "Oh, I left my laptop at work," or "I turned off my work phone," because there's never a good reason to.

Max: Right.

Aaron: That, combined with kind of some of the time shifting and juggling of domestic stuff going on right now.

Max: Yes.

Aaron: Especially for families, has made it in some ways a living nightmare where work never ends.

Max: Right. Work never ends, and also, home life never ends.

Aaron: Yeah.

Max: You know, you're always like thinking, “Oh, I've got to do some chores around here.” Or you know, and you're always dealing with babies or pets or significant others or whatever.

Aaron: There was a story I'd been told, and this is kind of dated, about a married couple, kind of 1950s-vintage America, and the secret to their successful marriage was that four days of the week he was on the road. You know, the whole "absence makes the heart grow fonder" idea: if you're constantly with your family or your loved ones, every little flaw becomes that much more obvious. Even just having the opportunity to go to work and spend eight hours a day not directly underfoot of everybody else in your life can help make the rest of your life more sane. So that's something a lot of people are struggling with right now.

Max: Alright. Cool. 

Aaron: All that said, we are super fortunate to have the type of work we can do from home, and to have jobs right now. So in one way, this is very much a first-world problem, and I realize that, even though it's sometimes hard to remember. But just remember...

Max: Just remember, you know…

Aaron: Just because…

Max: Just because you have it better than someone else doesn't mean you don't have the right to complain.

Aaron: Yeah, the struggle is real, even if it's not the most dramatic struggle of them all out there. 

Max: Yeah. Alright. So just to wrap it up: I think the next year is a very good opportunity, with the Overton window shifted, as you said, to come up with creative new solutions for home-work balance. And management might be more open to trying something very new, even radical, in that regard. So that could be interesting to watch and live through.

Aaron: Yeah. I'm very curious to see what happens with the WeWorks or the Regus model, the kind of short-term office-share space for lease. I could see that going two ways when we come out of this. One, there's a huge explosion in it, because a lot of big corporate offices shut down, but there are people who still want a place to go to work where they're not alone at home. Or I could see it completely dying out, because people have found ways to work without it and can't justify the cost of getting all the downsides and very few of the upsides of going into the office. So that'll be something interesting to watch.

Max: Yeah, yeah. Alright. Cool. So that's all we've got for today. Any last thoughts, or should we wrap this up? 

Aaron: Let's call it a wrap. 

Max: Alright. Great. Aaron, thanks for coming on the show today. 

Aaron: Always fun. 

Max: Have a great week, everyone.

Max Sklar: That's the show. Remember to check out the website at localmaxradio.com. If you want to contact me, the host, or ask a question that I can answer on the show, send an email to localmaxradio@gmail.com. The show is available on iTunes, SoundCloud, Stitcher, and more. If you want to keep up, remember to subscribe to The Local Maximum on one of these platforms and to follow my Twitter account @maxsklar. Have a great week.

Episode 136 - What Are Martingales in Election Predictions?

Episode 134 - AI Breakthrough & Understanding “Understanding”