
Episode 286 - Consciousness Bets, Reverse Turing Tests, and Display Tech

Max talks about a bet that two academics made 25 years ago about the nature of consciousness that has recently been settled. He also talks about the idea of a "reverse Turing Test".

We also get an update on the latest display tech, with reviews on foldable phones and color e-ink making a comeback.

Related Episodes

Episode 213 - Artificial Consciousness Controversy

Episode 281 - The Mixed Reality Moment

Links

Scientific American - A 25-Year-Old Bet about Consciousness Has Finally Been Settled

MIT - Large Language Models and the Reverse Turing Test

New York Times - Google Pixel Fold Review: Foldable Phones Are Improving

Tech Crunch - E Ink’s latest color displays have me dreaming of electronic paper magazines

Transcript

Max Sklar: You're listening to The Local Maximum, episode 286. Time to expand your perspective. Welcome to The Local Maximum. Now here's your host, Max Sklar.

Welcome, everyone. Welcome. You have reached another local maximum. I have some interviews in the can already today, but after all of these episodes about the new constitution that I've decided to write, I just wanted the opportunity to talk to you independently.

When I take too long to do a solo show, I kind of lose some of that solo show muscle memory, and I forget what it's like to talk to you guys for, I don't know, 20 or 30 minutes without taking a break. It's kind of crazy.

Speaking of which, speaking of the new constitution documents, I got some really interesting feedback on these documents as well as some ideas to shore it up. So it will look a bit different from the way I described it on the show.

For those of you who weren't listening to those shows, I basically rewrote our entire government — no, basically made some suggestions as to how we could restructure our government based on some of the principles that we've been discussing for many years here on The Local Maximum.

So a lot of people have told me that they find the document interesting because it was not written by a politician, it was not written by a lawyer, but written by someone like me, an engineer, who kind of approaches it from a different perspective. So definitely check that out: localmaxradio.com/283, 284, and 285, if you want to check out the episodes, but also look out for the actual document, which is coming out pretty soon.

I probably won't do another show devoted entirely to the proposal, but I will come out with my paper soon. And definitely some shows on things like proportional representation and election theory and other ideas on that, which is always very interesting from a mathematical and a computational perspective. All right.

Also, I was recently interviewed for the podcast Data on the Rocks. It has not come out yet, but the host asked me about episode 213, which was on artificial consciousness, where I personally really turned against the idea of AIs being conscious. And this was pre-ChatGPT. So this idea is going to come up again and again in the post-generative-AI, post-GPT kind of world.

In that case, it was an OpenAI scientist who said, maybe our AI is a little bit conscious. And I was like, what does that mean? Are they just trying to drum up controversy and that sort of thing? But I decided to talk about that a little bit today because an article came out recently: Scientific American, June 2023. A 25 year old bet about consciousness has finally been settled. So who was betting and what were they betting on and who supposedly won?

So the two bettors were Christof Koch, who is a neuroscientist, and David Chalmers, who is a philosopher. Both are probably good targets for being guests on the show, because this is a topic that I want to get into more and more when it comes to consciousness and some of the philosophy of mind and philosophy of AI type stuff.

This article here in Scientific American explains their 1994 disagreement. Just as Francis Crick and geneticist James Watson solved heredity by decoding DNA's double helix, scientists would crack consciousness by discovering its neural underpinnings or correlates, or so Crick and Koch claimed. They even identified a possible basis for consciousness: brain cells firing in synchrony 40 times per second.

Not everyone in Tucson was convinced. Chalmers, younger and then far less well known than Koch, argued that neither 40 Hz oscillations nor any other strictly physical process could account for why perceptions are accompanied by conscious sensations, such as the crushing boredom evoked by a jargony lecture. I have a vivid memory of the audience perking up when Chalmers called consciousness the hard problem. And I'll get into that a little bit. The hard problem.

That was the first time I heard that now famous phrase. Chalmers suggested that the hard problem might be solved by assuming that, quote, "information is a fundamental property of reality."

This hypothesis, unlike Crick and Koch's 40 Hz model, could account for consciousness in any system, not just one with a brain. Even a thermostat which possesses a little information might be a little conscious, Chalmers speculated. So this is getting to me. These guys are going off the rails a little bit, but I'd love to hear from them. But let's go on.

Unimpressed, Koch — and I hope I'm pronouncing that correctly, I know it's like the hard ch — confronted Chalmers at a cocktail reception and denounced his information hypothesis as untestable and hence pointless: "Why don't you just say that when you have a brain, the Holy Ghost comes down and makes it conscious?" Koch grumbled.

Chalmers replied coolly that the Holy Ghost hypothesis conflicted with his own subjective experience. "But how do I know that your subjective experience is the same as mine?" Koch exclaimed. "How do I even know you're conscious?" Koch was implicitly raising what I call the solipsism problem, to which I will return.

So the article, which is a really interesting article by Scientific American author John Horgan, goes on later to discuss how both of these people became involved in something called IIT, the Integrated Information Theory of Consciousness.

Honestly, I've been having trouble wrapping my head around IIT. And even as someone who's been reading about this for many years, although not as deeply as some of the other subjects I've been into, some of this "consciousness is information" stuff just sounds crazy to me. I just don't get it. So let's look at the bet itself. What was the bet?

Koch bet that by this time, 2023, there would be some clear evidence for a neural signature of consciousness. What is it in our brain that is actually causing this phenomenon, for us to experience life and not just be particles colliding into each other, and can we measure it? And so it seems like, well, this is a really hard problem, but science moves fast, so maybe in 25 years we'll have a clue as to what's going on.

Turns out we still don't have a clue. All of the theory so far has either been unfalsifiable, or, where there are experiments people have tried, those experiments have been inconclusive. So I guess Koch lost the bet, but it didn't turn out too badly for him.

The bet was only for a case of wine. It wasn't for like $10,000 or $100,000. He actually doubled down on his bet for another 25 years, when these guys will be in their 90s. To which Chalmers said, "Sure, I think I'll win, but I hope I'll lose."

In other words, I really hope we get to the bottom of this hard problem of how consciousness is created, but I just don't think we will. So my comments are, first of all, it's really funny that for all of this talk of artificial intelligence and artificial beings, we as humans still cannot wrap our heads around the idea of consciousness.

In other words, like I said before, we don't have a clue. We seem to have some philosophical discussions around it, which is kind of the germ, the seed of a scientific discussion, but not quite a scientific discussion yet.

We have maybe some working hypotheses, but we have no way to adjudicate these hypotheses. There's no scientific method here to try. In other words, how do we use Bayesian inference on these ideas?

If I say consciousness is caused by X and you say consciousness is caused by Y, do we have any test that can tell us whether it's more likely to be X or more likely to be Y? Is it some kind of quantum effect or some kind of vibrations or whatever? How do those cause subjective experiences?
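To make that concrete, here is a minimal sketch in Python of the kind of Bayesian comparison being described: given two hypotheses and the probability each assigns to some observation, compute the posterior odds. The priors and likelihoods are purely hypothetical placeholders; the point of the discussion is that for consciousness we have no principled way to assign them.

```python
# Toy Bayesian comparison of two hypotheses about consciousness, H_X and H_Y.
# All numbers below are hypothetical placeholders for illustration only.

def posterior_odds(prior_x, prior_y, likelihood_x, likelihood_y):
    """Posterior odds of H_X over H_Y after seeing one piece of evidence."""
    bayes_factor = likelihood_x / likelihood_y  # how much the evidence favors H_X
    prior_odds = prior_x / prior_y
    return bayes_factor * prior_odds

# Equal priors, but suppose the observed data were three times more probable
# under H_X than under H_Y (we have no such measurement for consciousness).
odds = posterior_odds(prior_x=0.5, prior_y=0.5, likelihood_x=0.6, likelihood_y=0.2)
print(f"Posterior odds, H_X : H_Y = {odds:.1f} : 1")  # prints 3.0 : 1
```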

I've never heard a convincing tale about how some physical process causes it. And how can we gain evidence even to distinguish between these theories? As far as I can tell, there is not much in the way of that. And there are, by the way, people who don't believe in consciousness.

They say that consciousness is an illusion, but it turns out that even illusions have physical properties behind them and reasons for them. So you could study illusions and try to figure out what causes those illusions.

I mean, look, color is itself an illusion. Color kind of points to a physical property of something, but the experience of color is something that we create in our mind. Yet we understand color and how light works quite well.

Personally, I think consciousness is real, but I can't ascribe it to a machine because I don't know what consciousness is right now. And by the way, I think it's almost certain that every person on the planet has consciousness. I mean, I think so just by argument from analogy.

If I'm conscious, I look at everyone else, I see they're kind of similar to me, so they must be conscious as well.

I'm going to stick with that argument. I haven't heard a good counterargument to that, but open to it. So one of the questions that arises from what these people are saying is they believe that consciousness can arise out of pure information or a pure computation.

Is consciousness something that is information based or is it something else entirely having nothing to do with computation? And so I think it might be the latter. It might be something that has nothing to do with computation, in which case AI is just a system that may trick us into believing that it has human like qualities.

That said, I agree with Chalmers and Koch as well that I want to believe that we'll discover something about it one day. And so I have more questions than answers. Like, is it physically possible to gather evidence for consciousness within our universe? Or is it something that comes externally from our universe and doesn't allow itself to be inspected? I don't know that we can answer that.

Second, I guess it's possible the universe might not allow itself to be inspected in that way. But I think that the history of science shows us that a lot of things that we thought maybe were impossible to discover one day became discoverable.

So I suspect that it is physically possible, but it's not something that we have a clue on how to do. Because, look, our consciousness obviously connects to the physical world in some way. It's the mind-body problem. There likely is some kind of physical evidence for it. What form that comes in, I have no idea.

A good example that I thought of: we know today what the stars are made of, and different stars have different compositions, and we know what those compositions are by looking at the stars. That would have seemed impossible in the past.

Or learning how far away these stars are; it might have seemed like, well, we'll never know that much about them. Turns out we can tell a lot about them. So maybe the same thing will happen with consciousness, interestingly.

To go back to the hard problem, and to what Chalmers said: he kind of creates something called the meta-hard problem. So first, what is the hard problem?

First of all, it's kind of funny, because the hard problem is that we don't know what causes subjective experience, so we don't know what causes consciousness. If there's no physical component to the problem, it's hard. If there is a physical component, it's an easy problem.

There could be some so-called easy problems that are still really hard. But he comes up with this meta-hard problem, which is, like, not "what is consciousness," but "what is causing us to claim that we have consciousness, or perceive that we have consciousness?"

And I guess he claims that this is something physical that we could maybe discover through the brain. Because if we could discover it through the brain, then maybe we could also discover why people believe what they believe and what their perceptions are.

There might be some brain scanning technology, and by putting that through, I don't know, neural network statistical analyses, we could try to figure something out about that question. So that meta-hard problem, which is, "Why do we perceive consciousness?", would, according to Chalmers, be an easy problem, but still incredibly hard.

But I guess his thought there is that if we begin to use biology and computation to begin to attack that problem, where maybe we do have a clue, then maybe it could give us some clues into the hard problem itself. Just some food for thought there.

All right, so consciousness is one thing. The Turing Test, which is a famous artificial intelligence test, is entirely a different thing. The Turing Test means that a computer can fool a human into thinking it's human.

Actually, scratch that. It's not just about a computer fooling a human into thinking it's human. That's actually pretty easy to do if the human isn't suspecting it. I mean, imagine how many times you've been fooled, at least temporarily, by chatbots, bots on Twitter, all that. The Turing Test actually requires one human and one machine that would both claim to be human, and a judge would have to distinguish between the two.

So it's not like you get fooled in the spur of the moment. The judge actually has a 50-50 chance of guessing right, and you have to really administer a test to make that distinction. That's much harder than a temporary encounter with a Twitter bot. And I believe that hasn't been done yet.
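Here's a small, purely illustrative Python sketch of why that formal setup is harder: with a 50-50 baseline, you need repeated trials to show whether a judge can reliably tell machine from human. The trial counts below are invented for illustration.

```python
# How often does a judge have to be right before we believe it's not luck?
# One-sided binomial p-value against a 50% guessing baseline, stdlib only.
from math import comb

correct, trials = 32, 40  # invented numbers: judge identifies the machine 32 times out of 40
p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials
print(f"Judge accuracy {correct/trials:.0%}, p = {p_value:.4f} vs. chance")
# A small p-value means the judge reliably spots the machine,
# i.e. the machine still fails the Turing Test.
```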

I think even with Chat GPT, you could figure out that it's a machine pretty quickly. But unlike consciousness, we do have a clue how to solve this problem. And with Generative AI and Transformers, we're getting closer to the Turing Test all the time.

Recently in the MIT press, there was an article about a reverse Turing Test. So I thought this was interesting in light of these large language models. So we're going to be turning the tables on the humans. In short, we're going to be turning Turing on us people. What does that mean?

Here's a quote from the article. "The Turing Test is given to AIs to see how well they can respond like humans. In mirroring the interviewer, LLMs — which are the large language models, ChatGPT, all that fun stuff — may effectively be carrying out a much more sophisticated reverse Turing Test, one that tests the intelligence of our prompts and dialog by mirroring it back to us.

The smarter you are and the smarter your prompts, the smarter the LLM appears to be. If you have a passionate view, the LLM will deepen your view. This is a consequence of priming, and mirroring your language ability does not necessarily imply that LLMs are intelligent or conscious in the way that we are. What it does imply is that LLMs have an exceptional ability to mimic many human personalities, especially when fine-tuned.

A formal test of the mirror hypothesis and the reverse Turing Test could be done by having human raters assess the intelligence of the human interviewer and the intelligence of the large language model. According to the mirror hypothesis, the two should be highly correlated. You can informally score the three interviews and connect the dots."
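As a rough, hypothetical illustration of that proposed test in Python: score each human interviewer's prompts and the corresponding LLM replies, then check how correlated the two sets of ratings are. The numbers are made-up placeholders, not data from the article.

```python
# Sketch of the "mirror hypothesis" check described above, with invented ratings.
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

interviewer_scores = [3.0, 5.5, 7.0, 8.5, 9.0]  # rated intelligence of each human's prompts
llm_scores         = [3.5, 5.0, 7.5, 8.0, 9.5]  # rated intelligence of the LLM in that same interview

r = correlation(interviewer_scores, llm_scores)
print(f"Pearson r = {r:.2f}")  # an r near 1.0 would support the mirror hypothesis
```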

So I think the idea is that the smarter the human seems, the smarter the machine seems. And at first glance, that sounds to me like, okay, well, if the machine can mirror a smart person and mirror a dumb person, then the machine has got to be smart, because it has to be smart enough to mirror the smart person.

But it might not mean that. It might just mean that the machine is exceptionally good at copying and mirroring, and once it detects your mode of dialogue and mirrors it, it will appear to have the same intelligence. So that's really interesting.

In addition to what I mentioned before, the OpenAI engineer saying their AI is a little bit conscious, there was also episode 236 with this engineer. His last name was Lemoine; I'm not sure what his first name was.

That's the engineer at Google who, for lack of a better term, claimed that Google's LaMDA had come to life, and there were many ethical concerns there. That case is mentioned in the MIT article, as well as a few other cases where humans and machines were interacting in that way. So basically, in this test, the rater is rating both the machine and the human: rating the machine being judged by the human, and rating the human judging the machine, as administered during the forward Turing Test.

This is some kind of Turing Test inception; it's sort of crazy. I'm not entirely sure what the outcome of the test is supposed to be, other than we see that the two values, how the human speaks and prompts and how the machine speaks back, are correlated. But I think a lot more can be said about intelligence in general, and I look forward to debating this with Aaron when we do our show on the AI Doomers.

All right, one more thing, and this is way easier tech to wrap your head around than artificial intelligence and consciousness and all that. So let's get to something a little easier. Let's give our minds and brains a little break here, especially since we've been living through these really fantastic upgrades in this particular area of technology over the last few decades, and we feel the effects so easily. And that's display technology: what you can see on your monitor, through your glasses, all that.

When a better display comes out, there's no one who has to write a think piece about how to benefit from so-and-so's new display technology. I know how to benefit. I buy it and I look at it. That's it. So we all kind of know what to do.

I remember in the early 2010s when 3D TVs and curved TVs were the latest, and I couldn't find the article, but I remember reading one saying that folding TVs were going to be the next big thing for 2020. Well, that didn't happen. I don't have a TV that folds up. I've seen some pretty cool OLED TVs; a lot of people are getting the ones that look like paintings when they're off, which is pretty awesome. But what is the point of this folding technology? And could it still be on the horizon? It's interesting that foldable technology and the Vision Pro, which is Apple's glasses, are both space-saving technology.

In other words, I have a desk right here in front of me with all of these monitors and all these screens. It would be nice to save some of that space. If I could just put it right in front of my eyes, I could see a full desk of monitors without anything physically being there. And a folding TV would be pretty awesome, even though I don't totally know why it would be awesome.

Honestly, it's not like I'm going to put something in that area. But actually, no, in my apartment in Brooklyn, when it was pretty small, it might have been pretty useful.

So, of course, the Vision Pro, which I spoke about with Aaron in episode 281, promises to remove all other physical displays, which is pretty interesting. But in terms of the foldable technology, the corollary to that is rollable technology. Like, imagine you have a phone or whatever that you can roll up, like a scroll or something. Hear ye, hear ye, you know, all that.

Anyway, this has come up in the New York Times recently. According to the New York Times review by Brian Chen of the Google Pixel Fold, these foldable phones are getting quite impressive. They're getting better and better. So maybe it's not just a fad. Maybe it will be standard someday. And I, for one, would celebrate the return of the flip phone, flipping it shut to say goodbye, click. That could be a lot of fun. The OLED technology that makes these new flip phones possible is expensive. That foldable Google phone is $1,800.

But display technology does move fast. Prices do come down over time, as we know from our TVs and such. And if they can get the price down, as the author of this review expects, then maybe this will become the standard.

From the review, it sounds like he finds this technology way better than either a phone or a tablet. Unlike a tablet, the fold allows you to kind of open it up and read it like a book, whereas a tablet is this big clunky thing in your hand. And it's more than a phone because that extra screen real estate apparently is a game changer. So we'll see.

Another display tech that I came across, this is going to be a blast from the past, from the 2000s, this time in TechCrunch. And all of these articles will be up on localmaxradio.com/286 for this show. It's color E Ink. Remember E Ink? I don't know if you have a Kindle. I should have gotten my Kindle out for this.

I don't know if you have a Kindle or an Oasis from Amazon where you can read books. Some of you might still have e-readers. As you can see, I have a lot of physical books back here still. So maybe ten years ago, I thought I was going to move over to e-reading, and I never quite did it. I have some reasons for that. I remember when the Kindle came out, I was working at Wireless Generation, an education technology company at the time.

And we were coding for the Palm Pilot. So we had our educational assessments or whatever. We had teachers marking things down on the Palm Pilot that would sync to the web. And so we were really interested in mobile technologies used for education.

So when the iPhone came out in June 2007, no one was going out and buying it immediately. But our VP of Product bought an iPhone for the company, brought it into the main conference room, and we checked out that first iPhone in June of 2007. It was pretty awesome.

And then the same thing with the Kindle in November 2007: they brought that in, and back then it was a big Kindle, not even the size of a book, but maybe the size of this yellow pad right here, these big yellow pads that I use to write things down, something like that.

And the refresh rate, it took like two or three seconds to refresh the next page. It was still pretty cool. And so we got to look at some of the devices of the future back then. That was a really good time to be in tech for new display tech, 2007. I don't think there really are those kinds of hardware product launches now.

I mean, now it's like, okay, they launched the Vision Pro. Maybe it'll come out next year. People are excited about it, but there just aren't those kinds of game-changing hardware ideas that there were. That might change, by the way. But anyway, so I saw the Kindle in November 2007.

A few years later there were some articles out about color E Ink, but like the folding phones, it sounded too expensive. The refresh rates were too slow. The color wasn't even that good. And honestly, people were reading books that didn't have color; they were all words. So they couldn't make it into a working consumer product.

Interestingly enough, I think there's a way to integrate color into books. Like, if you're using the reader and you click the book to look up a dictionary definition, you might want to have some of the commentary be a different color or something like that, to enhance your reading experience, so you could see what the actual words of the book are.

But apparently they never got that to work as a product. I remember, like ten years ago, there were these things coming out on the market called color e-readers, but they weren't really color e-readers; they were really some kind of glorified iPad.

So it's like, okay, I might as well just read on my iPad rather than reading on my Barnes & Noble, whatever that thing was called, not their e-reader, but the one that was basically a tablet. And so it's like, why am I reading here? This is not a good reading experience.

But a recent article came out in TechCrunch saying that some of this E Ink technology is getting better and better, and they speculate as to where it's going to go. Are e-readers still as popular? They still sell them. I've gone back to physical books. I have more space since leaving New York, though, which is why I have room for these physical books. I feel like I'd love to have books both on the e-reader and in physical form, you know. Amazon or whoever should sell them as a package, in my view, because it's a better experience.

You get the physical copy and then you could read it on the go as well. And this is something that I did not consider in 2012 and 2013: these days, I feel with digital books, they can change it on you. When it says something inconvenient in the current year, something that's not politically correct, they want to go back and change what the author says. They'll just wipe it clean.

So you want to change my physical book, come and make me. But that's kind of the danger in digital books still. I think color E Ink is cool technology. And so this article has some stuff on E Ink. There's one company called E Ink in the Boston area that makes the displays for the Amazon Kindle and all the other e-readers, and they were presenting at CES. So again, E Ink is in the Kindle and all the other e-readers. Let's read this.

E Ink posted up at the very, very nice hotel and casino, the Venetian, for CES 2023. And inside its makeshift showroom, the MIT spinoff crammed its latest tech, including pieces of its wacky BMW wrap and its latest Gallery 3 color displays.

The latter tech is now trickling into the market, starting with devices like the PocketBook Viva. And let me tell you, these displays look outright vivid next to the washed-out hues in E Ink's Kaleido color displays, which debuted just two years ago. Gallery 3's CMYK displays can spit out 50,000 colors at 300 DPI, way, way up from the Kaleido's 4,000 colors. "We aren't ever going to be the best movie-showing screen," US business lead Timothy O'Malley stated the obvious in an interview with TechCrunch, but E Ink's goals are still stretching way into iPad territory.

Eventually, E Ink aims to build a magazine reading experience that's good enough to win over even the most demanding publishers, O'Malley told TechCrunch. He also pointed out the popularity of, and great use case for, signage, and I agree that is a great use case, because you don't need a fast refresh rate for signage. It could be 10 seconds, or, as they're achieving here, one to two seconds to refresh color, whereas refreshing black and white is measured in milliseconds, around 100 milliseconds, nearly instantaneous.

But one to two seconds, or even 10 seconds, to change a sign? Like, oh, I want to change the specials that I have in my store today. Ten seconds, I don't care. Two seconds, great. So as long as it looks good and can be changed inexpensively, I think that's a good use case. So the company has a great demo which shows promise. But the journalist here, and again, who do you trust more, journalists or people pimping their company? I don't trust either of them. I'm not in a very good position here.

But the journalist here also stated that the company got quite cagey and hand-wavy when asked, okay, when does this actually come out on the Kindle and such? I'm sure the Kindle does some great innovative stuff. Or not the Kindle; I'm sure the E Ink company does some great innovative stuff. I'd even consider having them on the show, but it sounds like we can't quite count on them having this Minority Report-style changing newspaper or magazine quite yet.

I don't think they're there. It sounds like they're not even making any promises. So that's a little disappointing. But still, I want to follow this technology in the future.

What do you think? Email localmaxradio@gmail.com or join our Locals at maximum.locals.com to discuss this tech and what you think about it. All right, next few weeks, I've got some interviews with people that I met at PorcFest, including Peter Earle on ESG, the environmental, social, and governance requirements imposed on companies, why it's harmful, and why it might be going out the door, thankfully.

I want to talk to Aaron about a bunch of things, including AI doomerism. So a lot of great things in the pipeline. Always remember to subscribe to The Local Maximum. Have a great week, everyone.

That's the show. To support the Local Maximum, sign up for exclusive content and our online community at maximum.locals.com. The Local Maximum is available wherever podcasts are found. If you want to keep up, remember to subscribe on your podcast app.

Also check out the website with show notes and additional materials at localmaxradio.com. If you want to contact me, the host, send an email to localmaxradio@gmail.com. Have a great weekend.

Episode 287 - The Rise and Fall of ESG with Peter Earle

Episode 285 - Max Changes the Constitution Part III