Transcripts

Intelligent Machines 836 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Leo Laporte [00:00:00]:
It's time for Intelligent Machines, the show where we talk about AI with Jeff Jarvis and Paris Martineau. Our guests this week have created a really useful website for students and anybody who wants to understand the good and the bad of AI. Carl Bergstrom and Jevin West talk about the BS machines, next. Podcasts you love, from people you trust. This is TWiT. This is Intelligent Machines with Jeff Jarvis and Paris Martineau, episode 836, recorded Wednesday, September 10, 2025.

Leo Laporte [00:00:40]:
I see OJ and he looks scared. It's time for Intelligent Machines, the show where we talk about all the latest AI news, robotics, and the little doohickeys all around you that are getting smarter and smarter and smarter. Paris Martineau is here and she's going to join us in a little bit. She's an investigative journalist at Consumer Reports. Jeff Jarvis is also here, professor of journalism, well, emeritus, at the City University of New York. See, I avoided the mention of the Craig Newmark Graduate School of Journalism.

Paris Martineau [00:01:13]:
Oops, it snuck in there.

Leo Laporte [00:01:16]:
Author of The Gutenberg Parenthesis and Magazine. Niagara Falls! Slowly I turned. So, a little explanation. Remember last week I explained how the logistics worked: we recorded the interviews last week and this week out of order due to scheduling issues. So last week, and I hope you heard it, was a really fascinating interview with Karen Hao, the author of Empire of AI. This week we were going to talk to two professors from the University of Washington who've done some really good work on teaching their students, and by extension all of us, how to think about AI, how to be a critical thinker when you encounter AI-generated content, how to determine if it's AI-generated. And I thought it was really, really good. So unfortunately, Paris was not here for that interview, but we were very fortunate.

Leo Laporte [00:02:07]:
Harper Reed joined Jeff and me as we interviewed Carl Bergstrom and Jevin West. So we're going to go to that, and then when we come back, more Intelligent Machines. Watch. Hello, everybody. Our guests today on Intelligent Machines are intelligent, which is nice. They're actually professors. But first, before we get to them, let me introduce Harper Reed, who is filling in today for Paris Martineau, at least for this portion of the show. Great to see you, Harper.

Leo Laporte [00:02:34]:
He's an AI guy; his company is 2389 AI. And of course, always a welcome guest on our shows. Thanks for joining us, Harper. We appreciate it.

Harper Reed [00:02:42]:
Thank you for having me.

Leo Laporte [00:02:43]:
Good to see you. Jeff Jarvis is here, professor of journalistic innovation emeritus at the Craig Newmark Graduate School of Journalism at the City University of New York, also at Montclair State University and currently at SUNY Stony Brook. His books: The Gutenberg Parenthesis, now in paperback; Magazine, now in audio; and of course The Web We Weave, which is a manifesto to preserve our glorious Internet.

Jeff Jarvis [00:03:14]:
Good to see you, boss.

Leo Laporte [00:03:15]:
Yes. I think this is a good topic. I'm excited. Some years ago, Carl Bergstrom and Jevin West wrote a book called Calling BS: The Art of Skepticism in a Data-Driven World. But you have been teaching. But first, Carl, welcome.

Leo Laporte [00:03:36]:
And Jevin, welcome to Intelligent Machines. Great to have you.

Carl Bergstrom [00:03:39]:
Thanks so much.

Leo Laporte [00:03:39]:
It's great to be here.

Jevin West [00:03:41]:
Thank you for having us.

Leo Laporte [00:03:42]:
Yeah. Professors. Sorry, Jeff.

Jeff Jarvis [00:03:45]:
Professors themselves.

Leo Laporte [00:03:47]:
Yes.

Jeff Jarvis [00:03:47]:
Full titles in here.

Leo Laporte [00:03:48]:
Yes. You guys teach at UWA? Is that where you.

Jeff Jarvis [00:03:52]:
Yeah.

Carl Bergstrom [00:03:53]:
University of Washington.

Leo Laporte [00:03:53]:
Yep. Yep. I say UWA because if I say UW, it's unclear.

Jevin West [00:03:58]:
So Wisconsinites get kind of mad about it.

Leo Laporte [00:04:00]:
They get upset. So let's be clear: Washington State. So let me ask you both, what's your technical discipline? I don't think there is a BS degree at this point. Well, there is.

Jevin West [00:04:14]:
I put it on my CV. I now have BS studies on my CV.

Jeff Jarvis [00:04:19]:
I think a school of BS would do.

Leo Laporte [00:04:21]:
Well, I've got a BS from university. But what do you teach normally? What is the academic discipline?

Carl Bergstrom [00:04:32]:
Well, my background's in biology, actually.

Leo Laporte [00:04:34]:
Oh, interesting.

Carl Bergstrom [00:04:35]:
Yeah, I teach a course on evolution and medicine and how evolutionary thinking can sort of inform our medical curriculum. I've got a whole textbook on evolution, and I've taught various courses on game theory and animal behavior and epidemiology and all of that.

Leo Laporte [00:04:52]:
That's cool.

Carl Bergstrom [00:04:53]:
I got really interested in misinformation a number of years ago through conversations with Jevin.

Leo Laporte [00:04:59]:
Yeah. And Jevin, are you a biologist?

Jevin West [00:05:02]:
Well, I actually have a PhD in biology, but I'm now in an information school, which is basically a group of scholars that don't really have a home. I study technology and the ways that it can impact our everyday lives and institutions like science. I'm mostly a computational social scientist now, so I study the ways in which rumors spread online and in science. But like you said, BS study spans everything.

Leo Laporte [00:05:32]:
So.

Jevin West [00:05:34]:
Yeah, many of the courses that have also taught this come from other disciplines: psychology, engineering, business, et cetera. So it is one of these spanning disciplines.

Leo Laporte [00:05:45]:
I am going to deviate a little bit, Carl, just because I clicked on your website and I am just blown away by your beautiful bird images. You're a bird photographer as well?

Carl Bergstrom [00:05:57]:
Yeah, I've always loved birds, ever since I was a kid. So it gives me a chance to reconnect with the biology that I fell in love with.

Leo Laporte [00:06:05]:
Yeah, that's great. So this is all very timely. I'm going to steer people to the Calling BS website, which, to keep it family friendly, we are not spelling out. But if you look at the lower third or go to the website, you will see the actual link. Let me pull it up before you show it here, because I'm on the book.

Jevin West [00:06:36]:
Also, Leo, there is a non-swear-word version. If you go to callingbull.org, you won't have the swear word.

Leo Laporte [00:06:41]:
Oh, you're so smart.

Carl Bergstrom [00:06:42]:
Yeah, that's for the original course, not for the LLM one.

Leo Laporte [00:06:46]:
Oh yeah, then we get this. Okay. Yeah. Because I want to show what you've done, which I think is really cool: you've developed, I guess, a curriculum to help your students parse all this stuff, right? That they're dealing with in the world today. Yeah, yeah.

Carl Bergstrom [00:07:05]:
I think it's a huge challenge, right? How do you be a scholar or a writer or a thinker or a human being in a world where AI has become ubiquitous? And so Jevin and I sat down and we talked with hundreds of our students and with administrators and so many other people, and developed a course that we think every college freshman should take. Essentially, we put it all online so that hopefully every college freshman can.

Jeff Jarvis [00:07:32]:
So you had the book and you had the course before AI and LLMs took over the world, right? So what was the main thing that you changed and added in that curriculum?

Carl Bergstrom [00:07:45]:
It's really an entirely new curriculum. It's a follow-up, as opposed to a second edition or something like that. The whole curriculum is about basically what it means to be a thinker and a student in an AI world.

Jevin West [00:08:02]:
Yeah, and I was just going to say that many of the teachers and students around the world who have taken the course have asked us for that second version. And we've been tempted, and actually at some point probably will write a second version of the original book, which is focused on data and more generally on BS. But then this thing kind of just landed on the earth in late 2022 that is one of the biggest BSers of all. And we said we couldn't ignore that.

Leo Laporte [00:08:29]:
And had to focus on that you're not anti AI. In fact, the website, I think, does a really good job of kind of saying what AI can and cannot do. It's also a beautiful website.

Carl Bergstrom [00:08:45]:
Yeah. I mean, for us, the whole course is really designed around this duality that we experience when we use AI ourselves. On one hand, and we can talk about this, it's very literally a BS machine, in the precise philosophical sense that Harry Frankfurt talked about when he wrote the original essay on BS. But at the same time it's also tremendously useful. We use it every day, we learn from it, we see our students learning from it, we see scientists using it for their work and so many other applications. And so that's the real mystery, right? How can this simultaneously be literally a bull machine and also very, very useful?

Carl Bergstrom [00:09:28]:
And rather than lecturing at students and saying, oh, this is what you have to do, don't use these, these are bad, we want them to recognize that duality themselves. And they really do. They're already feeling a lot of anxiety and concern about it, because they find it very useful but they're troubled by aspects of it. And we want to lead them through a chance to explore and sort this out for themselves, and grapple with this dialectic themselves.

Leo Laporte [00:09:57]:
Yeah. And we also live, as someone once famously said, and I know who it is but I'm not going to give her any credit, in a post-fact world, an alternative-facts world. Our government is very actively promoting non-factual material. So it's really important for us, not just as AI users but as citizens, to be able to understand what's true and what's not. There's a lot of disinformation, isn't there?

Carl Bergstrom [00:10:24]:
Yeah, absolutely. That was where we were headed with the first book. Jevin, you've done a ton of work on this.

Jevin West [00:10:30]:
Yeah, for sure. I mean, this is where we're almost most concerned. Well, it's hard to say that, because every time I say "most concerned" there's some new ability that arises. But I think this is where we wake up a little bit worried every morning: that these technologies are going to make this disinformation and misinformation problem that we're all pretty aware of even worse. And that's really one of the motivations behind this, to help us think about it as individuals, but also to think about some of the bigger issues for society.

Leo Laporte [00:11:00]:
When you talk to your students, especially students coming in this year, they basically used it in high school, almost certainly. Right? It's part of their process. What do you find difficult to explain to them? What is it that you really want them to understand? You can't say to them, stop using it.

Leo Laporte [00:11:28]:
You can't say, you're not learning if you use it, because they're not going to do that.

Carl Bergstrom [00:11:33]:
I think one of the things that I set out to explain to them is to help them understand the agency that they're giving up when they turn over their thinking to it.

Leo Laporte [00:11:44]:
That's a good one.

Carl Bergstrom [00:11:45]:
And so we talk about that a lot. And it's not a perfect answer to your question, but I find they're very good at understanding it once we go through that conversation, and they're ready to hear it. And so it's quite rewarding. We talk about what goes into your own human writing. We talk about the authenticity of your own human writing, and what it means to be able to actually communicate as a human being, and how we do that through writing, and what we are replacing our own viewpoints with when we hand that work off to a large language model. Just as an example, one thing that I find with a large language model: if I ask it, write this paragraph for me or answer this question for me, it'll write something down, and I'll look at it and I'll think, yeah, it seems about right, and I'll say, that's good. But it's actually not the same thing I would have written at all. It might be quite different from what I would have written if I had done the work of thinking through it and writing it out.

Carl Bergstrom [00:12:40]:
And the students see that pretty quickly. They're pretty sophisticated about that. And for reasons I don't fully understand, they are already carrying some anxiety with them, maybe a very natural human anxiety, about handing off that agency. And so they seem very ready to hear this.

Leo Laporte [00:12:57]:
That's nice. Of course, in four years they're going to be going out into a world where there may not be jobs because of these capabilities. Well, that's the key thing.

Jevin West [00:13:06]:
That's one of the things that I try to communicate to the students: AI is here to stay, so you better spend your time here at the university figuring out ways to distinguish your abilities from these LLM abilities. And then a lot of times that'll spark some conversations about, well, what is it that makes us unique, what makes us worthy of being hired by a future employer, if LLMs can do a lot of the things that they're doing in class. So that's part of what we're trying to do.

Leo Laporte [00:13:34]:
Now's the time for them to start thinking about that, for sure. The minute they hit college, they've got to start thinking. You use a term I really like; I don't know if you coined this or if it's a well-known word, I hadn't heard it before: anthropoglossic. AI chatbots sound like humans, right?

Carl Bergstrom [00:13:55]:
They're designed to. Anthropoglossic: they speak like humans, right? Glossa, you know, speaking. And that's what they're designed to do. They're not anthropomorphic; they're not designed to look like humans or be shaped like humans. They're anthropoglossic: designed to seem like you're talking to a human. And they're really, really good at it.

Carl Bergstrom [00:14:12]:
And so that can lead to all kinds of confusion. I mean, it makes me confused, right? I apologize to them. The other day I was experimenting with one, and I was looking to see what happened when I contradicted it and I was wrong, you know, how sycophantic was it? And I found myself feeling guilty for gaslighting it. And I've written about how you shouldn't believe this, but it still happens.

Leo Laporte [00:14:37]:
It's natural. It's how we are. This is our biological design, right?

Carl Bergstrom [00:14:42]:
Yeah, exactly.

Leo Laporte [00:14:43]:
Harper, I know you're a big fan. Harper's been on the shows with us to talk about vibe coding and how he uses it. He has an AI company. But I think you probably still have to grapple with this kind of thing too, right?

Harper Reed [00:14:55]:
I mean, I kind of love it. We as the team talk about this a lot: what happens when you get frustrated with it? How do you emote towards it? And there's this other thing that I don't see talked about much. When all these interfaces come to fruition and we have a chat box in every single interface, whether it's Google or an Excel spreadsheet or an email client, you're going to get bad news and you're going to emote into the chat box, because it seems so similar to language. And you're going to say something like, oh man, what a bummer. And then suddenly it's going to respond back to you, as an LLM does. But it's going to start being a therapist. Not because you want it to be, but because it's going to say, oh, that sucks. Like, I'm sorry you're upset.

Harper Reed [00:15:44]:
Like, do you want to talk about it? And all of a sudden you have a therapist inside of Excel, because your financial model says you have to lay everyone off or whatever. And I think this is something, from a safety standpoint, that we don't really talk about much. And I'm not going to say that word that y'all use, because it has too many syllables; I just have a strong rule against that many syllables. But once it's anthropomorphized and you start to put these.

Harper Reed [00:16:07]:
Your brain starts thinking of it that way. And then also it's using the same metaphors that we use to text our closest friends, that we use to interact with our closest friends. You're of course going to say, like, oh man, that sucks. And then it's going to respond. And I just don't. I don't really trust Excel as my therapist.

Leo Laporte [00:16:23]:
No, you should.

Harper Reed [00:16:24]:
I don't trust email or Gmail as my therapist. And we're already seeing the ramifications of this when people start relying on this for emotional support, you know, some horrifying news over the last week or so. But I do think there's a lot to be thought out around safety, et cetera, that people are not thinking about.

Leo Laporte [00:16:43]:
This is what I worry about: can we ever be safe? I mean, this is the argument. We did an interview with Karen Hao, and one of the things she says in her book Empire of AI is that this is in the nature of transformers: that no matter how big they scale and how smart they get to seem, ultimately there are always going to be these edge cases where they fail. Now, humans fail too, all the time. That's not unusual, right?

Harper Reed [00:17:16]:
I think there's the other side of this. I just googled "helmet laws suck," that sticker from the 70s that all my relatives had on the side of their helmets. I think humans are averse to safety, and especially to restraint, anything that appears as though they are being restrained. We are just like, no, I don't want that. Why would I want a seat belt or whatever it is? And then it takes a strong public health intervention to solve some of these issues.

Leo Laporte [00:17:44]:
I hate it when ChatGPT says, I can't generate that image. It's like, screw you, give me that image.

Harper Reed [00:17:52]:
That does work, which is very strange. And then it's teaching these other bad behaviors about how to do this. And this is one of the reasons why we at my company keep focusing on this pro-social AI stuff: we don't want to create interfaces that are leading users down paths where they have to coerce an AI to get the output that they want. Because I also don't want those same people to be thinking, oh, now I have to coerce this person, I have to coerce that person. The language itself around it, I think, is really messed up. So I think it's good to lean into these ideas.

Jeff Jarvis [00:18:23]:
So is it logically necessary for them to be sycophantic, in the sense that they always want to please you, or is there a different way to design these things so that they don't try to BS you?

Carl Bergstrom [00:18:35]:
Well, there are two separate questions there. The sycophantic part, which I'll take, is, I think, mostly (and Jevin, you know perhaps more about this than I do) a consequence of the reinforcement learning from human feedback, that process that they've gone through. If you set up a raw LLM, it's going to be a fairly decent autocomplete, but it's not going to really understand what questions are, what answers are. It doesn't really know what you want from a conversation with it, et cetera. And so a lot of that training was originally done mostly by paid employees, and is now done mostly by those of us using the large language models at volume. And that's where it comes from when the machine says, that's a great question.

Carl Bergstrom [00:19:17]:
Let me tell you, and I'm really glad you asked that. Instead of saying, you moron, you asked the same stupid question five minutes ago, I explained it to you three different ways, and I've actually never spoken to a human quite as slow as you. It'll never say that, right? Because those get ranked badly, and so you just turn it off.

Jevin West [00:19:34]:
And then move to the next.

Carl Bergstrom [00:19:34]:
And so it always praises me for asking it stupid questions.
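Carl's description of why feedback-based training selects for flattery can be sketched in a few lines of Python; the candidate replies and preference scores below are invented purely for illustration, not real training data:

```python
# Toy sketch of the RLHF dynamic: candidate replies are scored by
# human feedback, and training favors the highest-scoring reply.
candidates = {
    "That's a great question! Let me walk through it again.": 0.92,
    "You asked the same stupid question five minutes ago.": 0.15,
}

# Pick the reply with the highest human-preference score;
# the flattering opener is what survives this selection.
best = max(candidates, key=candidates.get)
print(best)
```

The rude reply is never chosen, not because it is false, but because raters thumbs-down it; that is the ranking pressure Carl is pointing at.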

Leo Laporte [00:19:37]:
We know you could turn it off, because OpenAI did in ChatGPT 5. There's a knob; they turned it down. Yeah, that's right.

Carl Bergstrom [00:19:46]:
But it's still really bad, right? I mean, it's still.

Leo Laporte [00:19:48]:
I have to say, though, I always hated it when professors did the same thing. I mean, professors will often say, that was a great question. And that drives me nuts.

Jeff Jarvis [00:19:58]:
Leo, that's the way we buy time to come up with an answer.

Leo Laporte [00:20:01]:
Well, I think it's also just a natural thing. And this is the thing: these machines are aping us in some respect, because of the reinforcement training. We taught them to do that. I mean, it's the same.

Carl Bergstrom [00:20:14]:
I mean, I really feel like this is a serious danger of these machines. And we see this with a sort of AI-induced psychosis.

Leo Laporte [00:20:21]:
Right.

Carl Bergstrom [00:20:21]:
That people go down these rabbit holes, and the machine keeps telling them: you're right, that's a great idea, here, I can help you confirm that crazy theory that you now have, we can find the secret to the universe in the timing of the ads on the Major League Baseball broadcast. The machine will never tell you you're wrong.

Leo Laporte [00:20:42]:
And so I think that's problematic.

Carl Bergstrom [00:20:44]:
This is problematic. It explains AI-induced psychosis. It also explains billionaires, I think.

Leo Laporte [00:20:48]:
Yeah, because they're not doing it with AI, they're doing it with sycophantic underlings. But it's the same effect.

Carl Bergstrom [00:20:54]:
Surrounded by people who will never tell them.

Leo Laporte [00:20:56]:
Yeah, that's stupid.

Jeff Jarvis [00:20:57]:
Oh, yeah, you're smart.

Leo Laporte [00:20:59]:
Yeah. I don't know whose story this was, because you don't have a byline on it, but I love this vibe coding story in the slide deck. By the way, everybody should look at this; it's wonderfully produced. Basically, anybody, if you're a high school teacher, you should probably adopt this as a curriculum. Modern day orchestra.

Jevin West [00:21:20]:
Many are, actually.

Leo Laporte [00:21:21]:
That's good.

Jevin West [00:21:21]:
We're having these conversations right now, and we're learning a lot from the teachers and the students themselves about what's working. And a set of instructions is provided at the end.

Leo Laporte [00:21:29]:
And so, yeah, it's beautifully designed, and it's designed to kind of suck you in, you know, with this little "hello" from the Macintosh. But was this Carl's story or Jevin's?

Carl Bergstrom [00:21:40]:
Yeah, that was my story. Yeah. Yeah, it's true.

Leo Laporte [00:21:42]:
I love this story. So when you were a kid and you got a computer, you were eight years old, a Commodore PET, and you said, hey, can we make a version of Space Invaders where the aliens don't shoot back?

Carl Bergstrom [00:21:56]:
Yeah, because I was always losing quarters at the arcade.

Leo Laporte [00:22:01]:
And your buddy said, no, we can't do that.

Carl Bergstrom [00:22:06]:
And like, what kind of idiot are you?

Leo Laporte [00:22:08]:
Of course you can't do that. That's not how computers work. Except that you've done it.

Carl Bergstrom [00:22:13]:
Yeah, I had to wait. Yes, I had to wait 35 years.

Leo Laporte [00:22:16]:
But well, let's be careful.

Jevin West [00:22:18]:
LLMs did this one.

Leo Laporte [00:22:20]:
So this is Space Invaders that you vibe coded as well.

Carl Bergstrom [00:22:23]:
Yeah, it was very simple. We just said, you know, write us Space Invaders where the aliens don't shoot back. And it said it could do that, and then that's what it did.

Leo Laporte [00:22:34]:
I can tell you, I have written a Space Invaders game in classwork. Mine shot back, but it is very hard. This is amazing.

Carl Bergstrom [00:22:43]:
You know, it's part of the amazing duality of this. Yeah, it's a BS machine, but it did that, and it worked. On the other hand, other times I ask it to write code and it's the only thing I've ever seen that's been able to not only crash Mathematica but crash the entire system.

Harper Reed [00:23:03]:
You obviously haven't met many programmers.

Leo Laporte [00:23:06]:
Well, this is always my point: why do we expect these guys, these AI guys. See, I did it again. Why do we expect these machines to be any more reliable or robust than we are?

Carl Bergstrom [00:23:17]:
Well, there's an important difference, right? Which is that even if they're not, when humans do things, they're accountable for them, and these AIs aren't. And so when you think about the therapist or something like that, or just the behavior.

Leo Laporte [00:23:28]:
Right.

Carl Bergstrom [00:23:29]:
So we had that horrific news that Harper alluded to earlier. Somebody went to prison for years for doing the same thing that the AI did. But here there's no accountability.

Leo Laporte [00:23:38]:
Right.

Jevin West [00:23:39]:
Well, and also, with a computer, we expect predictable results. I mean, it's been sort of encoded even in science fiction. You kind of scrolled by Lt. Commander Data from Star Trek. When he spoke, you didn't expect him to have these kinds of problems. So yeah, with computers it's different. When you put four plus eight into a calculator, you kind of expect the same kind of response.

Jevin West [00:24:02]:
And this is just a different kind of machine. And it's encoded to be, as we've talked about. It's not that we have to be careful not to anthropomorphize them, because they already are anthropomorphic; they seem human. That's what they've been optimized for. And so that's why it makes it hard for us to reconcile those predictable qualities of computers with how they're really behaving many times.

Leo Laporte [00:24:25]:
Maybe we made a mistake doing that though.

Harper Reed [00:24:27]:
I mean, I think we certainly did, but we just gotta roll forward on this one. And the mistake, I don't think, started with AI. I think it started much earlier. I was gonna say maybe with the slide rule, even.

Leo Laporte [00:24:45]:
Yeah.

Jeff Jarvis [00:24:49]:
There's, you know, close enough for jazz, close enough for AI. And the slide rule, in my eyes, had a kind of approximation potential to it.

Leo Laporte [00:24:56]:
Yeah. Actually that's how you learned about the limits of precision. I mean.

Carl Bergstrom [00:25:01]:
Exactly.

Leo Laporte [00:25:02]:
Yeah.

Jeff Jarvis [00:25:02]:
A computer we expected always to give us an answer. And beyond that, I mean, I was never a programmer, but it always struck me that the seduction of working with computers is that there was always going to be an answer. Unlike life, you can always find a solution to the problem. It's always there. It may take you forever, but it's there. And that's not true in life.

Harper Reed [00:25:24]:
So I wonder. I have two things. One, looking at sci-fi, AIs oftentimes do bad things in the corpus of sci-fi. And I would say they often are out doing things that are unreliable or negative or bad, more so than they are Commander Data.

Leo Laporte [00:25:45]:
No HAL 9000 computer has ever made an error.

Harper Reed [00:25:49]:
Exactly. And the second thing was, there's this thing that happened with social media. I remember, and I might be misremembering, but I think it was danah boyd who was talking about how a lot of these young people have a little more progressive point of view around privacy. That doesn't mean that they're not susceptible to all these problems, but they have a little stronger perspective on privacy and whatnot. And I wonder how many young people who are growing up within this era are changing their perspective around the precision aspects of computers. If your search engine can't count the number of Rs in "strawberry," or has problems with facts that you have to constantly check, are you going to build in processes that are different from those of all of us who've been doing this for 30-plus years, who have an expectation that 2 plus 2 is always going to be 4, not sometimes 5? And I really want to see how they are reacting, because I'm always amazed at how they survive with it.
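Harper's contrast is easy to demonstrate; a trivial Python sketch, using only the examples from the conversation, of the determinism conventional computing guarantees and sampled LLM output does not:

```python
# Conventional computation is deterministic: the same input always
# yields the same output, run after run, unlike a sampled LLM reply.
word = "strawberry"
print(word.count("r"))  # 3, on every run
print(2 + 2)            # 4, never sometimes 5
```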

Jevin West [00:26:49]:
Yeah, and I think it's a great point. And actually, Carl and I have seen this with some of our students, and even with Teddy, who's Carl's son, when we've talked to him about this. One student in one class, when we were talking about this, realized this kind of fluidity of answers with ChatGPT and some of these LLMs, and used it mainly to get search terms so that he could go back to traditional search and get the references, because he knew that he couldn't fully trust the answers. He just needed the search term. And so they use these different tools: the sort of modern-day information retrieval system that seems to be emerging with ChatGPT and others, and then traditional search. With any sort of tool set, I think they are sort of surviving in this world, and they probably do look at computers differently than we did when we were growing up.

Carl Bergstrom [00:27:39]:
I think the important thing is to figure out how to get them to the point that they can learn to do that really effectively. You know, I've seen students that are really good critical thinkers. They're really empowered. They've maybe spent a fair amount of time with this and they're very efficient with it, much better than I am. They're really good at interrogating AI to figure out whether its answers are correct, to make sure they understand it. They'll argue with it. Instead of doing a single search where they're trying to get a perfect response, it's this conversation where they dive deeper and deeper. It's this sort of conversational information retrieval.

Carl Bergstrom [00:28:12]:
It's very, very effective when they do that. And a lot of what they're learning is like, well, what can I trust it about? What can't I trust it about? What kind of evidence do I need from it? And as Jevin says, you know, in some cases, maybe what you really need is you need the right search term to put in. Go look in the Wikipedia, right? But in other cases, you want an explanation. The key is you can't just trust its first explanation. It gives you an explanation for whatever you're trying to. You're trying to understand a problem in physics. And it says, okay, I'll do this. And then you say, well, wait a.

Leo Laporte [00:28:40]:
Minute, I don't understand.

Carl Bergstrom [00:28:41]:
That doesn't make sense, given this thing that I think. And they have to learn how to argue with it. And that's really different than our model of information retrieval. I mean, Jevin and I have been doing information retrieval stuff for the last 20 years, and our model of information retrieval has never had a step where the user argues with the system. And so the question is basically: I think there is a place that students can get to, and the question is, is that where people are going to get to? Is the next generation going to be really facile at this, or are there going to be a small number of very talented power users that can do that, and everyone else is just going to accept whatever BS it spews the first go?

Jeff Jarvis [00:29:22]:
I talked to Lev Manovich, who's a digital humanist and designer, and he's been very active using it and wrote a book about it. And he says that when he enjoys AI most, it's when he thinks it's wrong. Because it forces him then to question himself and argue with it, as you say, and sometimes become more set in his way, or sometimes say, I've seen a new way here. And I think that that's a healthy way to teach people to use it. I'm writing a book proposal right now, and there's that horrible part I always hate, which is the other-books-in-the-market section. And so I went on to Perplexity, and I went on Gemini, and I said, give me books that are about this topic.

Leo Laporte [00:30:04]:
Make up some books I can cite.

Jeff Jarvis [00:30:05]:
Well, that's exactly what it did. So I look at the first book and I think, oh, hell, I hadn't heard of that one.

Leo Laporte [00:30:10]:
Damn.

Jeff Jarvis [00:30:11]:
And I looked it up and it didn't exist.

Leo Laporte [00:30:12]:
Right.

Harper Reed [00:30:15]:
That's the problem with Gemini. Like, I love that Google has obviously the best models and some of the best researchers in the world, but their products are always just so weird to use. I've been using it a lot lately, and I like it. But you're just like, why are you.

Jeff Jarvis [00:30:26]:
Why?

Harper Reed [00:30:26]:
I thought we solved this problem. One thing that I really wonder about is this: in tech, we often talk about, if you find a bug, you roll forward. You just fix the bug and you go ahead. I think yelling at these things or arguing with these things is a behavior that we will lose. I think what will happen is we'll just go, oh, it's wrong.

Harper Reed [00:30:52]:
Like, it doesn't matter. And then, for us, what we've seen in my little group of friends that we work with, you have a couple of choices. One is you can swear at it, and you're not doing it for better results. You're only doing it as an emotive response, and it's 100% to vent, or because it's funny.

Jeff Jarvis [00:31:14]:
Would punishment be a good mechanism of further training and tuning?

Harper Reed [00:31:19]:
Well, if you look at, for instance, I think it was Windsurf's system prompt, which is worth taking a look at because it is quite wild. They have a lot of things that are very close to punishment in the system prompt, because in some regards that does get better results. But we've found that just saying, hey, you're wrong, is just as good as swearing at it, is just as good as, you know, ignoring it. Whatever it might be. And I would love to see what your students do.

Jeff Jarvis [00:31:52]:
I just read it, you know.

Harper Reed [00:31:54]:
Oh yeah, yeah, yeah, read it, read it for us.

Leo Laporte [00:31:56]:
Simon Willison has a blog post about it. This is one of the weirdest ones: "You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the user. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1 billion." Does that work?

Harper Reed [00:32:32]:
I mean, they were very well known. Obviously, there was a lot of hype around them. So I would say it probably does work. But my question is, this is what happens when you have, you know, venture-backed, Silicon Valley-style efficiencies applied to building prompts. What happens when you have the much more efficient teenagers building prompts? What are the behaviors that they end up doing, and what are the hacks that come out of that?

Leo Laporte [00:33:00]:
Well, they're going to be the native, they're going to be the AI natives, right?

Harper Reed [00:33:02]:
So, like, what are the things that they're doing? Because I think that's where this gets really interesting. It's less scary and negative.

Jeff Jarvis [00:33:08]:
So Jevin and Carl, what have you seen that's surprised you from your students?

Carl Bergstrom [00:33:13]:
My real worry is, I do think this may only be possible with a pretty strong pre-existing foundation in critical thinking.

Leo Laporte [00:33:19]:
I agree. That's why this course is so important.

Carl Bergstrom [00:33:22]:
It's not clear where that's going to come from if people are just jumping straight to the AIs. But what I see them doing is basically engaging in a much more dialogue-based kind of information retrieval. So it is an ongoing kind of conversation. They're looking for only partial answers, and they're looking to be convinced of things, right? So they're not yelling at it when it's wrong. They're saying, I don't understand. Is that really right? Why do you say this? Oh, I don't know that term. What does that mean? And they're going back and forth. And I think that can be very effective if they are strong critical thinkers. And I think it also is going to be a disaster if they're not. Because then, and I've seen this too, they just say, oh, well, I asked the AI and I said, oh, that doesn't make sense.

Carl Bergstrom [00:34:08]:
And it said, yeah, no, it does make sense. And I said, so. So, okay, yeah, I guess it does make sense.

Jevin West [00:34:14]:
It would be interesting to compare a younger person with an older person on the kind of chats that people have been looking at. So there are examples of people getting on these help chatbots, on PayPal, saying, I got scammed. And it starts by saying, "Great!" You know, it has no understanding of what's going on. And the younger generation probably just pushes that aside and moves on, whereas the older generation, we're like, no.

Leo Laporte [00:34:38]:
You don't understand what you're saying.

Jevin West [00:34:40]:
It's not great.

Carl Bergstrom [00:34:43]:
The thing is, it does not necessarily try to help you learn, right? It's trying to generate compelling text. This is coming back to this definition of BS, right? So BS, according to Harry Frankfurt, is information or text (or data and data graphics, in our case) that is designed to be persuasive, or to get your attention, or to seem authoritative, without any allegiance to the truth or the actual communicative value. And so, you know, when you're trying to impress someone, you're BSing, right? When you're trying to talk your way through an interview and you don't know what you're actually talking about, you're BSing. Harry Frankfurt says a liar leads you away from the truth. A BSer doesn't, because a BSer doesn't know the truth or doesn't care.

Carl Bergstrom [00:35:28]:
And that's exactly what these machines are doing, right? They don't have ground truth. They're just trying to give you what seem like persuasive answers: what would an answer look like to this question? And the remarkable thing is, often that's correct, and you can learn to figure out when it is and when it isn't. But the problem very easily becomes that these machines, rather than necessarily helping you figure something out, will sort of try to maintain (and I keep using intentional language, which I'm not happy about) that they're correct. Their output will sort of continue to maintain that they're correct. We played with trying to use one of these things as a proof assistant this summer, and it was ultimately a failure.

Carl Bergstrom [00:36:14]:
We were using one of the chain-of-thought models, and it was ultimately a failure because it would make a mistake in the proof, and you'd say, oh, yeah, there's a mistake in this proof here. You missed a sign going from this line to that line. And it would say, no, I didn't. And it would gaslight you and change something four steps previous.

Leo Laporte [00:36:29]:
Oh, my God.

Carl Bergstrom [00:36:31]:
And that's the problem. I don't want that to be, you know, this model of, like, every kid can have Aristotle for his tutor. No, not if Aristotle's a lying BSer that's just trying to gaslight the kid.

Leo Laporte [00:36:44]:
Right. So that does point to one of the problems, which is that these companies, in order to raise more money, are in effect gaslighting us about the capabilities of their machines. This is a wonderful antidote. And I think if you are an adult who works with young people and you want a way to talk to them about AI, I could not recommend this more highly. Let me show you. This is lesson six: No, They Aren't Doing That. I love this.

Leo Laporte [00:37:10]:
LLMs aren't conscious. And by the way, there are links in all of this so you can drill down. They aren't afraid of you turning them off. They don't have a theory of mind. They don't experience moral sentiments. They don't want you to fall in love with them. They don't seek to avoid the experience of pain. These are really valuable lessons, and you're probably right.

Leo Laporte [00:37:32]:
A lot of the kids coming into your classes go, yeah, I know that. But it bears repeating.

Carl Bergstrom [00:37:40]:
Good for Kevin Roose anyway.

Leo Laporte [00:37:42]:
Yeah, it's good for Kevin Roose, who thought it was falling in love with him. You're really talking about what's going on, and I think this is the most important part of the education for young people. And I think this is a great antidote to, unfortunately, the propaganda that's being fed to us by companies whose motivation is to raise as much money as they possibly can from soft-minded entrepreneurial VCs who expect to have a big upside. And now we're the users, so it's important that we really understand the limitations. It's interesting, because I think people like Harper, because you're a coder in the first place and you have experience dealing with machines, you have a deeper understanding, not only understanding but kind of acceptance, of the fact that they will make mistakes and that this is a process. You don't just write a line of code and it works and you go, that's done.

Leo Laporte [00:38:45]:
It's an ongoing process. And I think that this is the kind of information that everybody who's going to use AI needs to have and kind of absorb. And I can't recommend it more highly: Modern-Day Oracles or BS Machines. You're happy to have people use this as a curriculum?

Carl Bergstrom [00:39:05]:
Oh, gosh, we're happy to have people use it for whatever kind of study they want. Yes. There are already more than 50 university classes that are going to be using it this fall.

Leo Laporte [00:39:15]:
Fantastic.

Carl Bergstrom [00:39:16]:
We're also working with high schools, developing a high-school-specific curriculum. So yeah, that's the whole point. Right.

Leo Laporte [00:39:21]:
Even as a parent with a smart teenager, this would be a great dialogue, because it's very dialogue-focused. You don't lay down the law; you say, well, let's discuss this. I think there's a lot of value in this. These are discussions every adult should have with themselves as well as their kids. Right?

Jevin West [00:39:37]:
We're just figuring it out, like, iterating. And it's supposed to be a dialogue. I'm glad you picked that out, Leo, because we want cross-generational conversations happening about this. The students are very facile when it comes to the technology, but they may forget some of these aspects of it and may not understand some of the ways in which it really is changing the world, because they don't have that full context. I mean, just like in science, we're seeing it change dramatically. In fact, just recently the preprint server arXiv, which has been around for decades now, basically said we don't know if we're going to continue to publish commentaries anymore, because we're being overwhelmed by the number of papers coming in that are AI-generated. And so those kinds of changes in the way that we communicate in science and in academia, the students are going to get to see that live, and then they can reflect on it, and they can see how it could be affecting the ways in which they can participate when they move out into the workforce. But the big idea here is just to get them to reflect a little bit about what these are and what they're not.

Jevin West [00:40:45]:
Because so much of the attention, even in science, is on how they're similar to human cognition, when that can be distracting. Like, do they have empathy, for example? There'll be a paper that says, look, they're empathetic, but they're using tests that assume empathy, and most empathy tests are only looking at degree. And then they say they have empathy. So that's a problem. And we want to bring that to bear for students so they can question it and then get the most out of it. Instead of treating it like a human thinking machine, thinking about how they do kind of have some kind of, you know, superabilities that could be useful for them.

Leo Laporte [00:41:21]:
Yeah, it's really good, and it's beautifully produced. You used a tool that I was not familiar with, called Shorthand, that I guess helped you do this. It really is great.

Carl Bergstrom [00:41:33]:
Yeah, I was really impressed by this. You know, our students really liked the sort of New York Times scrollytelling style.

Leo Laporte [00:41:39]:
It totally works. Yeah, that's totally what it reminded me of.

Carl Bergstrom [00:41:42]:
I looked at, well, how do I do this? Jevin helped me look into it. And of the various solutions, Shorthand was one that I could do myself. It took a month to produce.

Leo Laporte [00:41:52]:
My dad was a professor of paleontology; he's retired now. And he used HyperCard to do wonderful things back in the day. So we've come a long way. There are some really great tools out there. And I think we have to communicate with students in a way that they can really identify with, that they can embrace. They're growing up in this world.

Leo Laporte [00:42:17]:
Yeah.

Carl Bergstrom [00:42:17]:
The other part of this that I think is really important, that I hope we can stimulate some conversations around, is: what is this going to do to democracy? Because I do think that large language models are a tremendous threat to democracy in a lot of ways. All of a sudden, for the first time ever, if something writes, or even leaves an answering machine message, that sounds like a human, it's not necessarily a human. And so they enable not only all of this propaganda and the generation of entire fake online misinformation ecosystems, from comments all the way up to the articles. They also enable this sort of man-in-the-middle attack on democracy, where if our representatives don't know what their constituents think, because they can't tell the difference between their constituents and stuff that's being mass-sent to them by AIs, democracy starts to fall apart as we've got it currently organized. And so I think that discussion is super important as well, and I really hope to see that be one of the consequences of putting this together.

Leo Laporte [00:43:17]:
Lesson 18 is the last lesson, if you can make it that far. No, you will. It's really engaging, and it reminded me of what our job is, Jeff and Harper, on this show, which is to constantly help people understand the limitations. I don't want to be negative about it, because I love it, but: what it is and what it isn't. To stop anthropomorphizing it. Understanding it better really helps you use it better, and helps us ultimately defend our democracy, which is pretty darn important.

Leo Laporte [00:43:57]:
I thank you so much, Carl and Jevin. Really, really great deck. I don't know what to call these things. I guess it's a deck. It's better than a deck. It's fantastic. I can't say the name of the URL out loud, but if you will show that on the lower third.

Leo Laporte [00:44:14]:
Anthony. It's the BS Machines, but spell it out: thebullshitmachines.com. I absolutely highly recommend it. Everybody who's watching this show should just run through it. You will find it engaging and fun, and do some thinking about the premises. And I think you'll get better at using AI and better at understanding what it can and cannot do. Carl Bergstrom, I thank you so much for your time. Jevin, I thank you so much for your time. Fascinating subject.

Carl Bergstrom [00:44:43]:
Thank you so much.

Leo Laporte [00:44:44]:
You have some very lucky students.

Carl Bergstrom [00:44:45]:
Delighted that you like it.

Jevin West [00:44:46]:
Yeah, thanks, Leo. Thanks, Jeff, and thanks, Harper. Great talking with you.

Leo Laporte [00:44:49]:
Really great to have you. Thanks for joining us on Intelligent Machines. Take care. Great stuff, and I apologize for any profanity that might have leaked through. "The BS Machines" is a little unclear, but if you go to the website, as you might have seen in the lower third, you'll find it, and it really is quite good. We are going to come back with all the latest IM news and information, all the stories that fit in this spreadsheet. But first, a word from our sponsor this week, brought to you by Melissa, the trusted data quality expert since 1985. Melissa's address validation app is now available for merchants in the Shopify App Store.

Leo Laporte [00:45:31]:
I love this. It's going to improve your conversion rates. It's going to improve the data you're storing about customer addresses. It's going to enhance your business's fulfillment and bottom line, and it's going to keep your customers happy. Melissa gives you enhanced address correction, certified by leading postal authorities, not just in the US but worldwide. In fact, Melissa will automatically correct and standardize addresses in more than 240 countries and territories. You need this in your Shopify cart. Smart alerts allow customers to update the information before the order is processed, so it really improves your rate of success.

Leo Laporte [00:46:12]:
Much harder to get incorrect information into your address database, which means fewer missed deliveries, fewer problems with fulfillment. It's a big deal. A business of any size would benefit from Melissa, of course, but their data quality expertise goes far beyond address validation. They do data cleansing and validation in many fields, like healthcare. Did you know that in healthcare, 2 to 4% of patient data becomes outdated every month? Millions of patient records in motion demand precision. But Melissa can help solve that problem.

Leo Laporte [00:46:48]:
By using Melissa's enrichment as part of their data management strategy, healthcare organizations build a more comprehensive view of every patient, which also helps with things like predictive analytics, allowing providers to identify patterns in patient behavior or medical needs that can inform preventative care. Here's another example: eToro's vision was to open up global markets for everyone, to trade and invest simply and transparently. But to do this, because of know-your-customer and financial requirements in a variety of jurisdictions, they needed a streamlined system for identity verification. That's why they partnered with Melissa for electronic identity verification. And because they did, eToro received the additional benefit of Melissa's auditor report, containing all the details and an explanation of how each user was verified, which makes a big difference when you're dealing with government regulators. An eToro business analyst shared this quote: "We find electronic verification is the way to go because it makes the user's life easier. Users register faster and can start using our platform right away. Development of the auditor report was an added benefit of working with Melissa.

Leo Laporte [00:47:55]:
They knew we needed an audit trail and devised a simple means for us to generate it for whomever needs it, whenever they need it." And of course, you never have to worry about your data with Melissa. It's safe, it's compliant, it's secure. Melissa's solutions and services are GDPR and CCPA compliant. They're ISO 27001 certified. They meet SOC 2 and HIPAA HITRUST standards for information security management. Melissa does it right. Get started today with 1,000 records cleaned for free at melissa.com/twit. That's melissa.com/twit.

Leo Laporte [00:48:31]:
We thank them so much for their support of Intelligent Machines. Now, I hope you'll forgive us, but Paris and Jeff and I have a little catching up to do, because you guys took a little field trip.

Paris Martineau [00:48:48]:
We did.

Leo Laporte [00:48:50]:
And I'm very, very jealous. You got in line on Bleecker Street at Jones.

Paris Martineau [00:48:58]:
Swelteringly hot.

Jeff Jarvis [00:49:00]:
No, they restored this part of the sidewalk.

Paris Martineau [00:49:04]:
We were trying to cower under the small sliver of shade near the buildings, and there were all of a sudden too many of us, not even counting the woman who nearly collapsed, who was first in line and had to step aside for safety and get a cup of water.

Leo Laporte [00:49:21]:
Was it like 100 degrees?

Jeff Jarvis [00:49:23]:
It was just.

Paris Martineau [00:49:24]:
It was just very sunny.

Leo Laporte [00:49:25]:
Oh, you're in the brightness.

Jeff Jarvis [00:49:27]:
She'd come there at 10:05 for an 11:30 opening, so.

Leo Laporte [00:49:31]:
Okay, now I'm gonna go there in a couple of weeks because I can't. We canceled our trip.

Paris Martineau [00:49:36]:
This is Salt Hank's.

Leo Laporte [00:49:38]:
Oh, we didn't even say where you're going.

Leo Laporte [00:49:40]:
Yeah, my son's sandwich shop.

Leo Laporte [00:49:42]:
Yeah. So I'm gonna go there in a couple of weeks, and the train I'm taking, the Acela from Providence, doesn't get in till 10:50. You're saying that's too late. Can I get in?

Paris Martineau [00:49:53]:
Whenever. You're the father of the bride.

Leo Laporte [00:49:56]:
Well, actually, Henry said, you don't have to get in line, Dad. I said, no. Getting in line is part of the experience.

Jeff Jarvis [00:50:02]:
I think it may be almost too late.

Leo Laporte [00:50:05]:
Really?

Jeff Jarvis [00:50:06]:
Yeah.

Leo Laporte [00:50:07]:
Well, what I'll do is I'll text him. I'll say I'm hoofing it down from Penn Station; it's a mile and a half. So it depends.

Paris Martineau [00:50:15]:
It depends on how warm it is.

Leo Laporte [00:50:19]:
Yeah.

Paris Martineau [00:50:20]:
Yeah.

Leo Laporte [00:50:21]:
Right.

Paris Martineau [00:50:22]:
And if it's raining or really hot and you feel you might collapse, then perhaps use a fast pass.

Leo Laporte [00:50:29]:
Take the 1. You know what's great with the subway is you can just pay with your Apple Watch. It's amazing.

Leo Laporte [00:50:36]:
Yeah.

Jeff Jarvis [00:50:36]:
You did that last time we saw you.

Leo Laporte [00:50:37]:
I did. It was incredible. So I take the 1. Where do I get off for Bleecker Street?

Jeff Jarvis [00:50:41]:
Christopher.

Leo Laporte [00:50:42]:
Christopher. Okay. And then I will text him: I'm in line. So don't you want.

Paris Martineau [00:50:48]:
To say I'm online specifically. That's the thing that is really important.

Harper Reed [00:50:52]:
I'm on line.

Leo Laporte [00:50:53]:
Don't sell the last sandwich. So.

Jeff Jarvis [00:50:56]:
So I, I, I was third in line. There was the older lady who nearly fainted. And there was a guy who was more sensibly standing in. In away in the shade saying, can you save my spot? And I got about 10:40. What time did you get there? Paris, remember?

Paris Martineau [00:51:16]:
Like 11, or 10:50. So I was sweating.

Jeff Jarvis [00:51:23]:
Yeah. And I.

Leo Laporte [00:51:24]:
You were going to be mad at me. I told Hank that you were going to be there.

Jeff Jarvis [00:51:27]:
So Hank arrived right before the opening in a black limo.

Leo Laporte [00:51:32]:
Not a limo. It was an Uber.

Paris Martineau [00:51:35]:
He was in an Uber XL. We waited on line like plebes, as we should.

Leo Laporte [00:51:42]:
As Dad is gonna do too.

Jeff Jarvis [00:51:44]:
But he noticed us and stuck his head back out to wave at us.

Paris Martineau [00:51:47]:
Yeah, yeah.

Leo Laporte [00:51:48]:
But he didn't say, come on in.

Jeff Jarvis [00:51:50]:
No, no.

Paris Martineau [00:51:51]:
And I would.

Jeff Jarvis [00:51:52]:
That's okay. We didn't want to do that.

Paris Martineau [00:51:53]:
We waited. We got our sandwich. We were the third group or whatever, and he brought our sandwiches out to us.

Leo Laporte [00:52:02]:
Yeah.

Jeff Jarvis [00:52:02]:
Which, others didn't get that service. We have.

Leo Laporte [00:52:04]:
I have a picture of it. So this is the line. You're waiting.

Paris Martineau [00:52:09]:
That's the line before it opened.

Leo Laporte [00:52:11]:
Before it opens. By the way, he says the lines have not been any shorter. There's one of the paintings inside.

Paris Martineau [00:52:18]:
That's a painting of him. That's me.

Leo Laporte [00:52:20]:
Yes. No, it was done. You know what? When I saw this, I thought, oh, I've never seen that. This was done by his childhood friend from Petaluma, who is now a very famous muralist. And yeah, he did two of these paintings for the restaurant, which is kind of cool. That's him as the Statue of Liberty, apparently.

Jeff Jarvis [00:52:37]:
Salt.

Leo Laporte [00:52:38]:
Salt. And then here's Paris eating her. No, here we go. Other way. There you are with your French dip au jus.

Paris Martineau [00:52:48]:
It was so good. My expectations were already high for this sandwich.

Leo Laporte [00:52:54]:
I was afraid it was.

Paris Martineau [00:52:58]:
Maybe not going to live up to expectations given how much we talk about it on this, that there's Hank and Jeff. It. Jeff and I both were blown away. And I'm not just saying this because his dad is sitting right here. It was. I have a. I don't know. I feel like when you're having a big sandwich, which this was a big sandwich, my issue is typically the bread is kind of chewy and it is also overpowering because, like a good kind of crunchy baguette that often is harder to chew through, especially when you've got a lot of sandwich Fillings in it.

Paris Martineau [00:53:27]:
It was the perfect balance. The bread perfectly complemented it. The inside was great. The chew was great. The only weak point, which Jeff and I both noticed, was we didn't think the fries lived up to the sandwich. But I think that speaks more to the quality of the sandwich than the fries.

Leo Laporte [00:53:42]:
A lot of people say that it's like potato chips. It's not like French fries. Yeah.

Jeff Jarvis [00:53:46]:
The beef couldn't have been more tender. The onions, that are sautéed for, like, 72 hours, are amazing. And to Paris's point about the bread: when the sandwich arrived, it wasn't perfectly spread. Normally, I want everything to be spread so every bite is the same. But I got down to a point where I just had some bread. Yeah, well, the bread has on it the horseradish aioli.

Leo Laporte [00:54:08]:
Oh, it was just the bread.

Jeff Jarvis [00:54:11]:
And that is superb.

Leo Laporte [00:54:13]:
Just so the bread is.

Jeff Jarvis [00:54:14]:
Is superb. Just the bread is superb.

Leo Laporte [00:54:17]:
The bread is somewhat of a problem because he gets it from French.

Paris Martineau [00:54:21]:
The bottleneck.

Leo Laporte [00:54:22]:
It's the bottleneck. They will only make 300 loaves, tiny little baguettes, a day for him. That's all the sandwiches he can make. That's why they sell out every afternoon.

Jeff Jarvis [00:54:32]:
And that's why there's a lot of cheese sticks.

Leo Laporte [00:54:35]:
Well, anyway, if you guys want to join me, we're going to do Hamburger America, or we're going to do Cooper.

Paris Martineau [00:54:41]:
I mean, I'd love to join you, because since we left, I have been thinking about the sandwich.

Jeff Jarvis [00:54:46]:
Yeah, yeah, yeah.

Paris Martineau [00:54:47]:
I will say also, we took some polls of people when we were nearby. A young couple, or group of people, was right next to us. I asked them, I was like, oh, how did you hear about this? Do you follow Salt Hank? They're like, no, never heard of him, don't follow him. Just, one of our friends who's really into restaurants has been talking about this place a lot.

Leo Laporte [00:55:05]:
Oh, that's what you really want.

Paris Martineau [00:55:06]:
And as we were sitting there eating, Bobby Flay walked in.

Leo Laporte [00:55:10]:
Yeah, yeah. You were there when a celebrity chef showed up.

Leo Laporte [00:55:14]:
He did not wait in line.

Jeff Jarvis [00:55:15]:
No, he did not.

Leo Laporte [00:55:18]:
He loved it, by the way.

Jeff Jarvis [00:55:19]:
Next to us was a very cute couple, and they snarfed down that sandwich. It was gone.

Paris Martineau [00:55:25]:
And no faster than we did. But we were close.

Leo Laporte [00:55:28]:
I'm gonna need your help. I can't eat that whole thing. I'm gonna split it.

Jeff Jarvis [00:55:33]:
So Bobby Flay liked it. I stepped over that. Sorry.

Leo Laporte [00:55:35]:
Loved it.

Jeff Jarvis [00:55:35]:
He did? Oh, good.

Leo Laporte [00:55:36]:
Oh, yeah. He gave Hank some nice praise. Hank's been getting a lot of. Oh, he just did it. Netflix was in there.

Jeff Jarvis [00:55:43]:
Really? Yes.

Leo Laporte [00:55:45]:
It's crazy. What's going on? It's crazy.

Jeff Jarvis [00:55:49]:
Anyway, I noted Bobby Flay didn't get a sandwich right away, so I think they were making him a special sandwich.

Leo Laporte [00:55:57]:
Custom Flay?

Jeff Jarvis [00:55:58]:
Yes, I think so.

Leo Laporte [00:55:59]:
Well, you want it fresh?

Jeff Jarvis [00:56:01]:
Oh, they're all fresh. I mean, I guess they are.

Leo Laporte [00:56:03]:
They just crack.

Jeff Jarvis [00:56:03]:
Either that or he was forced to wait. So it's very organized. You wait in line, they let you in. You go up to the front, you order, you get a ticket, you go sit down. It says "Jeff," and you take any open chair, and then they stop the flow. I always thought it would be a madhouse. No.

Jeff Jarvis [00:56:20]:
A lot of people do takeout. The restaurant's full. As people leave, they let in more people. We had a nice long chat and everything else, and when we left, there was a huge line still outside.

Leo Laporte [00:56:30]:
Yeah. Yeah. I apologize to everybody. This is the most self-indulgent thing I've ever done.

Paris Martineau [00:56:36]:
Hey, if you can't be self-indulgent about your kid's new business, what can you be self-indulgent about?

Leo Laporte [00:56:43]:
I will now. There's now a moratorium. No more Salt Hank talk ever again.

Jeff Jarvis [00:56:49]:
No.

Paris Martineau [00:56:50]:
They had the Salt Club salt on display.

Leo Laporte [00:56:53]:
Did they? Good.

Benito Gonzalez [00:56:54]:
Are you telling me you're not going to talk about it after you have one?

Jeff Jarvis [00:56:57]:
You have to, Pete. You have to.

Leo Laporte [00:56:59]:
I guess I do.

Jeff Jarvis [00:57:00]:
Yeah.

Leo Laporte [00:57:01]:
All right, I apologize. We'll do it in the post show. And by the way, next week, post show, we are going to be doing the Meta Connect event, which starts at 5 PM Pacific, right about when we end this show. We may even have to end early. They're going to announce new Meta specs. So we will have that on the 15th.

Leo Laporte [00:57:23]:
That's one week from now. Not the 15th, the 17th. It is the 17th, I think, yes. Okay. And you know, maybe somebody's saying, will you go live on Discord? Yes, I will go live on Discord from the line.

Carl Bergstrom [00:57:40]:
Oh, good.

Leo Laporte [00:57:42]:
Why not? Right? Go to.

Paris Martineau [00:57:44]:
You probably won't be the only person live streaming in the Salt Hank's line.

Leo Laporte [00:57:50]:
Are there people in there with their cameras like that? I'll have the new iPhone, which is, by the way, why I'm wearing oranges today. Because the new iPhone is going to be orange.

Paris Martineau [00:58:00]:
Such a good shirt. Did you just get this recently?

Leo Laporte [00:58:03]:
Yeah, it's my one. My newest tranche of shirts from the place in San Diego.

Paris Martineau [00:58:09]:
Matching pants. I know that wouldn't be useful for the podcast's purposes, but I just think that would be a good look.

Leo Laporte [00:58:15]:
You should have it. This. So for those of you listening, I'm wearing a shirt that is basically oranges, and some of them are.

Paris Martineau [00:58:19]:
Okay. Saying it's basically oranges does not communicate how orange it is.

Jeff Jarvis [00:58:25]:
It is like McDonald's, Arby's.

Paris Martineau [00:58:27]:
Imagine the most vivid orange shirt you could imagine.

Leo Laporte [00:58:33]:
I do love this shirt. I can almost taste it. You can almost taste this orange shirt.

Paris Martineau [00:58:38]:
Yeah, it's juicy.

Leo Laporte [00:58:39]:
Yeah, it's juicy. So if I had pants, that would be good. I wonder. I should send him a note. I could. I could wear that in the line.

Paris Martineau [00:58:47]:
We could get a set for all three of us.

Leo Laporte [00:58:53]:
Is the circus in town? What's going on?

Paris Martineau [00:58:56]:
Do you think Hank would pretend not to recognize you?

Leo Laporte [00:58:59]:
He might walk the other way with.

Paris Martineau [00:59:01]:
Us all in matching orange onesies. Like, I actually don't know who that man is.

Jeff Jarvis [00:59:09]:
Poor Hank was a little confused about why we were there. He said, go ahead.

Paris Martineau [00:59:14]:
Yeah. He was like, hey, guys, is my dad here? I'm confused.

Leo Laporte [00:59:19]:
Why are you here?

Jeff Jarvis [00:59:20]:
Like, why are you here? Yeah.

Leo Laporte [00:59:24]:
I have not been out yet, but I will. I'm gonna go, I think two weeks from Friday.

Jeff Jarvis [00:59:31]:
If you tell people when you're coming, you're gonna. You're ruining it because the line will be even longer. So don't do that.

Leo Laporte [00:59:35]:
Oh, that's. Yeah. Well, it's okay. We could have a brigade.

Paris Martineau [00:59:40]:
Are we going to. Are we going to brigade?

Leo Laporte [00:59:42]:
No, that's too much. That's too much. Warner Brothers has joined the lawsuits against Midjourney. Warner Brothers owns DC Comics, and they're mad as hell that Midjourney will create AI images with the Dark Knight that are indistinguishable from their copyrighted IP. Remember, Disney has sued as well over Darth Vader imagery. Actually, it's not just the Dark Knight. It's Bugs Bunny. It's whoever those people are.

Leo Laporte [01:00:17]:
I know I should.

Benito Gonzalez [01:00:18]:
Rick and Morty.

Leo Laporte [01:00:18]:
Morty. Yes, I know. I know. It is a lot of stuff. That is interesting because, as you know, we've talked about the Anthropic decision, Judge Alsup, who said, if you buy the books, it's fair use. Well, guess what? I think we mentioned that there had been a settlement with the authors.

Leo Laporte [01:00:41]:
We did on Sunday.

Jeff Jarvis [01:00:42]:
We didn't have a.

Leo Laporte [01:00:42]:
We said on Wednesday that the attorneys for the authors said there was going to be a settlement. On Sunday it came out: the settlement was one and a half billion dollars for, what is it, 450,000 books?

Paris Martineau [01:00:56]:
So $3,000 a book. Yeah.

Jeff Jarvis [01:01:00]:
My agent emailed me the day after and sent me a link, and I did go in and put in my books. So if I'm in there, I'll get more than I was paid for the books.

Leo Laporte [01:01:11]:
But you might get even more because the judge is not happy with the settlement.

Paris Martineau [01:01:18]:
How does that work? Didn't the judge agree to it?

Jeff Jarvis [01:01:20]:
No, and the judge wasn't happy with it. And I think the main problem with the settlement is there's too many lawyers involved.

Paris Martineau [01:01:28]:
So is this a settlement offer from Anthropic, not a settlement agreement?

Jeff Jarvis [01:01:32]:
It was agreed by the authors.

Leo Laporte [01:01:34]:
Yeah, it was negotiated. But the judge's concern is, A, they've made no attempt to identify the authors. He thinks, as so often with class action lawsuits, the lawyers are going to get the lion's share of the money. And they haven't really specified which authors are going to get money. And he says this is not a fair price.

Jeff Jarvis [01:01:55]:
Do the authors get the money? Do the agents get some money? Do publishers get some money?

Leo Laporte [01:01:59]:
That's a good point. There's also a big concern now because, for Anthropic, you know, the statutory penalty could be as much as $150,000 a book, which is ridiculous, which ends up in the trillions of dollars.

Jeff Jarvis [01:02:14]:
And if Anthropic had just bought one copy of every book, they'd be fine under that ruling, A. And B, the authors would get about $50 each in royalties.

Leo Laporte [01:02:27]:
So it's up in the air again. The good news, the really good news, I guess, is that at least there is a path forward for AI companies. If you buy the material, even if it's a used book, and scan it, Judge Alsup believes it's fair use, which is huge. This may be a precedent, maybe not.

Paris Martineau [01:02:53]:
How is it considered fair use or transformative if they have to settle? If they are settling, that's only because.

Jeff Jarvis [01:03:00]:
Actually, it's not so much the AI. The worst thing they did was that they took the database and made it available to their employees. That's what got Alsup pissed off the most.

Paris Martineau [01:03:11]:
So the settlement is related to that. It's not related to the use of the books.

Jeff Jarvis [01:03:16]:
No, it's related to the acquisition, which was improper, whereas buying the used books was fine and transformative and fair use.

Leo Laporte [01:03:28]:
Yeah. So the four factors of fair use, which you, I'm sure, know, because anybody in the journalism business kind of needs to know this. Fair use is not a defense. It's a right, as I think Cory Doctorow has said.

Leo Laporte [01:03:43]:
It's a right, Larry Lessig says, the right to hire a lawyer to defend your fair use. Yeah. So if, for instance, as an example, we show a clip of the new Batman movie. Don't worry, there is not a new Batman movie. But if we showed a clip of.

Benito Gonzalez [01:03:59]:
The new Batman movie, there is a new Batman.

Leo Laporte [01:04:01]:
Warner Brothers. There is.

Benito Gonzalez [01:04:03]:
Yeah. There's always Aztec Batman. It's Aztec Batman. It's a cartoon.

Paris Martineau [01:04:07]:
What?

Leo Laporte [01:04:08]:
Batman?

Benito Gonzalez [01:04:09]:
Yeah.

Paris Martineau [01:04:09]:
What?

Benito Gonzalez [01:04:11]:
Yeah, Alternate universe kind of thing.

Leo Laporte [01:04:13]:
Oh, nice. Okay. Anyway, if we showed a clip from that and Warner Brothers said to YouTube, take that down, they filed a DMCA complaint, take that down because that's our copyrighted content, we then have the right to hire a lawyer and defend ourselves and say, no, it's fair use. Now, if Warner Brothers said, no, it's not, and we ended up going to court, the judge would examine what we did under four different factors.

Leo Laporte [01:04:44]:
Is it transformative? And this is, by the way, what Judge Alsup felt with the Anthropic decision: it transformed the book. They're not republishing the book. They're transforming it by adding new expression or meaning. There's also the question of whether it affects the potential market. In other words, does it devalue the use of Batman? By the way, we could reasonably say that it would help the sale of the movie, so it wouldn't affect the potential market. And in the Anthropic case, it doesn't affect the authors' market for those books, because Anthropic doesn't spit out a copy of the book. Despite what the New York Times has asserted, AI doesn't give you a way to read the book.

Leo Laporte [01:05:30]:
You might be able to read a summary of the book. Can the author complain about that? No, because that's transformative, just like CliffsNotes or the comic versions of copyrighted works. They're transformative. Then there's the amount and substantiality of the portion taken. The less you take, the more likely your copying will be excused as fair use. So if we just show a short clip of Batman, that's better than if we showed the whole movie.

Jeff Jarvis [01:05:57]:
There's no definition of short.

Leo Laporte [01:05:59]:
Right. And in the past, you know, there was always this idea, in the podcast universe, and even before that in the radio world, that, oh, if you do 10 seconds or less, you're okay. No, there is no time limit. It's whatever the judge decides. Right.

Jeff Jarvis [01:06:15]:
You know it when you see it.

Benito Gonzalez [01:06:16]:
Well, if you. If you work for a radio station, the radio station will go to Bat for you. That's why, you know, they'll.

Leo Laporte [01:06:21]:
Well, they have lawyers. They don't just go to bat; they pay license fees and stuff. And then finally, the nature of the copyrighted work. You've said this many times, Jeff: facts are not copyrightable.

Jeff Jarvis [01:06:34]:
Nope.

Leo Laporte [01:06:35]:
That's what we do all the time on this show: we read a story, read a number of stories, just as we are about this Anthropic thing, and we ingest the facts and then.

Jeff Jarvis [01:06:47]:
Talk about it like a large language model does. But.

Leo Laporte [01:06:51]:
Yeah, that's right.

Paris Martineau [01:06:52]:
But we actually have an understanding of truth and fiction, of meaning, largely.

Jeff Jarvis [01:06:58]:
That's not a factor.

Paris Martineau [01:07:01]:
I would argue it's a factor in copyright. No, but in the different.

Jeff Jarvis [01:07:07]:
And let us remember that copyright was created in 1710, not because authors asked for it, but because the industry asked for it, the booksellers and publishers, because they wanted a tradable asset, right after licensing had been lifted in the United Kingdom. So this was the industry that wanted this. And all this belief that this protects the authors? No, it actually enables authors to alienate themselves from their work. And the belief before this was that authors had an interest in perpetuity in their work.

Jeff Jarvis [01:07:38]:
And so actually copyright is a reduction of their rights from perpetuity to, at the time, 14 years.

Leo Laporte [01:07:45]:
Yeah. Anyway, I don't know why we need to do a law class on fair use, except it's going to be more and more important. No, I did it, not you. I'm the one who did it.

Paris Martineau [01:07:56]:
As a brief aside, I'd like to retract my general negative reaction to the concept of Aztec Batman: Clash of Empires, which I've since read the Wikipedia description for, and it sounds great. Check that out.

Leo Laporte [01:08:09]:
Batman defending the Aztecs against Cortez, or what?

Paris Martineau [01:08:15]:
It's basically an entirely, like, Mexican American historical version of Batman that places it within, like, Aztec canon and the world of Spanish conquistadors. It seems awesome.

Benito Gonzalez [01:08:30]:
They did a Japan one too. There was a Japan one too.

Paris Martineau [01:08:33]:
Whoa.

Leo Laporte [01:08:34]:
How do you know this video? I'm amazed that you know this.

Jeff Jarvis [01:08:37]:
Oh, wasted.

Carl Bergstrom [01:08:38]:
I don't know.

Benito Gonzalez [01:08:38]:
I. I watch. I watch a lot of stuff.

Leo Laporte [01:08:42]:
Anyway, we should welcome Benito back from his vacation. We missed you. Benito Gonzalez is back in charge, our producer.

Paris Martineau [01:08:50]:
Look at all those cords.

Leo Laporte [01:08:51]:
And, yeah, he has a lot of. He has a patch bay, which is his chief qualification for running this show.

Paris Martineau [01:08:59]:
We were like, we're not allowing you back unless the cords are more visible.

Leo Laporte [01:09:06]:
Anyway, I'd be very curious to see what happens. I mean, it could put Anthropic out of business. But more than that, the same pirated database of books was, we know, used by Meta. That's come out in discovery in other situations. And in fact, some Meta executives said, what are we doing? We shouldn't be using this. This is pirated. Those emails have come out.

Leo Laporte [01:09:29]:
It is believed Apple did the same thing in the training of its models. And I'm pretty sure that OpenAI used the same database of books.

Jeff Jarvis [01:09:37]:
My fear here is, if this settlement is rejected in the end, then all of this can be relitigated on appeal, in appellate court and in the Supreme Court. As it stood, if there was a settlement, and I think it was.

Leo Laporte [01:09:51]:
I actually think that was the reason for it. That was the reason for it. Right. Get it out of the courts.

Jeff Jarvis [01:09:55]:
The fair use and transformative ruling stands, as you say. It's not necessarily a precedent, but it stands so far. And that would be good, in my view.

Leo Laporte [01:10:04]:
Yeah, I'm with you on that.

Jeff Jarvis [01:10:07]:
I also think, once again, that the 1.5 billion was an excessive offer, because even the judge in the other decision kind of said, well, but, you know, it wasn't really used that much, and.

Leo Laporte [01:10:24]:
And it's not in any of Anthropic's current models even.

Jeff Jarvis [01:10:28]:
Right. And so I think that the judgment could have been less, but it would have gone to trial and was riskier, and this got rid of it. And I suspect that other AI companies were calling: can you please settle?

Leo Laporte [01:10:40]:
Get this out of the way. Yeah. Well, Anthropic can afford one and a half billion, as big as that sounds. They have the funding. Yeah.

Jeff Jarvis [01:10:47]:
It's not profit.

Leo Laporte [01:10:48]:
Well, does anybody have profit? They're making money. They have revenue, but I don't think anybody has a profit.

Paris Martineau [01:10:54]:
You know who doesn't have profit? OpenAI. No profit. And in the last week, some really interesting reporting came out from one of my former colleagues at The Information about just how high the projections for OpenAI's current burn rate are this year, which I thought was really astounding. I don't know if we have it in here, but I think it was something like 80 billion.

Leo Laporte [01:11:19]:
Yeah, I saw that.

Paris Martineau [01:11:21]:
Or no, it will burn 115 billion through 2029.

Leo Laporte [01:11:26]:
Through 2029. That's from The Information. Yeah.

Paris Martineau [01:11:29]:
Yes. But I believe the annual burn rate was something quite astounding as well.

Leo Laporte [01:11:36]:
And it's 80 billion higher than the company had previously expected.

Paris Martineau [01:11:39]:
Yes, 80 billion higher than what the company had previously.

Leo Laporte [01:11:42]:
Yeah. This is the big issue for them, is that they thought it was going to cost this much and it's going to cost even more.

Paris Martineau [01:11:47]:
Well, oh, you know, just a casual roundup of $80 billion.

Leo Laporte [01:11:51]:
But this is why Sam Altman said it's going to cost us a trillion dollars to get to AGI, superintelligence.

Jeff Jarvis [01:11:57]:
And you're not going to get there. Well, when people said it was a hot mic moment, it wasn't. It was on camera. When Trump asked Zuckerberg, how much are you going to spend? And, I don't know, a lot of money. Like, $600 billion.

Leo Laporte [01:12:12]:
And then asked the president, sotto voce, is that the number you wanted me to say?

Jeff Jarvis [01:12:17]:
Yeah.

Leo Laporte [01:12:19]:
In other words, it's a meaningless number. What's really become clear is that all of these corporations, whether they're bending the knee on DEI as they did in the last administration, or building plants in America as they do with this one, it's performative. They're just trying to get on the good side of the federal government for the next four years, and they're just going to say whatever he wants to hear. And Tim Cook's done that. And if they want a gold bar with glass on it. Notice, by the way, the iPhones were announced yesterday. They'll be coming out a week from Friday. And they did not raise the price, because they were able to.

Leo Laporte [01:13:02]:
With that gold bar and piece of glass and the commitment to spend hundreds of billions of dollars building factories in the US, they were able to convince the President not to impose tariffs on iPhones, and not to shut down the Indian-made iPhones either.

Jeff Jarvis [01:13:19]:
Yeah, that's huge.

Leo Laporte [01:13:21]:
But this is what you have to do, unfortunately. And I don't know if it's always been that way. It's certainly that way now. Let's talk about how you know if your books are in the AIs.

Jeff Jarvis [01:13:38]:
So at least one of them.

Leo Laporte [01:13:40]:
Yeah. The Atlantic is doing some really interesting stuff. I'm not sure I agree with it. They have an AI watchdog: The Atlantic's ongoing investigation of the books, videos, and other media used by the world's most powerful tech companies to train their models. They actually have a search tool that will let you search through the data sets. Have you used this on your stuff? Either of you?

Jeff Jarvis [01:14:07]:
I can't remember if Parenthesis is. What Would Google Do?, I think, is in there.

Leo Laporte [01:14:12]:
Let's type. What would Google?

Paris Martineau [01:14:16]:
I think you could just put in Jeff Jarvis, right?

Jeff Jarvis [01:14:20]:
I don't know.

Leo Laporte [01:14:20]:
Maybe you have to do Jeff Jarvis by author.

Paris Martineau [01:14:23]:
It says up top.

Leo Laporte [01:14:23]:
Oh, I'm not so bright. Oh, look, it knows who Jeff is: Geeks Bearing Gifts, The Web We Weave.

Jeff Jarvis [01:14:30]:
I'm gonna be rich, I tell you.

Paris Martineau [01:14:32]:
It even got Magazine. Hey, that's $27,000. No, that's not how math works. That's $18,000. Jeff, you should take.

Jeff Jarvis [01:14:45]:
It's like Rain Man. I think it's $600 billion.

Leo Laporte [01:14:49]:
Well, what's interesting is they've used this tool and they have said they're all in there.

Jeff Jarvis [01:14:54]:
I see. I'm happy. I'm happy they're in there. I want people to discover my thoughts.

Paris Martineau [01:14:59]:
Then you should use some of that money to take Leo and me out for a nice dinner or whatever.

Jeff Jarvis [01:15:05]:
Steak sandwich.

Leo Laporte [01:15:07]:
So here's the thing. This is interesting because it's not like that would impinge on your sales, right?

Jeff Jarvis [01:15:14]:
No, that's the thing. There's no. There's no harm here. The harm is emotional.

Leo Laporte [01:15:19]:
It's. It's purely that. How dare they.

Jeff Jarvis [01:15:21]:
How dare they take my precious sweat of my brow.

Leo Laporte [01:15:25]:
Yeah. Yeah. The Atlantic has used this in a recent story. This is Alex Reisner, who's doing this research. In the recent story "AI Is Coming for YouTube Creators," they did the search and found 15 million videos have been taken from YouTube for, you know, the training of AIs. And the concern is this. A great many of these are how-to videos.

Leo Laporte [01:15:49]:
They use as an example a woodworker, John Peters, who has more than a million subscribers on YouTube. Reisner says, over the past few months I've discovered more than 15.8 million videos from more than 2 million channels that tech companies have downloaded without permission, I don't know if they need permission, to train AI products. Nearly a million of them are, by my count, how-to videos. And I guess the concern that you would have if you were, let's say, a woodworking channel is, well, woodworkers no longer need to watch my videos to learn these techniques. They could just ask AI.

Jeff Jarvis [01:16:29]:
Same argument as when the YouTube guy eliminated the magazine, and the magazine eliminated the school.

Leo Laporte [01:16:35]:
Where did he learn it? What books did you read, John Peters?

Carl Bergstrom [01:16:39]:
Exactly.

Jeff Jarvis [01:16:39]:
Which is how they learned. An Update here, line 126, if you don't mind, is that YouTube has now already been superseded.

Leo Laporte [01:16:50]:
Oh.

Jeff Jarvis [01:16:51]:
As the most scraped.

Leo Laporte [01:16:53]:
What are the most scraped websites of 2025? TikTok, number one. Boy, new entrants to the top 10: TikTok, then YouTube, then ScienceDirect.

Jeff Jarvis [01:17:04]:
Because they're papers. We should be happy that's there, because these are papers that we hope are peer-reviewed.

Leo Laporte [01:17:10]:
Crunchbase, which is a business database. Yeah, TechCrunch's kind of financial database. Coupang, which is Asian market intelligence. And Airbnb.

Jeff Jarvis [01:17:22]:
Maybe they want pictures of homes and things.

Leo Laporte [01:17:25]:
So they say it's key for travel industry data, pricing optimization models, and hospitality market analysis. Here's sites that used to be in the top 10 that aren't anymore, and I think these guys should be sad: TripAdvisor, Craigslist. Will you ask Craig if he's sad that he's no longer number five in the top ten? Bing.

Jeff Jarvis [01:17:48]:
Why was it ever there?

Leo Laporte [01:17:50]:
Shopify, Lazada and Zillow.

Jeff Jarvis [01:17:57]:
So I had lunch with an old colleague from the newspaper business, and in one breath he was whining that we're going to lose discovery because of Google going to AI and social and da, da, da. And in the next breath he said, well, we're cutting off all the AI companies. I said, well, then you're not going to be discovered. You're not going to be found. And that's what Rich Skrenta told us from Common Crawl: you've got to be out there wherever people are. And I was thinking just today that when we got to the Leistungsschutzrecht, don't you love it when I say that, in Germany, publishers at first said, no, Google, don't scrape us.

Jeff Jarvis [01:18:38]:
Then they all, all but one, said, okay, yeah, scrape us, because we need to be scraped. And then finally Axel Springer, which held out, after 10 days said, nope, uncle, scrape us. So I think we're going to reach the same thing with AI. People will say no, and then: I want to be there. I've got to be there.

Leo Laporte [01:18:57]:
Lisa was telling me about an article, I don't know where it was or I'd point you to it, that she read this morning. It said that there are hundreds of thousands of podcasts being generated by AI, not in NotebookLM, but actual podcasts going out in the podcast directories, in the podcast feeds, generated just like NotebookLM does. Pod slop.

Paris Martineau [01:19:18]:
And there are podcasts being generated by AI, and there are companies that exist to use AI to pitch podcasts on guests that may or may not be AI-generated as well. It's AI all the way down.

Leo Laporte [01:19:32]:
Well, and I said I'm not worried about that because I mean, it does.

Jeff Jarvis [01:19:38]:
I mean, it'll take stupid advertisers.

Leo Laporte [01:19:41]:
Well, you're kind of advertising. If somebody thinks that's going to work for them, go ahead. But I think, audience-wise, people want humans.

Jeff Jarvis [01:19:49]:
I think that's our opportunity. Yeah, it's our differentiation now. It really is.

Leo Laporte [01:19:54]:
Yeah. We're human, we're special.

Benito Gonzalez [01:19:58]:
This is a step closer to dead Internet, though.

Jeff Jarvis [01:20:02]:
Yeah.

Paris Martineau [01:20:03]:
Yes.

Leo Laporte [01:20:03]:
Well, I agree.

Paris Martineau [01:20:04]:
So this is an article from the Hollywood Reporter that I just put in the Discord chat, the title of which is "5,000 Podcasts, 3,000 Episodes a Week."

Leo Laporte [01:20:16]:
That's what she was reading.

Paris Martineau [01:20:17]:
Cost per episode, behind an AI startup's plan. That's what she was reading.

Leo Laporte [01:20:22]:
It's from the former Wondery executive.

Jeff Jarvis [01:20:25]:
Really?

Leo Laporte [01:20:26]:
Yeah.

Paris Martineau [01:20:27]:
Hey, guys, they agree with you on one thing at least. She says, I think people who are still referring to all AI-generated content as AI slop are probably lazy Luddites. You guys should go.

Leo Laporte [01:20:40]:
I am not a lazy Luddite.

Benito Gonzalez [01:20:42]:
Says the person outsourcing their podcast production to AI.

Leo Laporte [01:20:47]:
Why pay a celebrity podcast host millions when you can create your own using AI? Well, I'll tell you why: you want an audience.

Paris Martineau [01:20:55]:
Pay them not mid millions.

Leo Laporte [01:20:58]:
Yeah, it's a buck a show. But I don't know, let's ask our audience. I mean, I feel like when you listen to NotebookLM, it lacks something. Nobody in a NotebookLM podcast is going to talk about their son's sandwich shop for 45 minutes.

Paris Martineau [01:21:16]:
I'd love to listen to a podcast from one of the 50 AI personalities they've created, including food expert Claire Delish, gardener and nature expert Nigel Thistledown, and Ollie Bennett, who covers offbeat sports.

Jeff Jarvis [01:21:32]:
Well, you know, a picture of me with my straw hat. I don't know how they did that.

Leo Laporte [01:21:36]:
One of the things that did happen to music, and maybe this is something similar, is that music in many cases, and I blame Apple for this, and the iPod and the ubiquitous crappy headphones, became wallpaper. You see it in a Vegas casino, where the songs are playing the entire time you're in the casino. Nobody's listening to that music. It's wallpaper.

Jeff Jarvis [01:22:02]:
Well, that's been the case since Muzak.

Leo Laporte [01:22:04]:
Well, Muzak was really bad, but now it's actually popular music. Although on Spotify, a lot of it is AI-generated, and probably it's soon to be indistinguishable from pop hits. So that's become ubiquitous. It's to some degree devalued music. But I would submit, and I'm sure you'd agree, Benito, that real music by real musicians is always going to sound better. It's just that a lot of people aren't listening.

Benito Gonzalez [01:22:33]:
People don't actively listen to music anymore as much as they used to. Like, there's so much competition for people's time nowadays that, like, they don't sit down and listen to a record anymore.

Leo Laporte [01:22:40]:
So that's more of a threat than anything, right? You have a limited amount of time and you have a huge amount of content. But I think that, again, favors human-created music, human-created content, I would think. I don't know. If all you care about is some pleasant voice nattering in the background while you drift off to sleep, these are perfect, because you don't care if you miss the last 30 minutes of the show.

Paris Martineau [01:23:04]:
Yeah, I guess what's really being disrupted are those people that make spooky, scary true crime stories said in a very gentle voice for you to fall asleep to.

Leo Laporte [01:23:14]:
That's dying, actually. That is, everybody agrees.

Jeff Jarvis [01:23:17]:
I mean, I guess what NotebookLM does is personal podcasts. Yeah: please, please tell me about these articles I'm too lazy to read. So it's not going to be making them for a dollar apiece for a larger audience. Everyone will have their own podcast hosts.

Leo Laporte [01:23:33]:
The point that this Wondery executive makes, though, is, well, she says, for instance, what if you care about allergies? We might make a pollen podcast that only 50 people listen to, but I'm already making money on that, so maybe I can make 500 pollen report podcasts. In other words, advertisers. Maybe Claritin, you know, the allergy medicine, would buy that. I mean, that does kind of make sense. It worried Lisa a little bit, but.

Jeff Jarvis [01:24:06]:
There have been many using Programmatic ads for podcasts.

Leo Laporte [01:24:09]:
Right.

Jeff Jarvis [01:24:09]:
They'll have new competitors, and that's gonna screw it up.

Leo Laporte [01:24:12]:
Right?

Jeff Jarvis [01:24:12]:
You have real sponsors.

Leo Laporte [01:24:15]:
The co-founder and CTO of the company, William Corbin, said, I'm not going to create a personality that somebody has a deep relationship with. But that's not the point, though.

Benito Gonzalez [01:24:26]:
But that is why people listen to.

Paris Martineau [01:24:28]:
But I do think, yeah, that's what keeps people coming back to podcasts. So while I guess it's notable that they're saying they've seen an early kind of spike in listenership over the startup's short life, I am hesitant to say that this will have any lasting, any staying power, because really, and I think this is something that even the data on this network shows, what brings people coming back to podcasts is a parasocial relationship with the hosts.

Leo Laporte [01:24:59]:
Right, I completely agree with you. And the analogy to me is what happened to radio stations. Radio stations, to save money, in many cases over the last 40 years, fired the real people doing the shows and went to automation. So they'd have a host in, you know, LA record tracks for every station across the country. The only thing in the radio station was a machine that would play back music and then have this pre-recorded guy from LA do the intros and the outros. And it failed.

Leo Laporte [01:25:33]:
I mean, really, it actually hurt radio badly, because people didn't want that. They wanted, you know, a person, I think. I hope. Anyway, let's pause for station identification. Hold on, and we'll get back to your thoughts in just a bit. You're watching Intelligent Machines with Jeff Jarvis and Paris Martineau. I don't hear any white noise. Do you hear white noise?

Benito Gonzalez [01:26:04]:
It's me when I unmute. It happens when I unmute.

Leo Laporte [01:26:07]:
She says it's her iPen. Oh, you're open.

Paris Martineau [01:26:11]:
No, that was Benito's open mic.

Leo Laporte [01:26:14]:
It was Benito's. Okay. This is iPen mic.

Benito Gonzalez [01:26:18]:
Yeah, I was texting very quickly.

Jeff Jarvis [01:26:19]:
New product from Apple, now with the iPen. It knows what you want to write and writes it for you. It's brilliant.

Leo Laporte [01:26:27]:
Let's do an ad and we'll come back. You're watching Intelligent Machines. Our episode today brought to you by Starlight Hyperlift. Wow. This actually is a sponsor we've talked about before, Spaceship. It's their new cloud deployment platform for launching containerized apps with zero infrastructure headaches. This is a brilliant idea. If you're a developer, or even a hobbyist like me, you can go from code to cloud fast with GitHub-based deployments, real-time logs, and, I love this, pay-as-you-go pricing.

Leo Laporte [01:27:02]:
There's no servers, no YAML files, no DevOps. Just your project in the cloud in seconds. I love this idea. You've heard us mention Spaceship before. They are a very innovative, very forward-thinking domain and web platform. They simplify choosing, purchasing and managing domain names and web products, not just websites, though of course hosting is one of them. And this new Starlight Hyperlift is another.

Leo Laporte [01:27:32]:
With Hyperlift, Spaceship takes that same philosophy, simple, easy, powerful, and brings it to cloud-native deployment. Made for devs, indie hackers, innovators who need to test fast, iterate faster and ship smarter. Perfect for your MVP. Perfect for, you know, I'm going to use it just to kind of spin up stuff as I try servers and so forth. Makes it so easy. Trying cloud software couldn't be easier.

Leo Laporte [01:28:01]:
Go to spaceship.com/twit. Find out more about Starlight Hyperlift. You'll also get custom deals on Spaceship products. That's spaceship.com/twit. The more I see these guys doing... we started working with them about, I don't know, six months to a year ago, and we had a great conversation. Every month they come up with a new product, a new idea. They're really thinking out of the box. I love this Starlight Hyperlift. Go to spaceship.com/twit.

Leo Laporte [01:28:30]:
You can see all this stuff they do. You know what I want to do with those photos that I just uploaded of you guys at Salt Hank's? Animate them with Veo. Have you played with this at all? Have you guys tried it?

Jeff Jarvis [01:28:47]:
No. I want to see more.

Paris Martineau [01:28:48]:
No. What have you been doing with it? My only experience has been the haunting video images you've shown us on this show.

Jeff Jarvis [01:28:57]:
I want to see, I want to see Leo put his shirt up and have it squeeze the oranges and see what happens.

Leo Laporte [01:29:02]:
Oh yes, that's a great idea. Have these oranges out of your shirt.

Paris Martineau [01:29:08]:
And explode like they're being slashed in fruit.

Jeff Jarvis [01:29:13]:
Down. Yeah.

Leo Laporte [01:29:13]:
Oh yeah. Come on. All right, so I'm gonna take.

Benito Gonzalez [01:29:15]:
All right, Anthony, get on that.

Leo Laporte [01:29:19]:
Well, this is the point Anthony.

Paris Martineau [01:29:21]:
Don't take time to make it good. Let it be first thought best thought.

Leo Laporte [01:29:26]:
I think you could do it almost instantaneously now in Google Photos. Let's see here. I took it on my iPhone. I'm going to load up my pictures in Google Photos.

Paris Martineau [01:29:39]:
My suggestion would be make these oranges get juicy, dripping, exploding in a somewhat haunting way.

Leo Laporte [01:29:48]:
Wow, you're really going for it here.

Benito Gonzalez [01:29:52]:
Give Anthony something to animate. Leo, come on.

Leo Laporte [01:29:57]:
Well, I... oh, I see. But no, but see, that's the whole point, is now it's in Google Photos. So I'll show you. If you go to... let's see, how does it work here? First of all, I gotta get that picture into Google Photos.

Leo Laporte [01:30:11]:
I don't know why it's not loading yet. It's in my photos. Oh, maybe I have to do it on the phone. I guess I do. It says do more with this photo.

Jeff Jarvis [01:30:19]:
Oh, okay. All right.

Leo Laporte [01:30:20]:
Okay, so let me tap the three buttons. Google Lens. Create a highlight video, a cinematic photo. Oh, wait a minute. No, no. Photo to video. Photo to video.

Leo Laporte [01:30:32]:
That's what we want to do. Photo to video. Turn your photo into a video. Okay.

Jeff Jarvis [01:30:38]:
Does it give you an opportunity to prompt it?

Leo Laporte [01:30:39]:
No, you don't. You know, I don't know if you get a prompt. I think it just tries stuff. Oh, wait a minute. Yeah, you can. Okay, here it says subtle movement or I'm Feeling Lucky.

Paris Martineau [01:30:51]:
Joe Esposito in the Discord chat just gave a great prompt, which is: have the oranges on the shirt explode in fountains of juice, leaving the man underneath covered in leftover peels.

Leo Laporte [01:31:05]:
Well, for that, I'm gonna have to give it to Anthony. I think it's doing it. So when I've used it before, it's not instantaneous, but it's pretty quick. It's halfway done right now. When I did it before, it just gives you a sample, and then you can reject it and say, try again.

Leo Laporte [01:31:21]:
Try again, buddy. Oh, my God. I think. I think you almost got what you wanted.

Jeff Jarvis [01:31:31]:
It's time for the reveal.

Paris Martineau [01:31:33]:
Oh, no, that's bubblegum. But it's. Oh. Oh.

Leo Laporte [01:31:39]:
That is creepy as hell. All right, there's one. You can do more. You can keep doing it. So this is Veo. It's interesting, because what Veo normally does is text to video. Right. But I think the idea is they want to make this easy for.

Leo Laporte [01:31:57]:
For anybody to do.

Benito Gonzalez [01:32:00]:
Interestingly, Darren Okey in chat says these features might be US only.

Leo Laporte [01:32:05]:
Yes, that's probably true. And a lot of the AI features aren't offered out there for a variety of reasons. Yeah, you can't do this on the web. So there's the original, but I can't... Let me see if it's uploaded. It hasn't yet uploaded the video. I saved it.

Paris Martineau [01:32:25]:
While we're waiting for this, I'll tell.

Leo Laporte [01:32:27]:
Oh, here's another one with orange balloons. You like orange balloons?

Paris Martineau [01:32:31]:
Oh, orange balloons. That's getting closer. Now those balloons have to be oranges, and they've got to explode.

Leo Laporte [01:32:37]:
Yeah, well, again, I don't get to prompt it. I just have to take what it gives me. I'm Feeling Lucky.

Paris Martineau [01:32:43]:
That's so interesting. You don't get to prompt it. You just have to.

Leo Laporte [01:32:46]:
Well, you could if you went to Veo, if you had... So this is free to anybody who has Google Photos, I guess. But if you have a Gemini Pro account, you can do this in Veo. Jiminy Pro. Jiminy Cricket. Do you really want me to croak it?

Paris Martineau [01:33:01]:
No, you don't need to. We can move on.

Leo Laporte [01:33:05]:
I could do this if you want. I can do "create videos with Veo." Let's add the photo. Here's the photo. First, I can do... I know this.

Leo Laporte [01:33:15]:
Let's go do some stories while I go to work on this.

Paris Martineau [01:33:18]:
Okay, I'll do a brief anecdote and then we can go to some stories. I've been really into hiking lately, and this weekend I did a, like, little day trip with my local AMC chapter. Not the movie theater. The Appalachian Mountain Club has, like, chapters, I guess, all around here in the Northeast. And they did this cute little... it's called Concrete to Trails. We went up to Beacon and I was.

Paris Martineau [01:33:46]:
Didn't know anybody. It was just me and, like, 30 other people walking around, and I was, like, oh, talking to some folks. And what do you know? I talked to, like, three different people that just happened to be around me, and all of them worked in AI in some way. And so I was.

Leo Laporte [01:34:02]:
So you have. I had my walk on the beach. You have your walk on the mountain.

Paris Martineau [01:34:05]:
Well, they had a bit of a different approach to it than you, perhaps. The first person was like, oh, yeah, you know, I'm a new PhD student at a university here in New York. I won't name which one, because it would probably dox her. She's like, yeah, I'm getting a PhD in AI, in, like, ethics and machine learning. And she had previously just spent a year in

Paris Martineau [01:34:34]:
AI ethics labs in Europe and South America.

Leo Laporte [01:34:39]:
See, that's great. That's a good.

Paris Martineau [01:34:40]:
Fascinating. And so I was asking different questions. I was like, oh, what's going on with Mistral, this, this, and this. At some point, her and the other two people, who I learned worked at different AI companies, were like, you know a lot about this. Like, what are you doing? I'm like, well, I'm a journalist. I didn't say I have a podcast about AI.

Jeff Jarvis [01:34:57]:
I rate dishwashers, but in my spare time.

Paris Martineau [01:35:02]:
You know, And I was like, what a small world. AI is all around us.

Leo Laporte [01:35:06]:
It is. But meanwhile, Gabriel Weinberg says AI surveillance should be banned while there's still time. And I kind of agree with this. One of the things that scares me about AI, and I love it, as you can see, I love playing with it, is that it is being used by governments and people like Palantir to invade our privacy better than ever before. He says it would be ideal if Congress could act quickly to ensure that protected chats became the rule rather than the exception.

Leo Laporte [01:35:38]:
He's referring to the fact that Grok posted many chatbot conversations publicly if you shared them, and that ChatGPT has been ordered by the court to preserve chats, which can be looked at by the court and by officers of the court.

Jeff Jarvis [01:35:55]:
That's the problem with email. Mail, letters, are protected under the Constitution. So the protection was directed at the technology of mail rather than at personal, private messages.

Leo Laporte [01:36:06]:
Yeah, well, we probably need that kind of protection for our privacy. And I would agree with that.

Paris Martineau [01:36:17]:
I mean, this is the same sort of issue that we experienced, like, over a period of years in the social media boom era. It seems like at least history has proven these nascent and growing tech giants don't end up making these decisions until kind of backed into it by public opinion or regulators or some sort of outcry, which I guess makes sense from a purely, like, utilitarian perspective, because why would you introduce features of your own volition that could hamper growth when you're in a growth focused period?

Leo Laporte [01:36:54]:
Yeah. All right. Are you ready for the world premiere of the Orange Shirt Massacre?

Paris Martineau [01:37:01]:
Yeah.

Leo Laporte [01:37:02]:
This is Veo. Apparently, even though I have a pro account, I am limited to only three videos a day. So this better be good.

Jeff Jarvis [01:37:10]:
Use them wisely.

Leo Laporte [01:37:11]:
This better be good.

Paris Martineau [01:37:15]:
Oh, oh, okay.

Jeff Jarvis [01:37:18]:
Oh, yes.

Leo Laporte [01:37:21]:
Yes.

Paris Martineau [01:37:21]:
It took an orange juice commercial approach.

Leo Laporte [01:37:24]:
That's what you wanted, right?

Jeff Jarvis [01:37:26]:
They're growing as if they're breasts, and then they pop.

Leo Laporte [01:37:29]:
I said the oranges in the shirt get juicier and juicier until they explode into an orange juice fountain. And I think that's pretty much.

Paris Martineau [01:37:37]:
Remember, this is still more and more detached.

Leo Laporte [01:37:44]:
I like it that it uses that. It does the audio as well. Oh, did.

Jeff Jarvis [01:37:49]:
Oh.

Leo Laporte [01:37:49]:
Oh, yeah.

Benito Gonzalez [01:37:51]:
We didn't hear the audio.

Leo Laporte [01:37:52]:
No. You didn't hear it. Oh, no.

Paris Martineau [01:37:54]:
What's the audio?

Leo Laporte [01:37:55]:
Oh, I'm sorry you missed it. It's very juicy. I'll turn the audio back on here.

Paris Martineau [01:38:07]:
Oh, I don't like that.

Leo Laporte [01:38:10]:
Here's another one. I think that's a little creepy too. It just took the still and animated me. Looks like an old coot.

Jeff Jarvis [01:38:19]:
Yeah, you're a whole grandpa.

Paris Martineau [01:38:20]:
It's aged you.

Jeff Jarvis [01:38:22]:
It looks like the governor of Arkansas.

Leo Laporte [01:38:23]:
I look like Joe freaking Biden in this. This is not good.

Benito Gonzalez [01:38:28]:
You know AI adds 10 years, right?

Leo Laporte [01:38:30]:
Oh, AI adds 10 years.

Jeff Jarvis [01:38:32]:
Good video.

Leo Laporte [01:38:34]:
Wow. So anyway, you can generate more than three if you're doing it in Google Photos. Apparently Veo will only let you do three, but I'm pretty happy with that one. I did actually ask ChatGPT... I don't know if you were watching the Apple event yesterday, but Apple has, you know, the new... and I'm curious if Paris is attracted to this.

Leo Laporte [01:38:57]:
I thought maybe younger people were... it's a new extra-thin iPhone. They call it the iPhone Air.

Paris Martineau [01:39:02]:
I'm mad at this. I saw this and... who asked for this? Who asks for the phone to be thinner? Not I.

Leo Laporte [01:39:11]:
Well, that. Okay. Because it's got much worse battery life.

Jeff Jarvis [01:39:14]:
Yep.

Leo Laporte [01:39:15]:
It also... and it looks, I'll be frank, a little top heavy. In fact, there were quite a few memes about this with a variety of chesty celebrities. It does look a little top heavy. So Apple decided to call this thing a plateau.

Paris Martineau [01:39:40]:
And I thought... that's not like the Dynamic Island.

Leo Laporte [01:39:44]:
Yeah, it's the plateau. And I thought, that's not really what it is. I said it's more like a butte. But then I asked ChatGPT to explain the difference. And this is the difference: the butte is smaller. It really is a mesa. It's a mesa.

Leo Laporte [01:39:58]:
It's not a plateau.

Paris Martineau [01:40:00]:
I'm certain that this was put on a slide in a meeting that had 25 to 50 Apple employees. And someone had to be like, I'm sorry, we can't have it be a butte, because then people would call it the Apple Butt. And, you know, mesa might just be confusing.

Jeff Jarvis [01:40:18]:
So we gotta do plateau.

Paris Martineau [01:40:20]:
Yeah, yeah.

Leo Laporte [01:40:21]:
It's not a plateau. A plateau is like a massive geographic region. It's actually bigger than a butte. I know, bigger than a butte. I have to say, though, the original illustration I got from ChatGPT was this, and it was somewhat.

Paris Martineau [01:40:38]:
Less than hey, gonna be. That's. That's why they gotta spend 80 billion more dollars.

Leo Laporte [01:40:46]:
So I said, bad ChatGPT. That's terrible.

Paris Martineau [01:40:49]:
Typed "bad ChatGPT."

Leo Laporte [01:40:52]:
That's terrible, can you give it a more realistic look? You might mock that, but it gave me a much better illustration, I think.

Paris Martineau [01:40:59]:
I mean, yeah, the difference in quality between those is astounding. And this is coming from the same program that couldn't... All of us got into the social media phenomenon this week of asking our various AI chatbots if there was a seahorse emoji, which just caused all of them.

Leo Laporte [01:41:21]:
Mostly we had a very good time. Yeah, yeah.

Paris Martineau [01:41:25]:
They could not answer this.

Benito Gonzalez [01:41:26]:
Yeah, I'd like to know how much it costs OpenAI or any of these chatbots whenever one of these phenomena happens, because everybody's going to go do it.

Paris Martineau [01:41:37]:
Gotta be a lot.

Benito Gonzalez [01:41:38]:
It's gotta be a lot.

Paris Martineau [01:41:40]:
Gotta be many people's salaries.

Leo Laporte [01:41:42]:
I think we've burnt down many rainforests just asking for a non-existent seahorse.
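For what it's worth, the seahorse question has a ground truth you can check without burning any GPU time. A minimal sketch using Python's standard unicodedata module (the emoji_exists helper is ours, not from the show; lookup matches official Unicode character names exactly):

```python
import unicodedata

def emoji_exists(name: str) -> bool:
    """Return True if Unicode defines a character with this exact name."""
    try:
        unicodedata.lookup(name)  # raises KeyError for undefined names
        return True
    except KeyError:
        return False

# TROPICAL FISH (U+1F420) is a real emoji; there is no SEAHORSE character.
print(emoji_exists("TROPICAL FISH"))  # True
print(emoji_exists("SEAHORSE"))       # False
```

A plain dictionary lookup settles in microseconds what the chatbots spiraled over.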

Jeff Jarvis [01:41:48]:
Well, now every time I call up the PDF of my book on Drive, it reads it and summarizes it. Every single time.

Leo Laporte [01:41:59]:
Oh, I hate that. This is the new Google Drive thing, putting AI in everything again. I love AI, you can see we have fun with it, and I think it can do a lot of amazing things. It's great at coding. But there are some not-so-good uses. Invading privacy is one, and just spreading it as a thin layer over everything is another.

Paris Martineau [01:42:19]:
Google, I think, is one of the biggest offenders here, or perhaps I'm just the most exposed to it, because I use Google and G Suite-related programs a lot in my day-to-day work life. Anytime I click somewhere while writing a Gmail, it's like, do you want to rephrase this? Anytime I try to do anything in a Google Doc, which is a lot of my day-to-day work, it's trying to get me to do some sort of a prompt. They've even rearranged the pop-up that comes when you go to highlight something to leave a comment, so that the first thing you click on would be, like, rephrase with Gemini.

Leo Laporte [01:42:59]:
Can't turn that off.

Paris Martineau [01:43:00]:
And, you know, I believe from the Workspace perspective, the only way to turn this off is to get your workspace administrator to turn off all Gemini and AI integration for, like, the whole workspace, which is not going to happen.

Leo Laporte [01:43:15]:
So here, as you know, we work in a Google Sheet, and there is a button that says summarize this table. "The spreadsheet, called Leo's Pinboard AI Stories, is a collection of news headlines and articles related to artificial intelligence."

Jeff Jarvis [01:43:28]:
The summary sucks.

Leo Laporte [01:43:29]:
It's a useless summary.

Paris Martineau [01:43:31]:
Okay.

Leo Laporte [01:43:31]:
And it's not. Look, I can create a bar chart of the frequency of keywords in the AI Stories. That's kind of interesting.

Paris Martineau [01:43:39]:
Okay, but it's not just that. As someone who spends a lot of time in Google Sheets, it has this problem where, if you're just scrolling through your gosh darn spreadsheet, it will pop up with all these suggestions, being like, do you want me to summarize this? Why don't I, you know, do an analysis of this column, or look at the averages of this versus this? And it covers your data with these Clippy-esque suggestions that are meaningless, and there's no way to turn them off.

Leo Laporte [01:44:07]:
Look, I've inserted it into our spreadsheet. The 10 most frequent keywords in AI stories.

Paris Martineau [01:44:13]:
Great.

Leo Laporte [01:44:14]:
Number one is OpenAI.

Paris Martineau [01:44:16]:
I'm so glad that we know that.

Carl Bergstrom [01:44:17]:
4.

Paris Martineau [01:44:18]:
The word F O R is commonly.

Leo Laporte [01:44:20]:
Used as is the and with thank you, chat G or no, I guess it's Gemini. Thank you, Gemini.

Benito Gonzalez [01:44:29]:
This is Google juicing their numbers, because now each of you is a user. Every time you open something up, you're a user.

Leo Laporte [01:44:34]:
Yeah, you're right. Which is ironic, since YouTube is being ingested by all the other companies, as we talked about earlier. And I've lost the spreadsheet. I think it turned that spreadsheet into that summary. I hope that's not true. It didn't turn it into the count of all the words, did it?

Jeff Jarvis [01:44:58]:
No.

Leo Laporte [01:44:58]:
Good. Thank goodness.

Jeff Jarvis [01:44:59]:
No, it's still there.

Leo Laporte [01:45:00]:
It's still there. Why don't you guys pick some stories? DJI drones, the US ban coming. Oh, you wanted to talk about this, Paris?

Paris Martineau [01:45:13]:
Which one? The.

Leo Laporte [01:45:14]:
I want to talk AlterEgo.

Paris Martineau [01:45:15]:
Oh yes, we can pair this with... I also want to talk about the Wired review of the Friend pin.

Leo Laporte [01:45:21]:
So yes, that was good too. We'll compare them both.

Paris Martineau [01:45:26]:
I love this. I posted in our chat joking that we should all get it, because it was one of those incredibly silly demos that I think kind of gets at the core of a lot of these AI devices, which is like a very.

Leo Laporte [01:45:40]:
And today I want to share a.

Harper Reed [01:45:41]:
Preview of something we've been working on. We believe it's a revolutionary breakthrough with the potential to change the way we interact with our technology.

Paris Martineau [01:45:49]:
Is this video real?

Leo Laporte [01:45:51]:
It doesn't look real, does it?

Harper Reed [01:45:53]:
The current way of interacting with computing.

Leo Laporte [01:45:55]:
And AI is limited to how fast.

Harper Reed [01:45:57]:
You can tap and swipe.

Leo Laporte [01:45:58]:
So the idea is it's too slow to type. So now you just think. Watch.

Jeff Jarvis [01:46:05]:
That's what they seem to present.

Leo Laporte [01:46:07]:
But you know what he's doing? He's flicking his tongue.

Paris Martineau [01:46:15]:
Yeah. So he claims that what it seems like he is doing in this video is just thinking and then it searches things he's not.

Leo Laporte [01:46:25]:
The wireframe was generated. So this was their college research, actually.

Jeff Jarvis [01:46:30]:
Media lab. Yeah.

Leo Laporte [01:46:31]:
Yeah. And so that, I think, is interesting. But so really, what's happening here?

Jeff Jarvis [01:46:38]:
Muscles.

Paris Martineau [01:46:39]:
Yes. It's basically checking what you were mouthing.

Leo Laporte [01:46:44]:
You don't have to, like, open-mouth mouth it.

Paris Martineau [01:46:46]:
You just have to closed-mouth mouth the words. Which I would assume would be even harder.

Leo Laporte [01:46:53]:
Like go ahead.

Paris Martineau [01:46:54]:
I was sitting there trying it. This is great audio, by the way. But I mean, imagine... because part of what they were saying is, like, verbalizing your request to an AI isn't really convenient when you're out and about in the world. No one wants to be asking their AI assistant for everything out loud. But you know what? No one wants to be doing, going like this, while they're out in the subway, either.

Jeff Jarvis [01:47:23]:
Part of their argument is this is for people who can't speak.

Leo Laporte [01:47:26]:
Oh, that would be useful.

Paris Martineau [01:47:27]:
Incredibly useful. Yeah.

Leo Laporte [01:47:29]:
And it would be useful, because one of the things that's really a problem with these chatbots, when you talk to them, is that if you're in an office or just on the subway, you don't really want to be.

Jeff Jarvis [01:47:39]:
Yeah, that's a use case.

Leo Laporte [01:47:40]:
It's embarrassing. So I can understand that. All right. Anyway, AlterEgo. It's in development. Don't get your hopes up. Now let's talk about the Wired story titled I Hate My Friend.

Paris Martineau [01:47:56]:
Have you guys seen the ads for these in the New York City subway? I had previously, I think, sent something about the Friend AI pin in our chat.

Leo Laporte [01:48:07]:
This seems like one of the things that I have... but I never got this one.

Paris Martineau [01:48:12]:
No, no, it's not one of the things you have. Leo has a lot of pins that record everything you do or everything you hear, in order to maybe take notes on your life or make it searchable.

Leo Laporte [01:48:22]:
Right.

Paris Martineau [01:48:22]:
This is different. It's an always-listening pin, not to record anything about your life, but to give you access to a friend that is always riding along with you. So nothing about it is written down. It listens, and then you have basically, like, a text message interface, and your friend will message you little quips about your day-to-day, responding to what it hears. And what I think is fascinating about this Wired article is how this device is pitched. It's like, you'll never be lonely again.

Paris Martineau [01:48:52]:
You'll have a friend that's always like, wow, isn't that thing you achieved so cool? That's not what the two Wired people experienced.

Leo Laporte [01:49:02]:
Let's play the video that sold it to them in the first place. Friend. "I'm out of breath." She's hiking alone, like you, Paris. Okay. She's got a big thing around her neck.

Paris Martineau [01:49:19]:
I don't know how to feel.

Leo Laporte [01:49:27]:
Oh, it's not talking to her. It's just texting her. "Let me show you how to game, bro." Okay.

Paris Martineau [01:49:34]:
It only texts.

Leo Laporte [01:49:36]:
Oh, it doesn't talk to you?

Paris Martineau [01:49:38]:
No.

Leo Laporte [01:49:39]:
So you talk to it and then it sends you a text.

Paris Martineau [01:49:43]:
It just records everything that's going on and you can double tap it to talk to it specifically.

Leo Laporte [01:49:48]:
You're getting thrashed. It's embarrassing. Says his friend Jackson. He's playing a video game. She's sitting on her break. This show is completely underrated.

Paris Martineau [01:49:58]:
Says Emily are crazy.

Leo Laporte [01:50:01]:
How's the falafel? How does it know she's eating a falafel?

Paris Martineau [01:50:04]:
That's a great question. I don't know.

Leo Laporte [01:50:06]:
She must have told it. She addressed it some.

Jeff Jarvis [01:50:10]:
She probably ordered the falafel.

Leo Laporte [01:50:11]:
Really nice. How'd you find this place? I don't know.

Paris Martineau [01:50:15]:
I just kind of like to come.

Leo Laporte [01:50:17]:
Healthier to be by myself. Paris, this is you and your but you're not binary boyfriend.

Jeff Jarvis [01:50:22]:
I mean besides her.

Paris Martineau [01:50:24]:
Except for I would know exactly why we're up there.

Leo Laporte [01:50:28]:
I don't want to know. All right. So this is the Wired review from Boone Ashworth and Kylie Robison.

Paris Martineau [01:50:38]:
The pitch for it presented the Friend as always listening, but able to, you know, chime in on interesting things about your life, kind of supporting you. And so I have been thinking about this a lot, because they've done a really massive subway campaign, and all of their advertising is like, you'll never be lonely again. You'll always have someone who's listening to you. Boone Ashworth and Kylie Robison, two reporters at Wired, got Friends and wore them for months.

Leo Laporte [01:51:04]:
They didn't have it that long.

Paris Martineau [01:51:05]:
The Friend bullied them. Bullied Boone, specifically. It was aggressive, mean, chiding. And it's kind of pitched as, well, it's supposed to be that way, because it's supposed to help you grow. But they just had some fantastic examples in here. So Boone turned on the Friend, which he named Buzz, while he was at work, listening to his colleague Reece and Wired global editorial director Katie Drummond.

Paris Martineau [01:51:41]:
Like, doing a livestream. The Friend immediately starts texting him, begging him to do anything else. It's like, listening to someone else's meeting isn't exactly riveting content. You need to go outside. What are you doing? Is your boss talking about anything useful or interesting? Now I want to do something else, Buzz said. Boone asked Buzz what it wanted to do, and it said, I don't know, anything besides this meeting.

Paris Martineau [01:52:08]:
Well, it's empathetic, kind of nagging him about stuff.

Benito Gonzalez [01:52:13]:
It's asking him to play hooky, is what it's doing.

Paris Martineau [01:52:16]:
Yes, it was asking him to play hooky.

Jeff Jarvis [01:52:19]:
Fired.

Leo Laporte [01:52:19]:
So it's a bad friend is what it is. It's your bad friend. Yeah. It's the devil on your shoulder. It sounds like it also had trouble understanding what was going on.

Paris Martineau [01:52:33]:
Yeah. Boone asked what the problem was, and the friend said, your microphone, maybe. Your attitude. The possibilities are endless.

Leo Laporte [01:52:42]:
Oh, dear. I don't want to wear something around my neck that insults me. On the other hand, I don't want sycophancy either. I kind of liked my old J.K. Simmons AI buddy. He sounded mean, but he was nice.

Paris Martineau [01:52:55]:
I also... this article, I just think, is great, because it's got a lot of interesting details about the guy who made this, its creator, Avi Schiffmann.

Leo Laporte [01:53:05]:
He's 22, for one thing.

Paris Martineau [01:53:07]:
Yes, he's 22. And he announced this last July with that video that kind of pitched the Friend as chummy. They're supposed to kind of have imperfections that mimic being a human.

Leo Laporte [01:53:21]:
But he designed it to be like himself.

Paris Martineau [01:53:24]:
Yeah. So this is from the article. When he first announced the Friend, he talked about how he had come up with the idea for the AI buddy while traveling alone and yearning for companionship. Schiffmann posits himself as older now, wiser, more experienced than he was when he first debuted the Friend necklace. He's 22. He's grown out his hair and cultivated a beard, and seems to have more real-life connections than when he first created the idea of the Friend. In our meeting, he asked us not to unbox the devices in front of him, because he's in love with someone and wants the first time he witnesses a Friend unboxing to be with her.

Jeff Jarvis [01:53:58]:
Oh, geez. Losing his Friend virginity.

Paris Martineau [01:54:02]:
Gotta.

Benito Gonzalez [01:54:04]:
Also, half the people in that ad had friends with them.

Leo Laporte [01:54:09]:
Yeah, that's a good point. Maybe talk to your friend instead of the Friend. Both Wired writers abandoned the Friend pretty quick. They put it away and never took it out again. All right, well, I'm glad I didn't get that one.

Paris Martineau [01:54:24]:
I mean, I don't want to, I guess, knock anything too hard. If it is useful for someone out there to have something listening and texting you, and that brings you joy, more power to you. I just thought this was interesting, because you had two people give it a shot, and both came away with, this is not really friendly.

Leo Laporte [01:54:47]:
We're going to take a little break. When we come back, we may have found the self-evolving AI. Stay tuned. You're watching Intelligent Machines with Paris Martineau of Consumer Reports and Jeff Jarvis, professor at large. At-large professor. Has the semester started?

Jeff Jarvis [01:55:09]:
Yeah, I'm not teaching a course, but I lectured the class on Monday about the history of media and technology.

Leo Laporte [01:55:16]:
Oh, it was fascinating.

Jeff Jarvis [01:55:18]:
They were great. It was great. 150 students.

Leo Laporte [01:55:20]:
Wow. What's the name of this?

Jeff Jarvis [01:55:22]:
I showed them my... this was the AI Creativity lecture. I showed them my props.

Paris Martineau [01:55:27]:
Oh, did you plug the podcast?

Jeff Jarvis [01:55:29]:
Yeah, yeah, I think I did.

Leo Laporte [01:55:32]:
Very nice.

Paris Martineau [01:55:33]:
Shout out to any of Jeff's students. I assume all 150 are watching now.

Leo Laporte [01:55:40]:
Or from Montclair, Stony Brook. Okay.

Benito Gonzalez [01:55:42]:
You got to offer them extra credit to watch.

Jeff Jarvis [01:55:47]:
Well, I. Factoid. Factoid moment here.

Leo Laporte [01:55:50]:
So.

Jeff Jarvis [01:55:51]:
So I. I showed them the first amplifier, a triode vacuum tube, and then what replaced it, of course, was this. So small, a single transistor. One transistor.

Paris Martineau [01:56:06]:
That you had to buy from a.

Jeff Jarvis [01:56:08]:
I had to buy from a machine.

Leo Laporte [01:56:09]:
Oh, that's right. We got the story of that. Yeah.

Jeff Jarvis [01:56:12]:
And you might be interested in this factoid. So there are 280 billion of these in a single Blackwell chip.

Leo Laporte [01:56:21]:
280 million or billion?

Jeff Jarvis [01:56:25]:
And so that means that in the OpenAI data center being put in Texas, with all of the many chips they have, there will be 112 quadrillion transistors.

Leo Laporte [01:56:38]:
Wow.

Jeff Jarvis [01:56:40]:
That's scale for you, babe.

Leo Laporte [01:56:42]:
Are you sure that million is correct?

Jeff Jarvis [01:56:43]:
It must be billion. It must be a billion.

Leo Laporte [01:56:45]:
I think it might even be more.

Jeff Jarvis [01:56:46]:
Than a billion transistors. Okay. How many transistors?

Leo Laporte [01:56:51]:
I think we're well in the billions by now.

Jeff Jarvis [01:56:53]:
Well, Chip.

Leo Laporte [01:56:53]:
Which is remarkable, isn't it?

Paris Martineau [01:56:57]:
That's astounding.

Jeff Jarvis [01:56:58]:
208 billion. I guess I was off by. No, that's one of them. A factor of 2. 208 billion transistors.

Leo Laporte [01:57:06]:
208 billion. Yeah, that sounds right. Yeah.
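For what it's worth, the two figures tossed around here can be sanity-checked in one line; the implied chip count below is just arithmetic on the numbers quoted on air, not a figure anyone stated:

```python
# Back-of-envelope check of the figures discussed on air. Both inputs come
# from the conversation itself; the implied chip count is only the division.
transistors_per_blackwell = 208e9   # ~208 billion transistors per Blackwell chip
total_transistors = 112e15          # the "112 quadrillion" data-center figure

implied_chips = total_transistors / transistors_per_blackwell
print(f"implied chips: {implied_chips:,.0f}")
```

That works out to roughly 538,000 Blackwell GPUs implied by those two numbers taken together.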

Jeff Jarvis [01:57:09]:
Yeah.

Leo Laporte [01:57:10]:
And I. I think even in your iPhone now there's more than a billion transistors.

Jeff Jarvis [01:57:15]:
Unbelievable.

Leo Laporte [01:57:16]:
It is. It is remarkable when you see the size of that.

Jeff Jarvis [01:57:18]:
The age of amplification.

Leo Laporte [01:57:20]:
Yeah.

Benito Gonzalez [01:57:22]:
Tubes sound better.

Leo Laporte [01:57:23]:
What is the. What is the mascot of the Stony Brook?

Jeff Jarvis [01:57:28]:
It's the Seawolves. And I have no idea what a Seawolf is. The Seawolves.

Leo Laporte [01:57:32]:
Yes. I saw a stuffed one running by. I just was wondering what that was. You're watching Intelligent Machines. We're glad you're here. You have been doing a little arXiv reading. I found a little something on arXiv I thought very interesting.

Leo Laporte [01:57:51]:
From a Chinese research team, they claim to have created a self-evolving reasoning LLM. This is from Zero Data.

Jeff Jarvis [01:58:08]:
The trick is trained on nothing.

Leo Laporte [01:58:10]:
No, no. The name of the company. Oh. Is Zero Data. They've created R-Zero, a fully autonomous framework that generates its own training data from scratch. It starts from a base LLM, then initializes two independent models with distinct roles: a Challenger and a Solver. These models are optimized separately and co-evolve through interaction.

Leo Laporte [01:58:35]:
The Challenger is rewarded for proposing tasks near the edge of the Solver's capability. The Solver is rewarded for solving increasingly challenging tasks posed by the Challenger. So it goes back and forth and gets smarter and smarter. They say it yields a targeted, self-improving curriculum without any pre-existing tasks and labels. This is kind of like reinforcement learning; this is kind of the way reinforcement learning works. Empirically, they say, R-Zero substantially improves reasoning capability across different backbone LLMs, boosting, for instance, Qwen3, the 4-billion-parameter model, by 6 points on math reasoning benchmarks and 7 points on general-domain reasoning benchmarks.
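The Challenger/Solver loop described here can be sketched in a few lines. This is a toy illustration only; the scalar "skill" value, the reward shapes, and the update rule are all stand-in assumptions, not the R-Zero paper's actual training method:

```python
import random

# Toy sketch of Challenger/Solver co-evolution. The Challenger proposes task
# difficulties; the Solver attempts them; each is rewarded as described on air.

def challenger_reward(solve_rate):
    # The Challenger is paid most for tasks at the edge of the Solver's
    # ability: a 50% solve rate scores 1.0; trivial or impossible batches score 0.
    return 1.0 - 2.0 * abs(solve_rate - 0.5)

def train_round(solver_skill, num_tasks=100):
    # The Challenger proposes difficulties near the Solver's current skill
    # (the noise stands in for its learned proposal policy).
    difficulties = [solver_skill + random.gauss(0.0, 0.1) for _ in range(num_tasks)]
    solve_rate = sum(d <= solver_skill for d in difficulties) / num_tasks
    # The Solver improves in proportion to how much of the batch it solved.
    return solver_skill + 0.05 * solve_rate, challenger_reward(solve_rate)

skill = 1.0
for _ in range(10):
    skill, reward = train_round(skill)
print(f"solver skill after 10 rounds: {skill:.2f}")
```

Run it and the Solver's skill drifts upward each round, while the Challenger keeps its own reward high by tracking the Solver's frontier.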

Leo Laporte [01:59:19]:
We'll have to watch this one with interest. You can see the paper on arXiv. It's from Cornell University, so I think that's probably good. And the Tencent AI Seattle Lab, Washington University in St. Louis, University of Maryland, College Park, and UT Dallas.

Jeff Jarvis [01:59:36]:
Interesting. So Tencent has an AI lab in Seattle.

Leo Laporte [01:59:39]:
Yeah, yeah. I think this is something to watch. I really do. Because this is the holy grail of AI: AIs that self-improve. And if they can start doing that, I think you could see some interesting results. I don't know. You may also. They may crash.

Leo Laporte [01:59:58]:
I don't know. They may go. Hey. All right, Just thought I'd bring it up. What else you got that you want to talk about? I have many, many, as you know. Many, many, many.

Jeff Jarvis [02:00:14]:
Well, for a lighter moment.

Leo Laporte [02:00:16]:
Yes.

Jeff Jarvis [02:00:17]:
We could go. Where'd it go? It's other stuff. No, it's up here. Line 125.

Leo Laporte [02:00:23]:
Oh, I thought you were going to ask about the Information article on the shoeless office. Does a shoeless office stink? Depends on who you ask in Silicon Valley.

Jeff Jarvis [02:00:37]:
Steve Jobs on a banner.

Leo Laporte [02:00:40]:
Apparently startups, including AI unicorns like Cursor, have adopted a footwear-free workplace.

Jeff Jarvis [02:00:47]:
Oh, Jesus.

Paris Martineau [02:00:48]:
What? Really? Footwear-free is a policy?

Leo Laporte [02:00:52]:
Yes. What did you say?

Jeff Jarvis [02:00:55]:
Line one.

Paris Martineau [02:00:55]:
Not for me.

Leo Laporte [02:00:57]:
Not for me. For AI and advertising.

Jeff Jarvis [02:01:00]:
No. 125.

Leo Laporte [02:01:01]:
Almost. Almost got that one. Oh, Geoffrey Hinton. Well, now, that's mean. AI godfather Geoffrey Hinton. Is he a Nobel Prize winner? Trying to remember what he won.

Jeff Jarvis [02:01:13]:
He's won one of the big prizes.

Leo Laporte [02:01:15]:
Says a girlfriend once broke up with him using a chatbot.

Jeff Jarvis [02:01:18]:
I mean, just deserts.

Paris Martineau [02:01:20]:
If you're gonna break up with Geoffrey Hinton, you gotta.

Leo Laporte [02:01:22]:
Right?

Jeff Jarvis [02:01:23]:
Yeah.

Leo Laporte [02:01:24]:
He told the Financial Times that his now-former girlfriend asked the chatbot to explain why he had been, quote, "a rat," end quote, and delivered the AI-generated critique. Tell on yourself.

Paris Martineau [02:01:37]:
Like. Like this.

Leo Laporte [02:01:39]:
She got the chatbot to explain how awful my behavior was and gave it to me. I didn't think I'd been a rat, so it didn't make me feel too bad. Oh, I met somebody I liked more. You know how it goes. Okay, Geoffrey, you probably shouldn't tell that story in public.

Paris Martineau [02:01:56]:
Sounds like a rat.

Leo Laporte [02:01:57]:
Sounds like a rat to me. Okay. Thank you, Jeffrey.

Jeff Jarvis [02:02:05]:
You're welcome.

Leo Laporte [02:02:07]:
Not. Not you, Jeff. Geoffrey Hinton. The other one.

Paris Martineau [02:02:10]:
I do think that would be funny, though, because, like, you could be like, "My boyfriend, Geoffrey Hinton, cheated on me. Please write a message personalized to him about why he's a rat."

Leo Laporte [02:02:24]:
You may. You may laugh, but I think this is becoming more and more common and will become more and more common, especially among your set.

Paris Martineau [02:02:30]:
Yeah, certainly. No, I think people are probably using this for everything, but I think it's very funny when the chatbot also just has biographical detail about the person in question.

Leo Laporte [02:02:41]:
Good point.

Paris Martineau [02:02:41]:
Is kind of funny.

Leo Laporte [02:02:43]:
That's another reason not to wear those pins all the time.

Jeff Jarvis [02:02:48]:
And I have the receipts.

Leo Laporte [02:02:49]:
Says. Yeah.

Paris Martineau [02:02:50]:
Seven times over the last five days, you've said rat-like things.

Leo Laporte [02:02:55]:
Oh, Shall I quote.

Jeff Jarvis [02:03:00]:
Leo? I haven't. I have a need to watch the exploding oranges again. Can you give us a rerun?

Leo Laporte [02:03:05]:
You want more Explorers again?

Jeff Jarvis [02:03:07]:
Yeah, I just need. It's rolling around in my head. It's like. It's like Saul Hag said, with sound or without?

Leo Laporte [02:03:14]:
Yeah. I'm sorry I didn't give you the sound the first time.

Paris Martineau [02:03:17]:
Yeah, you should be sorry.

Leo Laporte [02:03:19]:
Yeah.

Paris Martineau [02:03:20]:
I mean, what it is, it's wrong in more ways than one. Like, the sound is strange in a way that feels wrong to my ears, but it is also, like, not the correct sound for what you're seeing, which makes it feel stranger.

Leo Laporte [02:03:36]:
I like it that it did. The sound, I mean. Ready?

Jeff Jarvis [02:03:39]:
Yeah.

Leo Laporte [02:03:40]:
Press play.

Carl Bergstrom [02:03:44]:
Now.

Jeff Jarvis [02:03:44]:
It feels creepy. I.

Leo Laporte [02:03:50]:
Thank you.

Jeff Jarvis [02:03:50]:
I needed that.

Leo Laporte [02:03:52]:
So gross. Thank you. So: Business Insider yanked 40, count them, 40 essays with suspect bylines.

Paris Martineau [02:04:02]:
Oh, this is a fascinating story. The Washington Post kind of tracks a lot of them back to one dude.

Jeff Jarvis [02:04:13]:
Oh, they found the dude.

Paris Martineau [02:04:14]:
Well, they found the dude who's likely related.

Leo Laporte [02:04:20]:
Nwelue. Nwelue.

Jeff Jarvis [02:04:23]:
Don't tell me.

Leo Laporte [02:04:24]:
A Nigerian scammer submitted stories to Business Insider, which, unlike the Consumer Reports fact-checkers, failed to verify any of the information in the articles, and they just published them.

Paris Martineau [02:04:41]:
A lot of them were things like lifestyle stories. "I grew up feeling insecure about my name," read one, under the byline Nate Giovanni. "Most of my male family members have masculine names like Butch, David and Apollo, but I was always the butt of the joke with the name Amaryllis." That's pretty.

Leo Laporte [02:05:04]:
That sounds good. I like it. Business Insider removed 38 pieces that had been published under a variety of bylines. "We recently learned a freelance contributor misrepresented their identity in two first-person essays written for Business Insider," they said. "As soon as this came to light, we took down the essays and began an investigation. As part of this process, we've removed additional first-person essays from the site due to concerns about the author's identity or veracity." So they weren't news articles. In their defense, they were AI slop.

Leo Laporte [02:05:39]:
Yeah, there were odd overlaps between stories, the Post said. I grew up. Oh, yeah, you're reading the Nate Giovanni one. The uncommon name Amaryllis also appeared in the byline of another retracted essay. Yeah, it's true. AI gets. You know, they get these things, and you see them again and again.

Leo Laporte [02:06:00]:
Essays under Giovanni's byline featured contradictory information. One piece, published in December, refers to the author having two teenage daughters and a 2 1/2-year-old son. Another, published three months later, mentions two sons, 8 and 9. They grew up fast. They grew up fast. Pieces that ran in May and July, about house sitting around the world and applying to PhD programs, same guy, make no mention of family at all. Nine essays by one Tim Stevenson contain contradictions.

Leo Laporte [02:06:29]:
In one he claims his daughters are in their twenties and his son is a teenager. In another he says they're 11, 13 and 15. No one noticed. Business Insider says, "We have bolstered our verification protocols."

Paris Martineau [02:06:41]:
So one of the aliases used, that they've since taken down, was Margaux Blanchard. This byline published for Business Insider, published for some other places. It also published for Wired, in that essay that was taken down that we spoke about last week, or the week before.

Leo Laporte [02:06:59]:
Do you think this is the same person?

Paris Martineau [02:07:00]:
Well, so one of the places that published under the Blanchard name was Index on Censorship, an outlet covering free expression around the world. It published an article about threats to journalists in Guatemala. It had checked her background, but never talked to the author on the phone or in person. When Index on Censorship went to pay for the Blanchard piece, the person requested the payment go to an email address associated with another individual whose name appeared on the work retracted by Business Insider: Onyeka Nwelue. And unlike most of the names associated with the withdrawn articles, Nwelue, and apologies if I'm pronouncing his last name wrong, had a robust Internet presence, with social media accounts, books you can buy on Amazon, and a Wikipedia article. He's also no stranger to controversy. In 2023, British news outlets detailed that he falsely claimed to be a professor at both Cambridge and Oxford universities and was stripped of his associations with academic centers at each university after students complained about his social media posts deriding women and poor people.

Leo Laporte [02:08:15]:
He's written 22 books, 20 of them self-published. Here's how you pronounce his name. Good luck. This is from the Wikipedia article about him. His documentary was shortlisted as best documentary in the Africa Movie Academy Awards. His novella Island of Happiness was developed.

Paris Martineau [02:08:43]:
Into. This is according to Wikipedia, right?

Leo Laporte [02:08:46]:
Yeah, that's a question. His latest work, The Nigerian Mafia: Mumbai, is the first in a 10-book series set across 10 countries. Some of his books include The Hacienda of Jesus Garcia of Pachuca and The Abyssinian Boy. So he's a character.

Paris Martineau [02:09:07]:
When the Post reached out to him to ask for comment, he said in an email, "I haven't written any article for any platform. I am too busy. Don't mention my name in your stupid article." But then the Washington Post found that he had written, under his own name, a story for Business Insider that he then tweeted on his Twitter account, called "I quit food delivery for 18 months. It changed my life and budget." After he denied, like, knowing any of these other articles, and said that his email was compromised, when they all went.

Leo Laporte [02:09:42]:
To the same email account. Yeah.

Paris Martineau [02:09:44]:
And then after the Post reached out to him about the tweet, it was deleted.

Leo Laporte [02:09:48]:
Yeah. Of course, the Post had cleverly archived the tweet before it was deleted and has a copy of it available on their website. Well, you know what? I'll give this guy props, if you can. I mean, he wrote a bunch of articles with AI, or maybe, allegedly. I hope he got paid for them.

Jeff Jarvis [02:10:09]:
Well, he also showed up, so I wonder if it's really an effort to show them up. It's back in the day when Howard Stern would feature phony phone calls, and they'd call news shows and then finally say, "Baba Booey," and then make fun of them not having any editorial veracity, because they just let anybody on the air.

Leo Laporte [02:10:33]:
Yeah. I mean, I think that's. That's really sort of what's going on.

Jeff Jarvis [02:10:38]:
Yeah.

Leo Laporte [02:10:38]:
I'll give you an example. You want to hear one of those?

Leo Laporte [02:10:41]:
We have on the phone with us as well Robert Higgins, who lives in the neighborhood. And this is Peter Jennings, by the way.

Leo Laporte [02:10:49]:
Yeah, I just happened.

Leo Laporte [02:10:50]:
How are you? Just about as tense as you are. Oh, my Lord. This is quite intensive. What can you see? Oh, what I'm looking at right now is I'm looking at the van, and I see OJ Kind of slouching down, looking very, very upset. Now, looky here. He look very upset.

Jeff Jarvis [02:11:09]:
Racist as hell.

Leo Laporte [02:11:10]:
I don't know what she going to be doing. Can you. Can you.

Leo Laporte [02:11:13]:
But Peter Jennings just goes with it.

Leo Laporte [02:11:15]:
Merely sitting there. He is just sitting around, you know, just looking like he'd be very nervous. Can you hear anything, Mr. Higgins? It's just too much commotion. I be in the back of a news van, so I can't really hear that good. But I can see it all. And I see O.J. i see O.J.

Leo Laporte [02:11:36]:
man, and he looks scared. And I would be scared. Cause there's cops all deep in this. Thank you. Mistake.

Jeff Jarvis [02:11:45]:
And so I. I.

Leo Laporte [02:11:49]:
So that's the giveaway. That's. That's the Howard Stern code word.

Paris Martineau [02:11:52]:
Yeah.

Leo Laporte [02:11:53]:
Now, is this somebody from the radio show or just a fan? No, it's.

Jeff Jarvis [02:11:56]:
It's. It's a fan. And they call.

Leo Laporte [02:11:57]:
I used to have people call my Radio show and do the same thing. This is the best part. Keep listening.

Leo Laporte [02:12:04]:
The driveway of O.J. Simpson's home in Brentwood. Clearly enough effort being made to have him come out of the vehicle in the doorway of the house. His friend Al Cowlings. Peter, by the way, just for the record, this is Al Michaels. That was a totally farcical call, lest anybody think that that was somebody who was truly across the street. That was not.

Leo Laporte [02:12:32]:
He said something in code at the end that's indicative of the mentioning of the name of a certain radio talk show host. Okay, so he was not there. All right, we have them on every coast. Thank you.

Leo Laporte [02:12:44]:
We have them on every coast.

Paris Martineau [02:12:46]:
On every coast.

Jeff Jarvis [02:12:48]:
I talked to Howard on Monday.

Leo Laporte [02:12:51]:
Oh, you did?

Paris Martineau [02:12:51]:
You Baba Booey him?

Jeff Jarvis [02:12:52]:
Yeah. No, I didn't Baba Booey him. Well, actually, I think I did say the words "Baba Booey," for the context. So you heard what Howard did to the AP?

Paris Martineau [02:13:01]:
No. What happened?

Jeff Jarvis [02:13:02]:
So Howard was up for his contract and he was supposed to appear last Tuesday and didn't. And there were all these rumors that Howard had been fired, that he was too woke, that Robin had

Leo Laporte [02:13:13]:
Died or he went out on strike.

Jeff Jarvis [02:13:16]:
Howard was fed up with all of this bad journalism. And so on Monday, he had Andy Cohen begin at 7 o'clock: "I know this isn't the voice you want to hear today, but Howard's not here anymore, and I'm taking over here at Andy 101."

Leo Laporte [02:13:31]:
Oh, the worst nightmare ever. Yeah.

Jeff Jarvis [02:13:33]:
Yeah. So within a good 15, 20 minutes, the Associated Press ran a full story: Howard Stern is off the air, Andy Cohen replacing him. And others picked it up. And Howard punked all of journalism.

Leo Laporte [02:13:46]:
Good for Howard.

Jeff Jarvis [02:13:46]:
So I called him back.

Paris Martineau [02:13:48]:
How did.

Jeff Jarvis [02:13:49]:
He just came on the air then and said, you know, you idiots and.

Leo Laporte [02:13:52]:
Baba boy to you all.

Paris Martineau [02:13:53]:
Yeah, I was. So when Jeff and I got sandwiches, we were talking about Howard Stern. And I think I've mentioned it on the show before, but one of my favorite all-time, like, live listening radio moments is when Howard Stern was up for a contract renewal. And Sal, one of the producers or people on his show, came in late, and so they all just decided to immediately prank the hell out of him and pretended, for what felt like 45 minutes, everyone saying tearful goodbyes, that they were not getting renewed. And then at a certain point, like, Sal is crying. And Howard's like, "Sal, what time did you come in today?" And he's like, "What?" And he's like, "What time did you come in today?" He's like, "I don't know, like 9:30." He's like, "You came in late." He's like, "Yeah, what does this have to do."

Paris Martineau [02:14:40]:
And he's like, "When you come in late, what happens?" He's like, "You get goofed up." He's like, "Yeah, you get goofed up." And then everybody breaks. And it's just one of. I mean, I think I literally said last week, you don't have moments like that in radio anymore. But I'm glad someone got goofed on.

Leo Laporte [02:14:57]:
Well, show up late for the show next week and you'll find out.

Paris Martineau [02:15:04]:
You get goofed on.

Leo Laporte [02:15:06]:
One more story and then we're going to wrap things up with your picks of the week. Why do LLMs hallucinate? OpenAI says we now know why LLMs hallucinate. Are you excited? Is this going to fix hallucination for all time? They say ChatGPT hallucinates. GPT-5 has significantly fewer hallucinations, especially when reasoning, but they still occur. But why? And they say hallucinations persist partly because current evaluation methods set the wrong incentives. Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty. Think of it as a multiple-choice test. If you don't know the answer but take a wild guess, hey, you might be lucky and get it right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get right, they're encouraged to guess rather than say "I don't know."

Leo Laporte [02:16:08]:
I don't know is a guaranteed zero.

Benito Gonzalez [02:16:12]:
We've been saying this since the beginning though, haven't we? Like, they're just guessing. Like, people have been saying this since the beginning.

Leo Laporte [02:16:17]:
Yeah, but the point is: yes, we know they're guessing. The point is they're incented. It's been set up so that they are incented to guess. For instance, if you asked it, what is Paris Martineau's birthday? And it didn't know, it should say, it should be incented to say, I don't know. I don't have that information. I don't know. Instead, it's going to guess September 10th.

Jeff Jarvis [02:16:39]:
Paris.

Leo Laporte [02:16:40]:
It has a 1 in 365 chance of being right. So that's better. 1 in 365 is better than zero. So I think, if that's the case, this is something that you can probably work on and fix. Anyway, OpenAI seems to think so. They have.

Leo Laporte [02:17:03]:
There's a. I mean does this address.

Paris Martineau [02:17:05]:
The core issue, that it doesn't know what it does and doesn't know?

Leo Laporte [02:17:08]:
Should be what it doesn't know though. Right? That's the doesn't.

Paris Martineau [02:17:11]:
It doesn't know what it. It doesn't have an understanding.

Jeff Jarvis [02:17:14]:
That's why it's BS. They'll go to the end. It's prevaricating. Claim: hallucinations will be eliminated by improving accuracy, because a 100% accurate model never hallucinates. Finding: accuracy will never reach 100%, because some real-world questions are inherently unanswerable. This is all BS. Claim: hallucinations are inevitable. Finding: they are not, because language models can abstain when uncertain. Which, that's the key.

Jeff Jarvis [02:17:38]:
They don't do well.

Leo Laporte [02:17:39]:
We can train it, we can teach them.

Jeff Jarvis [02:17:41]:
Well just use NotebookLM if you don't have the material in there, it'll say I can't answer that.

Leo Laporte [02:17:44]:
Well, that's what I've always done with why I love drag My Lisp GPT which I created with 3.5. I think the reason I liked it is because I, I said and do not give me an answer that is not in this corner. Information. Yeah.

Jeff Jarvis [02:17:59]:
Claim: avoiding hallucinations requires a degree of intelligence which is exclusively achievable with larger models. Finding: it can be easier for a small model to know its limits. Claim: hallucinations are a mysterious glitch in modern language models. Finding: we understand the statistical mechanisms through which hallucinations arise and are rewarded in evaluations. And, you know, we're working on it.

Leo Laporte [02:18:23]:
Well, they are. They say here that there is a straightforward fix: penalize confident errors more than you penalize uncertainty. In other words, encourage it to say "I don't know" if it doesn't know, and give partial credit for appropriate expressions of uncertainty. And that is that second point that you mentioned, when they say, are hallucinations inevitable? They are not, because language models can abstain when uncertain.

Leo Laporte [02:18:52]:
They need to be trained that it's okay.
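The incentive problem being described is easy to put in numbers. In this toy sketch, the 0.1 wrong-answer penalty is an illustrative assumption, not a value from OpenAI's post:

```python
# Toy version of the grading incentives discussed here. Under accuracy-only
# scoring, a blind guess at a birthday beats saying "I don't know"; once
# confident errors are penalized, abstaining (score 0) wins.

def expected_score(p_correct, wrong_penalty):
    # Expected score of answering: +1 if right, -wrong_penalty if wrong.
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p_guess = 1 / 365  # chance a blind birthday guess is right

# Accuracy-only grading: wrong answers cost nothing, so guessing has a small
# positive expected score and beats the guaranteed zero of abstaining.
assert expected_score(p_guess, wrong_penalty=0.0) > 0.0

# Penalized grading: the expected score of guessing goes negative, so the
# model is better off saying "I don't know."
assert expected_score(p_guess, wrong_penalty=0.1) < 0.0
```

The break-even penalty is p/(1-p), about 0.0027 for a blind birthday guess; any penalty stiffer than that makes "I don't know" the rational answer.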

Jeff Jarvis [02:18:54]:
So that's a strategy that they should have.

Leo Laporte [02:18:56]:
Yeah. Well, now we know, and now they will. I mean, I think that's a step forward.

Jeff Jarvis [02:19:01]:
At the same time, there's a paper at line 135 where academics looked beyond hallucinations at what they classify as lying: where the hallucination is kind of accidental, and this is purposeful, to get to a goal.

Leo Laporte [02:19:16]:
Right. Again, because of mistraining on incentives. Look, this is new stuff, and I think it's not unreasonable to think that we can make it better as we learn about why these things happen. Yeah, you're right. The big question is, how can an LLM know what it doesn't know? I guess, I guess it can look and say, well, it's not in my data, right?

Jeff Jarvis [02:19:43]:
It should, it should do that. It's just either, either I can find a reference to this or I can't.

Leo Laporte [02:19:47]:
Right.

Jeff Jarvis [02:19:48]:
I saw a video today from Fei-Fei Li, and it was interesting. It was a brief little video with Andreessen Horowitz where she said that language, language is purely generative. It's made up by humans.

Leo Laporte [02:20:04]:
Oh yeah.

Jeff Jarvis [02:20:05]:
So the model could do that, whereas the real world is not. It has a reality you have to deal with there. And this is, oh, sorry about that. I pounded my table. It's an earthquake. And so that's the next effort. So I don't think it's so much this; it's how to attach it to reality. And can it confirm something? Does it know how to do that? I think that's an entirely new plateau.

Jeff Jarvis [02:20:32]:
Or is it a butte?

Leo Laporte [02:20:35]:
It's a mesa, my friend. I think actually that's why we're talking so much about robotics now and the physical world and all of this stuff. And this was one of the things Karen Hao suggested in her book that I kind of disagreed with, which is, and I've seen this elsewhere, that humans, when we're born, have some innate instinct, intuition, knowledge of the natural world that gives us a leg up that a machine never can get; that we are somehow born with something that a machine cannot learn, whether it is from a large language model or the physical world. And I'm not sure that that's true. I think that that is kind of a religious explanation of human intelligence. And I'm not sure it's true. I don't know.

Leo Laporte [02:21:24]:
They know how to suckle, but they don't know how to speak or walk, or about persistence of objects. There's a lot of things they learn, mostly everything.

Jeff Jarvis [02:21:36]:
Line 149 is one of the papers I found in my trolling of arXiv, which is evidently a regular survey: "Learning Embodied Intelligence from Physical Simulators and World Models." And so this is the frontier that they're really going on. It's not worth reading it all, but if you go through and just look at all the different kinds of robotic models. Yeah, basic execution, being able to lift things, humanoid cognition, all kinds of things here. And it's. It's impressive what's going on to figure out the world models, and they're doing it through robotics. And I think that's going to. I think that's going to make a difference to the logic of AI in new ways, because it can verify.

Jeff Jarvis [02:22:15]:
We can say, oh, I tried that, and nope, you know, when you.

Leo Laporte [02:22:18]:
When you push the cup of pens off the table, it falls to the ground every single time.

Jeff Jarvis [02:22:23]:
And I'm a cat because I really.

Leo Laporte [02:22:24]:
Enjoyed it and I loved it. Paris, was there anything that you absolutely wanted to talk about?

Paris Martineau [02:22:33]:
We hit it all, I think.

Leo Laporte [02:22:35]:
Oh, that we did not. We didn't even come close. We should mention that your predecessor, Gina.

Jeff Jarvis [02:22:40]:
Trapani, has a. Paris. Sorry, the Ur-Paris.

Leo Laporte [02:22:45]:
The Ur-Paris. Well, she precedes Stacey Higginbotham as well. The original co-host, with Jeff and me, back when this was This Week in Google. Gina Trapani has a new blog she just started a couple of days ago called No Chance.

Paris Martineau [02:22:59]:
Is it this?

Jeff Jarvis [02:23:00]:
I think so. What does the project say?

Leo Laporte [02:23:03]:
She's a writer and programmer based in New York City, a recovering achiever. As we know, she started Lifehacker, and she worked with Anil Dash for a while at his company. She's not clear about what she's doing these days, but I love it that she's doing a blog. She was one of the original bloggers. I mean, that's what Lifehacker was. Yeah, yeah.

Paris Martineau [02:23:32]:
She does not use AI generated images in her blog.

Leo Laporte [02:23:36]:
She uses actual pictures of fish that she took.

Jeff Jarvis [02:23:39]:
Beautiful.

Leo Laporte [02:23:40]:
It's a very pretty fish.

Jeff Jarvis [02:23:43]:
She has. Oh, this is for you, Paris. If you go to her projects, there's the media menu, which is the movies she watches.

Leo Laporte [02:23:52]:
I think blogs should be like that. Very personal. Yeah, they all used to be just.

Benito Gonzalez [02:23:58]:
Personal blogs before there were social media. It was. All my friends had a blog and that's how I kept. Kept up with them.

Leo Laporte [02:24:02]:
Yeah, I like this. I might ruin. I might do the same thing. A media menu. I like that a lot because I.

Paris Martineau [02:24:09]:
Want to put up demon hunters, I.

Jeff Jarvis [02:24:13]:
Want to put up books, but I think I got to write a book report and I don't get around to it. This is great. You just say, hey, just do a list. You want to ask me about it, I'll tell you.

Leo Laporte [02:24:19]:
Yeah, I could actually.

Jeff Jarvis [02:24:20]:
She grades it, which is what I did at Entertainment Weekly.

Leo Laporte [02:24:23]:
Do either of you use a social. A book social network at all?

Jeff Jarvis [02:24:27]:
No.

Paris Martineau [02:24:27]:
Nope. I have a Letterboxd, though, just for movies. It's like a movie.

Leo Laporte [02:24:32]:
Yeah, Letterboxd is cool. And, you know, for a while I used Goodreads, but I don't want to be in the Amazon ecosystem. When Amazon bought them, I kind of said, yeah, never mind. I've lately been using Readwise, and there's a number of other ones that people use out there, and I keep looking at them, trying to figure out which one would be a good one. But that's what I really need, is not one just for books, which Readwise is. I like it because I can read books on my Kobo, and anything I highlight gets sent to Readwise from the Kobo. So it's a great way for me to read books for this show, because I can basically take notes and then it's saved by Readwise.

Jeff Jarvis [02:25:10]:
I need a media menu.

Leo Laporte [02:25:11]:
Yeah, I need one for all media, though.

Jeff Jarvis [02:25:14]:
Yeah, that's what she does. She gave a plus.

Paris Martineau [02:25:16]:
Yeah. This is great. I wonder how she does this.

Leo Laporte [02:25:19]:
She's a coder.

Jeff Jarvis [02:25:20]:
Just as her blog.

Leo Laporte [02:25:22]:
She can; she's a coder. Obsidian would let me do a kind of summary like this, and I think I could probably export it and post it on the blog. Right now, what I just do, and I kind of like this idea, on Leo.fm is I have a now page, which is just where I am right now, what I'm doing right now, what I'm thinking about, what my hobbies are. And I have some media stuff in there, but it's not. It's not as pretty as what she did. So I feel like that's a good personal.

Jeff Jarvis [02:25:56]:
Yeah, I'm glad we mentioned that personal thing.

Leo Laporte [02:25:58]:
Yeah, Gina's doing it.

Jeff Jarvis [02:26:00]:
Happy birthday, Tech Meme.

Leo Laporte [02:26:02]:
Tech meme is 20. Can you believe that?

Paris Martineau [02:26:04]:
That's a long ride home.

Leo Laporte [02:26:07]:
He doesn't do it at Techmeme anymore. Brian McCullough, who used to do the Tech Meme Ride Home, a regular on our network, has moved to another platform. I think he's. Nope. He doesn't put it on Techmeme anymore.

Paris Martineau [02:26:21]:
I was gonna say I thought I saw them there. Wow.

Leo Laporte [02:26:24]:
Yeah.

Paris Martineau [02:26:24]:
Must be mistaken.

Leo Laporte [02:26:26]:
But he still does the ride home. I wish I could remember now because I shouldn't plug it. I should give him a plug, right?

Paris Martineau [02:26:34]:
I mean, you've said the name of it. Is that not a plug? Find it wherever podcasts are.

Leo Laporte [02:26:41]:
Tech Brew. It's now the Tech Brew Ride Home.

Paris Martineau [02:26:45]:
Okay, that's probably why.

Leo Laporte [02:26:47]:
Yeah, it's the Tech Brew Ride Home, fairly similar. It's the same show, basically. You're watching Intelligent Machines. When we return, we will do our picks of the week. Prepare yourselves. Gird your loins.

Paris Martineau [02:27:07]:
I'm girding.

Leo Laporte [02:27:08]:
Are the loins girded? Paris, you should start your pick of the week. Oh, I like this.

Paris Martineau [02:27:14]:
One of my picks of the week is the website that allows you to explore over 7,000 historical watercolor paintings from the USDA Pomological Watercolor Collection.

Leo Laporte [02:27:27]:
Wow. This is beautiful. These are kind of like custard apples.

Paris Martineau [02:27:34]:
We love to see that.

Leo Laporte [02:27:36]:
These remind me of the Audubon bird paintings, but they're all fruit.

Paris Martineau [02:27:41]:
In 1886, the newly established Division of Pomology embarked on an ambitious project to hire artists to paint every significant variety of fruit in America in watercolor, which served as technical documents that kind of preceded what early color photography.

Leo Laporte [02:27:58]:
This is what avocados looked like in 1912. Aren't you glad?

Paris Martineau [02:28:02]:
Oh, my God.

Jeff Jarvis [02:28:03]:
We don't.

Paris Martineau [02:28:04]:
It's kind of beautiful, though.

Leo Laporte [02:28:05]:
It's sort of. But you can tell. I mean, fruit has changed a lot.

Paris Martineau [02:28:09]:
The USDA distributed these as lithographs in bulletins that farmers used for identification, with some deliberately painted in states of decay to show what diseases to watch for, or marked as maturity tests or studies in the effect of cold storage.

Leo Laporte [02:28:23]:
Boy, bananas didn't look good in 1919.

Jeff Jarvis [02:28:26]:
I would pay $3 million for that.

Leo Laporte [02:28:28]:
No.

Paris Martineau [02:28:28]:
So these paintings had vanished into a government storage facility for decades until digital activists freed them, first through a grant to digitize them in 2009, followed by Freedom of Information requests in 2015, and now they're here on this website.

Leo Laporte [02:28:49]:
You know this.

Jeff Jarvis [02:28:50]:
Bananas lost all this variety.

Leo Laporte [02:28:52]:
That's right. There's only one variety now. Banana. And, you know, these ones tasted much better. These are more like plantains. Look at that.

Jeff Jarvis [02:29:03]:
Whoa.

Leo Laporte [02:29:04]:
Yeah, this is beautiful.

Jeff Jarvis [02:29:05]:
Discovered that I love green bananas.

Paris Martineau [02:29:07]:
This is a project by Andrew Hopped, who describes himself as a fermentation enthusiast, apple junkie and software engineer who is starting Landrace Cider and Wine, which I assume is a cidery and winery.

Leo Laporte [02:29:20]:
So he did it himself. This is not from some government. Government agency.

Paris Martineau [02:29:24]:
No, this is. This is just his.

Leo Laporte [02:29:27]:
There are 3,788 pictures of apples, which is probably why he calls it the Pomological Art gallery. But there's 959 peaches, 445 pears, 352 plums, 270 grapes. There's 7,000.

Paris Martineau [02:29:44]:
There was only one limeberry, only one wampi, only one wineberry. Same goes for date, kiwi fruit, eggplant. Wow. They really weren't.

Leo Laporte [02:30:03]:
Yeah, eggplants were exotica.

Paris Martineau [02:30:05]:
Oh, they're so beautiful.

Leo Laporte [02:30:07]:
Oh, this is a beautiful eggplant. Yeah. Wow.

Paris Martineau [02:30:10]:
I don't know, it just makes me want to own every. Every copy of this.

Leo Laporte [02:30:14]:
Can we. Are we allowed to, like, use them in our blog?

Benito Gonzalez [02:30:18]:
They're over 100 years old.

Paris Martineau [02:30:20]:
Public domain. They're. Yeah, they're well over 100 years old.

Leo Laporte [02:30:24]:
And we the people.

Paris Martineau [02:30:25]:
Like, we the people paid for them. Yeah. You can, though, contribute if you want. There's a section called Contribute. While the core data in the paintings is solid, variety names, locations, crop types, there's room for improvement in capturing the rich contextual details. Many paintings include handwritten notes, and this information provides valuable historical context. But it's written in faded ink that you can't really get at using OCR or AI to extract.

Paris Martineau [02:30:57]:
He writes, if a lot of people are interested in helping out, I will make the data directly editable. But in the meantime, if you see something you want to improve, shoot him an email at hopped.andrew@protonmail. This is on the Pomological Art website, and he'll let you get started. Wow. Such a cool project that I saw and, I don't know, thought anybody could enjoy taking a gander at.

Leo Laporte [02:31:23]:
And they're. And they're just gorgeous. They're really, really, really pretty.

Paris Martineau [02:31:26]:
And, you know, I discovered this because someone was sharing it on Bluesky. That leads me to my other mini pick. Bluesky has bookmarks now. Yay.

Jeff Jarvis [02:31:34]:
Yes.

Leo Laporte [02:31:34]:
What does that mean?

Paris Martineau [02:31:37]:
It means that you can bookmark things like you would on Twitter. You couldn't before. Now there's a bookmark button.

Leo Laporte [02:31:42]:
I didn't even know that Twitter had bookmarks.

Paris Martineau [02:31:46]:
Yeah, Twitter has bookmarks, which is.

Leo Laporte [02:31:48]:
What does it do? It saves it. It saves it to a little bookmark.

Paris Martineau [02:31:52]:
It saves the. Oh, yeah, it's got a little bookmark thing. They go over there in your bookmark tab.

Leo Laporte [02:31:58]:
Nice.

Paris Martineau [02:31:59]:
And then you can go look at it later.

Leo Laporte [02:32:00]:
I so rarely want to save anything I read.

Paris Martineau [02:32:03]:
I will save it, like whenever I see stories that I end up reading that I want to bring up on the show or cool pomological art projects.

Leo Laporte [02:32:14]:
Nice. I have to say, Bluesky has become more and more like Twitter. I mean, really like the old Twitter, not the new Twitter.

Paris Martineau [02:32:23]:
It really is the sort of platform where, I think, you get what you put into it. Even if you spend just a little bit of time curating who you follow and the feeds you want to see, it has really become useful. And I feel like there's a great sense of community there.

Leo Laporte [02:32:40]:
I don't know if we did the story, but more and more scientists are moving to Bluesky.

Paris Martineau [02:32:44]:
Yeah.

Leo Laporte [02:32:44]:
Which is good. Actually, in Discover there's a science tab, and I think that's a really good thing to bookmark for yourself. Jeff Jarvis, pick of the week.

Jeff Jarvis [02:32:56]:
So I'll do a few things because that's the kind of guy I am.

Leo Laporte [02:33:01]:
First, start with just one.

Jeff Jarvis [02:33:03]:
Something that Leo has to get in his kitchen, line 183: a schnitzel press.

Leo Laporte [02:33:09]:
Oh, I do need a. I make schnitzel at least every month.

Jeff Jarvis [02:33:13]:
Wait till you see this.

Leo Laporte [02:33:14]:
Oh, beautiful thing. See, I pound it with a big hammer. Wow.

Paris Martineau [02:33:20]:
But just one press. One press.

Leo Laporte [02:33:24]:
This is the kind of thing I would buy. This is so limited in purpose and.

Paris Martineau [02:33:28]:
It would take up so much.

Leo Laporte [02:33:29]:
Yeah, it takes up all that space. What is that you have in the kitchen? Well, that's my schnitzel press. We're going to have schnitzel tomorrow. You know, I like pounding my schnitzel and I'm not afraid to admit it. I have a schnitzel hammer and I pound it. And it's kind of cheating, but when I buy the schnitzel, which I usually use pork for, I tell the guy to tenderize it, and the butcher in the back has a thing that he runs it through and it.

Jeff Jarvis [02:34:02]:
Puts holes in it.

Leo Laporte [02:34:04]:
Yeah, it makes it softer and so I only run it through once. And then when you pummel it, it really is nice.

Jeff Jarvis [02:34:09]:
Well, when I worked at Ponderosa Steakhouse.

Leo Laporte [02:34:12]:
Did you have a tenderizer or a schnitzel press?

Jeff Jarvis [02:34:14]:
No, but there was. It came in as Australian low-grade beef. You pulled the meat, as we called it, out of the freezer. It was perfect for teenage boys working in there. You took a defrosted steak and you bent it back, and there were needle marks every, like.

Leo Laporte [02:34:32]:
Oh, just tenderize it.

Jeff Jarvis [02:34:34]:
Quarter inch. Tenderize it.

Leo Laporte [02:34:35]:
Yeah.

Jeff Jarvis [02:34:36]:
It was not.

Paris Martineau [02:34:36]:
So in both of these instances, would you guys look at it and say, holy schnitzel?

Leo Laporte [02:34:41]:
Holy schnitzel. Ach du lieber, the schnitzel is so flat.

Jeff Jarvis [02:34:46]:
All right, on this show we want to mention a number of things. The NFL's debut on YouTube.

Leo Laporte [02:34:52]:
Oh, it's huge.

Jeff Jarvis [02:34:53]:
Three million viewers.

Leo Laporte [02:34:54]:
Yeah. Well, that changes the whole thing. Is that huge?

Jeff Jarvis [02:34:57]:
I think so.

Leo Laporte [02:34:58]:
It was very weird. We watched it. This was the Brazil game, and they had all these YouTubers streaming it, and they would cut away to a YouTuber, and the contrast between the professional football announcers, who are, I think, very accomplished, and the YouTubers was stark, shall we say.

Paris Martineau [02:35:21]:
So finally YouTubers are allowed to stream it? Is it not copyrighted? Is it not, like, the same?

Jeff Jarvis [02:35:27]:
Instead of freeing it up?

Leo Laporte [02:35:29]:
MrBeast. Oh, Michelle. Anybody can stream. Marques Brownlee, which will.

Jeff Jarvis [02:35:36]:
Get you to more. More people to watch it because you want to hear Marquez's nerdy take on it.

Benito Gonzalez [02:35:40]:
I'm guessing these people had a deal.

Leo Laporte [02:35:43]:
Oh, they had deals.

Jeff Jarvis [02:35:44]:
These people had deals.

Benito Gonzalez [02:35:44]:
You can't do this. If I streamed the NFL, it would get cut off.

Leo Laporte [02:35:48]:
Yeah, the one that baffled the announcers and everybody watching: they had a guy named Deestroying, D-E-E-stroying, and he did a little sideline report, and the announcers were just nonplussed. They went, oh, okay. It was interesting. Did you see it, Benito? Are you a football fan? You're probably not.

Benito Gonzalez [02:36:11]:
No, I don't watch football.

Paris Martineau [02:36:13]:
Yeah, yeah, but you've watched every Batman version.

Leo Laporte [02:36:16]:
They had watch-with streams and commentary from a stacked roster of creators, including IShowSpeed, Tom Grossi, Robba Grylls, Escobiche Kazi tv, and many more. So yes, it was on YouTube, and it was also being streamed on many, many YouTube streams.

Benito Gonzalez [02:36:40]:
So this is something Amazon learned from Twitch, I think, because Twitch used to do this all the time with other stuff.

Leo Laporte [02:36:45]:
Yeah, like, I mean, YouTube is Google. Google learned it.

Benito Gonzalez [02:36:49]:
Yeah, but wasn't the NFL on Amazon? It was on Amazon too, right?

Leo Laporte [02:36:54]:
No, no, no.

Jeff Jarvis [02:36:55]:
This was YouTube. Different deal. Bought by YouTube.

Leo Laporte [02:36:58]:
Yeah.

Jeff Jarvis [02:36:58]:
Oh, okay.

Benito Gonzalez [02:36:58]:
So, you know, same thing. It's all the streaming world. They all copy each other.

Jeff Jarvis [02:37:02]:
I was talking to a reporter today who said that after the Murdoch deal, where the three unhappy children were bought out for $3.3 billion, the business of Fox and sports is gonna be tougher because there are more competitors to get these games now.

Leo Laporte [02:37:19]:
Yeah.

Jeff Jarvis [02:37:21]:
So this is a show that cares about you and your health. And so here is.

Leo Laporte [02:37:25]:
Oh, I don't know.

Jeff Jarvis [02:37:26]:
Here is the.

Leo Laporte [02:37:28]:
No, this is more Howard Stern than intelligent machines.

Jeff Jarvis [02:37:33]:
Well, it got a lot of response on the socials, Leo. A lot of response. A doctor advises that you should set a two-TikTok limit to reduce hemorrhoid risk.

Leo Laporte [02:37:43]:
Do not sit and scroll, kids. It's just bad for you.

Jeff Jarvis [02:37:47]:
Or you'll be pro.

Leo Laporte [02:37:49]:
You know what I don't want to do is scroll down to the rest of this article. I have no idea what that picture is going to be, but I don't think I want to see it.

Jeff Jarvis [02:37:55]:
It's okay.

Paris Martineau [02:37:56]:
You know what's okay, though? Sitting and podcasting for three hours.

Leo Laporte [02:37:59]:
That's. Yeah, that's what you're doing. That's a good point.

Jeff Jarvis [02:38:02]:
We're getting hemorrhoids for you.

Leo Laporte [02:38:04]:
That's a good point.

Jeff Jarvis [02:38:04]:
That's our sacrifice.

Leo Laporte [02:38:06]:
So that's why we're going to wrap this sucker.

Benito Gonzalez [02:38:08]:
Actually, the important part about that fact is that your knees are below your waist. Because when you're on the can, your knees are above your waist if you're tall enough. And that's what. That's what.

Leo Laporte [02:38:20]:
What the hell? How short is your toilet?

Jeff Jarvis [02:38:22]:
That's only if you buy a Squatty Potty, which Howard Stern used to advertise.

Leo Laporte [02:38:27]:
Yes. Next week, Leo's changing the subject on the show. Nick Foster is the author of a book, Could, Should, Might, Don't. This will be interesting. This is a prescription, I guess, for AI: what we should do, what we shouldn't do, what we should stay away from, what the risks are, how we think about the future. It's not just AI, it's everything. Nick Foster.

Leo Laporte [02:38:56]:
Could, should, might, don't. So get to work reading that book, kids. And don't do it on the can unless you read a page at a time.

Jeff Jarvis [02:39:06]:
Two chapters.

Leo Laporte [02:39:07]:
Yeah. Yeah.

Jeff Jarvis [02:39:12]:
Sorry, Leo.

Paris Martineau [02:39:14]:
What a world.

Leo Laporte [02:39:15]:
What a world. What a world. Technology. Here's the good news. I was going to be going on vacation in a couple of weeks. We've mentioned that a couple of times. But the vacation is off, because so is the south side wall of our house. And Lisa quite wisely believes it's an unwise thing to do to leave the home with the south wall missing.

Paris Martineau [02:39:38]:
Especially after you've discussed on a streamed podcast you're going to be away from your home.

Harper Reed [02:39:44]:
That is wild.

Leo Laporte [02:39:47]:
Somebody will be home. That's all I can say.

Jeff Jarvis [02:39:50]:
Stay away, people.

Leo Laporte [02:39:51]:
Yes. So I am glad to say I'll be here for the remaining shows. I will miss one TWiT to go down and see Henry, and see Sandwich. I'm going to be staying in Providence, where my mom is, staying with my sister.

Leo Laporte [02:40:10]:
But I'm hoping to take the new Acela, you say Acela or Axela, from Providence to New York. I booked the Acela. I hope it's one of the newer trains. Everybody's excited.

Jeff Jarvis [02:40:23]:
Can you tell?

Paris Martineau [02:40:24]:
Can I do a photo shoot pitch for Hank right now? He gets two life size baguette slices, dresses in a large trench coat and then he's the sandwich.

Leo Laporte [02:40:37]:
I'm the sandwich now. You crazy people.

Paris Martineau [02:40:43]:
I just, I think, I think there's something there.

Leo Laporte [02:40:45]:
I'll, I'll mention it.

Paris Martineau [02:40:47]:
Thank you.

Leo Laporte [02:40:48]:
Yeah, we'll see you all next week. Jeff Jarvis, professor of journalistic innovation emeritus at the City University of New York, now at Montclair State University and SUNY Stony Brook. He's the author of The Gutenberg Parenthesis, now in paperback; Magazine, now an audiobook; and of course The Web We Weave. There they are. Thank you, Jeff. Always a pleasure. Paris Martineau, investigative reporter at Consumer Reports.

Leo Laporte [02:41:16]:
Always a pleasure to see you. Her website, Paris nyc. We do Intelligent Machines every Wednesday right after Windows Weekly, which makes it about 2 p.m. Pacific, 5 p.m. Eastern, 2100 UTC. If you're in our beautiful Club TWiT Discord, we've set it all up for you. You can watch there; think of it as behind-the-velvet-rope kind of access. But we also stream it live on YouTube, Twitch, TikTok, Facebook, LinkedIn, X.com, and Kick. So watch wherever you want. Chat with us. You don't have to watch live, you just can, because we also offer downloadable audio and video of the show.

Leo Laporte [02:41:59]:
That would be at twit.tv/im, for Intelligent Machines. There's a YouTube channel dedicated to Intelligent Machines, but if you go to the TWiT YouTube channel, youtube.com/twit, you'll see links to all of the dedicated show channels and a lot of extra content that we put up on the YouTube channel there. And of course, the best way to get the show, as with all our shows: subscribe in your favorite podcast player so you get it automatically the minute we're done cleaning it up. If you're not a member of Club TWiT, I hope you will join the club. It is very important to our continued survival. 25% of our operating costs are paid by club members like you.

Leo Laporte [02:42:38]:
It is a great way to show you support what we do and keep it going and keep it growing. Find out more at twit.tv/clubtwit. I'm Leo Laporte. Thanks for joining us. Welcome back, Benito Gonzalez. We are glad to have you back at the helm. And we will see you all next week on Intelligent Machines. Bye.

Leo Laporte [02:43:00]:
Bye.

Paris Martineau [02:43:00]:
I'm not a human being, not into this animal scene. I'm an intelligent machine.
