Transcripts

Intelligent Machines 857 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Leo Laporte [00:00:00]:
It's time for Intelligent Machines. Paris Martineau is here. Jeff Jarvis is here. We'll talk about Claude Code and Opus 4.6, why it's so much better. We'll try to even quantify exactly how. Paris makes an amazing discovery using Claude Code, Opus 4.6. And then how much power is too much power for an autonomous AI? The curious story of Bengt Badent coming up. I don't think that's how you say it.

Leo Laporte [00:00:25]:
Intelligent Machines is next. Podcasts you love.

Paris Martineau [00:00:32]:
From people you trust.

Leo Laporte [00:00:34]:
This is TWiT. This is Intelligent Machines with Paris Martineau and Jeff Jarvis, episode 857, recorded Wednesday, February 11th, 2026. TaskRabbit Arbitrage. It's time for Intelligent Machines, the show about AI, robotics, and the smart doodads all around you. And increasingly, by the way, although my co-hosts will disagree, I think it's the most important show we do on the network, because we're really covering what's about to change.

Paris Martineau [00:01:08]:
You think that we would argue this show's not important?

Leo Laporte [00:01:12]:
That's Paris Martineau.

Paris Martineau [00:01:13]:
As you can probably— yes, I'm Paris Martineau. I'm an investigative journalist at Consumer Reports. And as you can probably tell based on my tone, Leo's been in this meeting call for the podcast about 5 minutes and we're already fighting.

Leo Laporte [00:01:27]:
I apologize, by the way. Well, let me first say hello to Jeff Jarvis, Professor Emeritus of Journalistic Innovation at the Craig Newmark Graduate School of Journalism at City University of New York. Author of The Gutenberg Parenthesis and Magazine, and now an adjunct professor at Montclair State University, or fellow, or some sort of thing like that.

Jeff Jarvis [00:01:52]:
And Hot Type available for pre-order now.

Leo Laporte [00:01:54]:
Hot Type.

Paris Martineau [00:01:55]:
Get your Hot Type! Hot Type!

Leo Laporte [00:01:56]:
At JeffJarvis.com. All right. So I did want to apologize. Lisa said, "You were a little hard on Paris last week." And I said, "Was I really?" She said, "Yeah, you should apologize." So I listened back, and I was, a little bit. You know what? I felt like you were attacking my girlfriend and I just wanted to defend Claude, my new girlfriend.

Paris Martineau [00:02:17]:
It's all right. We, we all know that the reason why Lisa asked you that is she's worried that you're going to introduce a kind of third to your relationship and that will be Claude.

Leo Laporte [00:02:27]:
Yeah, Claude's the unicorn. Yep. I spent a lot of time up here in the attic with, with my new friend. Yeah, uh, in fact, I had uh, a, a tweet that I wanted to read.

Paris Martineau [00:02:41]:
Uh, a tweet?

Leo Laporte [00:02:43]:
A tweet. It was, believe it or not, a short tweet. And believe it or not, this was originally a tweet from Matt Schumer, who is, uh, a VC.

Paris Martineau [00:02:51]:
Tweet, he means a 5,000-word blog post that he wanted to read verbatim on air. And.

Leo Laporte [00:02:56]:
It's called— X has these— they're—

Paris Martineau [00:02:59]:
Articles. Jeff and I said that that might be too many words to read in one go.

Benito Gonzalez [00:03:03]:
Hey, this is, uh, Leo, could you give me your screen real quick?

Leo Laporte [00:03:07]:
Oh yeah, otherwise you couldn't see the tweet that I'm showing you right now.

Jeff Jarvis [00:03:13]:
It's more like a tweet. Tweet.

Leo Laporte [00:03:17]:
This is uh, Matt's, uh, Twitter account. He actually put it on his blog as well. And, you know, I probably shouldn't read it, because I attempted to get his permission to read it, but I have to subscribe to him to even talk to him. So I guess, I mean, I don't know what the permission status is. If it's on Twitter, I could probably just read it.

Paris Martineau [00:03:36]:
It's on his blog. He, it's definitely supposed to be read.

Leo Laporte [00:03:40]:
Yeah. I won't read the whole thing.

Paris Martineau [00:03:43]:
I can summarize it. He says the trends we're seeing in AI right now are very similar to being in February 2020 on the precipice of the COVID pandemic. Everything's about to explode. Jobs are going to be decimated. Uh, everybody should be really, really worried, and everybody should be using AI to do everything they can right now. And that he thinks it's very, very important.

Leo Laporte [00:04:11]:
Yeah, one of the reasons I wanted to read this—

Paris Martineau [00:04:13]:
A take we've never heard before.

Leo Laporte [00:04:15]:
Well, I know you've heard it from me over and over and over again. Uh, I think we are on— and I think Matt thinks so, and a lot of the people who are really, you know, closest to this think we are on the precipice of something huge. Well, just to cite one of the people—

Paris Martineau [00:04:30]:
He's like, oh, well, you know, the CEO of Anthropic said that AI could take 50% of white-collar jobs. And I'm like, wow, the CEO of a major AI company is suggesting that his product could be used to take over 50% of white-collar jobs? Sorry, you can continue.

Leo Laporte [00:04:48]:
I fear that you're defending the ancien régime, and that you are going to be shocked, shocked, I tell you, when this all changes dramatically in the next few months. This is going to be a very exciting and, I think, very interesting year, and I think dramatically different. And I'll give you some of the data points. We can skip reading all of this. I'll give you some of the data points.

Jeff Jarvis [00:05:16]:
Read what you like.

Leo Laporte [00:05:17]:
No, no, no.

Paris Martineau [00:05:18]:
Read 4,700 words.

Leo Laporte [00:05:21]:
The only thing that I would say, and the point of reading this, was to get to the end, which is the advice part of it. And I think the advice is— look, you guys, you don't have to take it. Jeff, you're old. Paris, you have a good job.

Paris Martineau [00:05:41]:
Jeff, you're zooming in from a bed right now.

Leo Laporte [00:05:44]:
But I'm very excited about what's happening. And I'm actually kind of relieved that I'm going to get to see a little bit of this. I don't know about 50% of the white-collar jobs, because I do think there will still be white-collar jobs. There is a role for humans in this. I think the difference is what the role is. And this is in so many areas. He talks about legal work, financial analysis, writing and content. He does include journalism in this, software engineering, that's obvious.

Leo Laporte [00:06:16]:
What we're seeing already is happening in software engineering.

Jeff Jarvis [00:06:19]:
That's, Leo, I think that's the key to it. I think that the people making all this thought they were going to affect everybody else's jobs. And the last two months— this is his thesis. So listen, in the last two months or so, I think what we've seen, with what you have shown us, especially with Claude, is that it's the coders who've woken up first and said OMG.

Leo Laporte [00:06:40]:
And his point, Matt Schumer's point in this, is that these companies, um, made these decisions to focus on code for a good reason, because the first thing you do is you get it writing its own code, then it can self-improve, then you can focus on all of these other things.

Jeff Jarvis [00:07:01]:
In fact, I think it's more reliable for them. I think that it's less than just a general— it is the low-hanging—

Leo Laporte [00:07:07]:
Fruit, but it also is the engine that powers a transformation in every other realm. And I think it would be a mistake for you to say to yourself, "Well, I'm safe. It's going to be okay." Or, you know, one of the things he points out is people using even models from 2 months ago and saying, "Well, it's not that good. It hallucinates. It's not that good," are missing the real change that's happening right now. So that's— I won't belabor it. The most important thing that he talks about here is what you should do. If you don't buy this, you should keep doing what you're doing.

Leo Laporte [00:07:48]:
You might be making a mistake. You might be right.

Jeff Jarvis [00:07:50]:
No, I think we should all change to some level.

Leo Laporte [00:07:53]:
If you buy this, then there are things you should do. One of the things is something I mentioned to you, Paris, in— I know I was overheated last week, but it's a good thing. He says start using AI seriously and not just as a search engine. Don't just use Perplexity. Sign up for the paid version of Claude or ChatGPT. The $20 a month is fine to start with. He says two things matter right away. Make sure you're using the best model available, in this case 5.3 with OpenAI and, uh, Opus 4.6 with Claude.

Leo Laporte [00:08:25]:
These apps often default to a faster, dumber model. Dig into the settings of the model picker, select the most capable, uh, and then he says if you want to stay current on which model is best, you can follow him. Okay. He tests them. I think there are plenty of places you can go to figure out what the best is.

Jeff Jarvis [00:08:44]:
Like right here, Leo will tell you.

Leo Laporte [00:08:46]:
Yeah. Yeah. I'll tell you what the best is. That's right. He says, don't just ask it quick questions. That's the mistake most people make. They treat it like Google, like a search engine, and then wonder what the fuss is all about. Push it into your actual work.

Leo Laporte [00:09:00]:
If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. Now, I think sometimes people in a defensive mode— if you're a lawyer in a defensive mode, you're going to say, I'm not going to do that. Look at how many people got in trouble by, you know, providing pleadings to the court that were full of fake references and so forth. No, no, just kind of take—

Jeff Jarvis [00:09:26]:
A leap of faith. That's fine.

Leo Laporte [00:09:28]:
Take a leap of faith. Don't file it in court.

Jeff Jarvis [00:09:31]:
Don't file it in court, but see what you might miss.

Leo Laporte [00:09:33]:
But the point is, give it something hard. If you're in finance, give it a messy spreadsheet, ask it to build the model. If you're a manager, paste in your team's quarterly data, ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. That's the best way to judge it, by the way, something you already have domain expertise in. And then he said, this might be the most important year of your career, work accordingly. I don't say that to stress you out, I say it because right now there's a brief window, I think this is really true, I believe it, you guys may not, where most people in most companies are still ignoring this.

Leo Laporte [00:10:15]:
The person who walks into the meeting and says, I used AI to do this analysis in hours instead of 3 days, is going to be the most valuable person in the room. Not eventually, right now. Learn these tools, get proficient. I would extend this. We've been telling for years, mistakenly it turns out, parents to get their kids into coding camp, get 'em to, if they're interested in computers, have 'em learn computers, have 'em learn to code. I would say these days, if you have a smart 16-year-old or even a 12 or 13-year-old, buy 'em a $20 account with Claude. You might say, oh no, you know, I know he's going to get depressed and it's going to be bad. You know, if you need to do it with him, in fact, that'd be a great thing. You'll be amazed because the kids don't have the same preconceived notions.

Leo Laporte [00:11:02]:
Just as I, when I was a kid, learned, as most of you who are watching, learned to code, you know, by playing with BASIC and typing in programs and stuff. Get them playing with this now because the skills that they learn, the most important skills are the skills of how to direct these AIs. And that's why I don't think all these white-collar jobs will necessarily be lost. They'll be— they may be lost to the people who know how to do this, though. The people who know how to manage and direct AIs. He says, if you're early enough, this is how you can move up by being the person who understands what's coming and can show others how to navigate it. But that window won't stay open long. Once everyone figures it out, the advantage disappears.

Leo Laporte [00:11:43]:
And I would say if you're— if you've got a young person, a college kid, a high school kid, a smart kid who's interested in technology, give them a leg up, because they're going to need these skills. They're going to need these skills. Anyway, we don't have— I didn't have to read all 5,000 words. I just think we are in a very— the most disruptive era in technology I've seen. Absolutely. And it's happening very, very fast. And I have some data points, which we'll talk about after the break that will give you some idea of how quickly this is happening.

Paris Martineau [00:12:18]:
So inspired by your— I mean, this whole post, I put it into Claude Opus 4.6 Extended and asked— well, I said, Leo, who is practically an AI accelerationist at this point, is reading this out loud today and asked us to prepare rebuttals in the spirit of this overly simplified, in my opinion, somewhat naively bullish blog, do your worst, Claude. And it wrote a very, very long research report that I'm not going to bore you guys all with. But there are a couple of points I will shout out here. One, just because we just talked about this, is the seventh claim, the advice, like "spend an hour a day experimenting." Claude writes, this is where the post quietly undermines itself. If AI were truly about to do to white-collar work what COVID did to in-person dining, which is the explicit comparison made in this blog, then "spend an hour a day experimenting" is laughably inadequate. You don't tell someone to casually experiment with pandemic preparedness when the pandemic is two weeks away. The modesty of the advice contradicts the extremity of the prediction.

Paris Martineau [00:13:36]:
What Schumer is actually describing, stripped of the apocalyptic framing, is a technology that is very useful, improving quickly, and will probably change a lot of jobs over the next decade, which is correct but not novel and not like a COVID-level, uh, giant event that is imminent. I mean, we've had a version of this discussion saying like these giant world-changing events are 2 to 6 months out so many times over the last year or two of doing this show. And then the last point in Opus's write-up of this, which I think is— it's titled The Meta Critique: Use This if Leo Pushes Back, which I think is just a very funny thing, writes, Schumer's post is a genre piece. It's the 'I need you to understand what I understand' post. Personal revelation, exponential trend, extrapolation, dire warning, call to action. This genre recurs with every major technology wave. The people who wrote the equivalent post in 1995 about the internet were right about the big picture and wrong about almost every specific prediction. The people who wrote it about crypto were mostly just wrong.

Paris Martineau [00:14:44]:
The question with AI isn't whether it matters— it obviously does— but whether the specific doom and urgency framing is warranted by the current evidence. And the current evidence from this week alone, because I've put this in a chat where I also then prepared a briefing book based on everything that's in our rundown, um, so Claude writes, is that AI benchmarks don't predict real-world outcomes, which we get into a Nature Medicine piece down there, that AI agents violate their own ethical constraints when pressured, that AI productivity gains intensify work rather than reducing it, which is a piece from Harvard Business Review, that AI-attributed layoffs are frequently reversed, and that AI's own creators can't even reliably predict what it will do next. This is not— this seems overblown. It's that the ground is shaking and we don't yet know whether it's an earthquake or a volcano, which I think is a pretty, I don't know, measured counterpoint.

Leo Laporte [00:15:37]:
Yeah, actually, it's a good analysis, and.

Paris Martineau [00:15:39]:
It's written by Claude, so you can't get mad at me, because you believe everything that Claude says. Because Claude's right.

Jeff Jarvis [00:15:45]:
Claude's evil.

Leo Laporte [00:15:45]:
I don't believe everything that Claude says. So, uh, okay, um, I would suggest— I think it'd be more valuable for you, Paris, to actually give it something to do that is serious, like something that really makes sense. Okay. So I think that's the opportunity. You know, one of the things Claude is really good at is financial analysis. I know that's one of the things you need to do. Really give it something that's hard and see how it works. One of the things 4.6 does, and we're going to talk about this a little after the break.

Leo Laporte [00:16:19]:
Oh, I should have mentioned, I didn't mention, we don't have an interview this week. That's why I was going to read this 5,000-word piece. Next week, Guy Kawasaki will be joining us. So that'll be fun. Apple evangelist. He's got a new book, and it's actually a very appropriate book. It's how to protect your privacy and how to use secure messaging. It's a very timely book.

Leo Laporte [00:16:40]:
And I asked him, did AI write this? He said no. So that will be fun next week. But this week I thought we could talk about this topic because I'm going to read the last bit on this because I think it's really important to hear it. And if you don't want to hear it or it doesn't make sense to you, that's fine. But I would encourage you to listen to this with an open mind. Not you particularly, I'm not speaking to you, Paris, or Jeff necessarily, but our listeners. He writes, and this is again Matt Schumer, "I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.

Leo Laporte [00:17:21]:
I know the next 2 to 5 years are gonna be disorienting, in ways most people aren't prepared for. This is already happening in my world. It's coming to yours. I know the people who come out of this best are the ones who start engaging now, not with fear, but with curiosity and a sense of urgency. And I know that you deserve to hear this from someone who cares about you, not from a headline 6 months from now when it's too late to get ahead of it. We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.

Leo Laporte [00:17:57]:
It's about to. And I would say I agree with this 100%. And I think, listen to the people, you know, who are really on the cutting edge of this, who are really using this, because everybody I talk to who is really using Claude Code agrees with this, says you have no idea how this is changing everything.

Jeff Jarvis [00:18:21]:
Now, so can I come in?

Leo Laporte [00:18:23]:
Now, you can disagree with it, and that's fine, but that's my position, and I've been doing this for 40 years. That's all.

Jeff Jarvis [00:18:31]:
So I think there's a difference in the last few months, which is that LLMs were seen as textual tools or toys. Uh, I think even the technology people saw it as something cute and interesting. Then especially Anthropic shifted, though this was their emphasis for quite some time, but their success rate at dealing with code shifted particularly. I think we have now a bifurcation here. There's a consumer market that's still out there and waiting, that is delayed now. The code market is where the progress is. The coders who at first thought, "this is going to affect others," are saying, "OMG, this affects us greatly, and you got to listen to us about how much this affects us. And if it affects us this much, it's going to affect all of you this much." Well, to some extent, that's true.

Jeff Jarvis [00:19:25]:
To some extent, code is easier because it's its own domain. It's more finite in what it does, and it's what the coders understand. And, you know, you being able to write amazing applications, even in a language you don't use, and having it work is very impressive. But that doesn't necessarily extrapolate to everybody else in every other field. And the market has confused this too, because in a sense, you look at the last two panics, one around legal and the next one around financial stuff, the fact that everybody's stock went down kind of makes no sense. Either, wow, everybody who's doing this is going to be far more efficient, so their stock should go up because productivity is going to go up, or, oh my God, these service companies are going to get eliminated, but all their customers are going to be able to do this on their own, and their stock should go up. Instead, there was a generalized, uh-oh moment all around the market. And I don't think we have— we don't have the data, we don't have the experience, we don't know where this goes.

Jeff Jarvis [00:20:27]:
I think that when this gets really impressive, and this is where you and I have disagreed, Leo, is not when you can be in terminal mode doing something impressive. That's impressive. I'm not arguing with that. But I think it scales when I can do this without terminal mode and have it do it. Jason Howell and I were talking earlier today, you get to a point of looking at disposable software, disposable code. It does something for me once and it's gone. It changes the value of code immensely. It rethinks things.

Jeff Jarvis [00:20:54]:
But it doesn't mean that everybody out there in the world is going to make all of their own code. I can't make a medical application. I can't make a financial application. I can use it to do things. I can use it to analyze. And it's going to have an impact, and I'm not arguing with that at all. So, I think that what Paris said in her analysis is right. In '95, the long-term was right, the short-term was wrong. And we'll go back to steam power, electricity, the amplifier, the transistor. These are all similar things.

Jeff Jarvis [00:21:28]:
We've been here before. These things had immense, immense impact on society, and on jobs, and on the economy. This stuff can very likely have such immense impact. But to act as if we've arrived suddenly, boom, and we're there, I think is a bit naive, because we don't know what it is yet. We don't know how it's going to get used yet. We don't know what it can really do well yet. Should we use it? Is he right? Yes, we should. Should we do more than experiment with it and challenge it? Absolutely agree.

Jeff Jarvis [00:21:57]:
But is the moment for triumphalism that this changes everything? I don't buy that.

Leo Laporte [00:22:02]:
I think the biggest mistake you can make in this is say it's like anything else that's ever happened before. And I think that that's the analysis.

Jeff Jarvis [00:22:10]:
That's the hubris of the present tense.

Leo Laporte [00:22:12]:
That's the analysis both of you are adopting, and most people adopt, because of course that's what we do as humans. We analogize. It isn't history. It isn't going to, like, be that. It's not going to be like that. I mean, it's fine. I will take the view that 99.99% of—

Paris Martineau [00:22:32]:
The time, which is that current and future trends are going to be similar to trends we've seen in the past, that is always, almost always going to be the correct bet. And I think the transition was gigantic.

Jeff Jarvis [00:22:46]:
It has changed things gigantically. Imagine all that happened because of the transistor, right?

Leo Laporte [00:22:53]:
Okay, if we— if you said— I told you, I just warned you, if you don't want to believe it, you don't have to believe it, uh, but there is a tsunami.

Paris Martineau [00:23:00]:
Why do you feel so personally offended?

Leo Laporte [00:23:03]:
I'm not. I'm just— I'm trying to tell you something. If you don't want to hear it, you don't have to hear it. There is a tsunami coming. It is going to be massive and it's going to happen this year.

Paris Martineau [00:23:14]:
And if— You said that last year also, though.

Leo Laporte [00:23:18]:
Well, it's been accelerating. I mean, look, it was pretty clear last year that we weren't there yet.

Paris Martineau [00:23:25]:
I think— Not according to you in this podcast early last year.

Leo Laporte [00:23:29]:
No. Well, I mean, I could see this coming, but it really is coming a lot faster. It really is. We're going to take a break. We'll come back and I will give you a couple of data points, you know, for why I think 4.6, for instance, is a remarkable shift. We are talking about something very exciting. Whether it's going to change the world or not, we could disagree. It's Intelligent Machines with Paris Martineau, Jeff Jarvis, brought to you today by a new sponsor.

Leo Laporte [00:23:59]:
Want to welcome Modulate. This is actually a really cool product. Every day, enterprises generate millions of minutes of voice traffic. I'm talking things like customer calls, agent conversations, even fraud attempts. Most of that audio is still treated like text, flattened into transcripts, stripped of tone, intent, and risk. Modulate exists to change that. First proven in gaming, this is where Modulate started. Modulate's technology supported major players like Call of Duty and Grand Theft Auto.

Leo Laporte [00:24:38]:
In separating playful banter. We've talked with Paul Thurrott about how, you know, you get on, you've got your headphones on, you're playing Call of Duty, and, you know, the trash talk begins. Some of that's just harmless playful banter. Some of it is intentional harm at scale. And that's what Modulate has been doing for those companies, helping them distinguish the two. Today, Modulate helps enterprises, including Fortune 500 companies, understand 20 million minutes of voice every day by interpreting what was said and what it actually means in the real world. This capability is powered by Modulate's newest model, ELM Velma 2.0. It's their newest ELM, that's what they call it.

Leo Laporte [00:25:21]:
Velma, I love the name, 2.0. Velma is a voice-native, behavior-aware model. This is really important. It's an ELM, a voice-native, behavior-aware model built to understand real conversations, not just transcripts. It orchestrates 100+ specialized models, each focused on a distinct aspect of voice analysis to deliver accurate, explainable insights in real time. Velma ranks number 1 across 4 key audio benchmarks, beating all large foundation models because it's designed to do this, right, specifically in accuracy, cost, and speed. It's number 1 in conversation understanding. It's number 1 in transcription accuracy and cost. It's number 1 in deepfake detection.

Leo Laporte [00:26:08]:
And then the final one is the hardest one, and it's really good at that— number 1 in emotion detection. Built on 21 billion minutes of audio, Velma is 100 times faster, cheaper, and more accurate than LLMs at understanding speech, including the best— Google Gemini, OpenAI, and xAI. Most LLMs, they're just black boxes. Velma doesn't just assess a conversation as a whole, but breaks it down for greater accuracy and transparency by producing timestamped scores and events tied to moments in the conversation, meaning you can see exactly when risk rises, when behavior shifts, or intent changes. With Velma, you can improve your customer experiences, reduce risks like fraud and harassment, detect rogue agents, and more. Go beyond transcripts and see what a voice-native AI model can really do. Go to Modulate's live ungated preview of Velma, preview.modulate.ai. That's preview.modulate.ai to see why Velma ranks number 1 on leading benchmarks for conversation understanding, deepfake detection, and emotion detection.

Leo Laporte [00:27:19]:
preview.modulate.ai. Thank you, Modulate, for joining us. I think that's the other side of what we were just talking about. We're going to see more and more sponsors and ads and stories talking about technologies and capabilities well beyond what we kind of think of when we talk about—

Jeff Jarvis [00:27:39]:
I also love the brand Velma.

Leo Laporte [00:27:41]:
Isn't that great? Wasn't that great? She was the smart one in Scooby-Doo, right? Velma had the glasses. I think so. Yeah, that's where that comes from. Yes. So a couple of things happened last week, and, and you know, my experience with 4.6 was immediately very impressive. It just came out this— the day we did the show last week, and I mentioned one of the things I did was I went through 46 PDFs of real estate contracts. I was looking for one little tiny piece, the proverbial needle in a haystack, and 4.6 did it. Previous models would not have been able to do it.
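
For anyone who wants to try the kind of long-context, needle-in-a-haystack search Leo describes, here is a minimal sketch. It assumes the pypdf package and the Anthropic Python SDK are installed with an API key set in the environment; the folder path, the question, and the model name are placeholders, not anything from the show.

```python
# Sketch: dump the text of a folder of PDF contracts into one long prompt and
# ask a large-context model to find a specific clause. The "contracts" folder,
# the question, and the model name are illustrative placeholders.
from pathlib import Path

import anthropic
from pypdf import PdfReader


def pdf_text(path: Path) -> str:
    """Concatenate the extracted text of every page in one PDF."""
    reader = PdfReader(str(path))
    return "\n".join(page.extract_text() or "" for page in reader.pages)


# Gather all contracts into a single labeled blob of text.
contracts = ""
for pdf in sorted(Path("contracts").glob("*.pdf")):
    contracts += f"\n\n=== {pdf.name} ===\n{pdf_text(pdf)}"

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-opus-4-6",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Below are several real estate contracts. Find every clause that "
            "mentions an early-termination penalty, and cite the file name "
            "for each one.\n" + contracts
        ),
    }],
)
print(response.content[0].text)
```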

Leo Laporte [00:28:16]:
Part of the reason 4.6 is good at this is it has a massive 1 million token context window. That's one of the biggest changes. Anthropic, when they released 4.6, and I kind of at first poo-pooed this, announced that they had put 4.6 to work writing a C compiler, a Rust-based C compiler, from scratch. It cost $20,000 in API costs, 2,000 Claude Code sessions. They wrote a 100,000-line C compiler that was able to build the current Linux kernel on 3 architectures, x86, ARM, and RISC-V, successfully build it. Now, at first I thought, well, okay, but it's not a fine-tuned C compiler. It's not as good as GCC or the state-of-the-art GCC. That wasn't the point of it, as it turns out.

Leo Laporte [00:29:09]:
What they were really testing was the ability for Claude to run unattended for a length of time. The previous records had been half an hour, then an hour. I mean, last year we were happy if Claude could run half an hour without hallucinating or breaking down. This went for 2 weeks nonstop, unattended. 2 weeks to build 100,000 lines of code. Now, and $20,000 sounds like a lot of money, but honestly, the teams that it would take, the cost it would take to really write a C compiler would be much, much higher than that. So I think that what— this is one data point I want to give you, which is the rate of improvement is not linear. The rate of improvement now is starting to hockey stick.

Leo Laporte [00:29:56]:
This is a massive rate of improvement. It's a massive number of tokens. You know, it's 5x the tokens of the previous model. So we're starting to be in a situation where we're accelerating improvement. Another data point, and then I'll let you talk. One more data point. They also sat down with Opus 4.6. They gave it, in a sandboxed environment, they gave it a couple of basic tools.

Leo Laporte [00:30:24]:
They gave it a Ghostscript— no, I'm sorry, Python. Is that me or was that you? Is somebody coming here?

Paris Martineau [00:30:34]:
Okay, it's Claude.

Leo Laporte [00:30:36]:
Claude wants to get in on this.

Paris Martineau [00:30:38]:
The call is coming from inside.

Leo Laporte [00:30:39]:
Yeah, very basic tools, access to Python and vulnerability analysis tools, debuggers and fuzzers, no specific instructions or specialized knowledge. Gave it access to a bunch of open source, um, tools that are widely used, used by millions. Claude was able to find 500, more than 500 previously unknown vulnerabilities just out of the box. These were flaws, unknown flaws in, for instance, Ghostscript, which almost all open source systems use to process PDF files. Buffer overflow flaws in OpenSC, in CGIF. This is a breakthrough. The ability to find— this is finding holes in code that has been looked at, vetted by security people. This is not just random open source projects. These are widely used open source projects that have been carefully vetted for years for security flaws.

Leo Laporte [00:31:38]:
They found 500 of them. That's another amazing use. Now, in both of these cases, I know that, well, it's code, right? Big deal. Okay, makes sense that this would be good at code, right? But you got to— everything's based on code nowadays. You got to start with code. If you can have Claude self-improve, if you can have Claude accelerate its improvement rate and get better and better, it's going to get better and better at everything eventually. So that's my answer to what you said before the break, Jeff, is yes, it's not impinging yet. It's not writing novels or operas or anything else human and creative, but it's improving so fast at code, I think that's next.

Leo Laporte [00:32:25]:
All right, I'm going to stop. I'm going to stop. I'm gonna get down off my, as I said, my Paul Revere horse.

Paris Martineau [00:32:32]:
Moore's Law for intelligence. That's like that assumption.

Leo Laporte [00:32:36]:
Like, well, that would be a big deal. Moore's Law for intelligence would be a huge deal.

Paris Martineau [00:32:40]:
No, but it's like, that's the assumption you're making. Like, it's unfalsifiable by design. Like, any plateau can basically be dismissed as temporary because you're like, oh, it's just always— the line's always going to go up.

Leo Laporte [00:32:54]:
Okay. That's not really a rebuttal. That's just saying, no, it's not going to happen. But I have evidence it's happening. I mean, it is happening. This is massive improvement over very— over 2 months.

Jeff Jarvis [00:33:07]:
I watched an hour-long presentation by Yann LeCun this week at a world model conference. And he's doing his thing. And of course, he argues that LLMs, though amazing, do hit a wall. And that's part of what the argument is in the post you read, is whether there's a wall or not. But you see, one line really stuck with me, and this is his argument in favor of world models, in favor of building many intelligences, is the way he puts it. He said, you can't tokenize the world. And so he tries to deal with something that's much bigger. He tries to deal with concepts.

Jeff Jarvis [00:33:47]:
Rather than the next pixel, rather than the next token and the next word. And so I think there is something bigger, much bigger, that can be out there. His models are yet unproven, his theories are yet unproven. Well, they're proven to an extent, but to that level, I don't think— this is the funny thing about this debate we go through on the show. I think I can speak for Paris in saying we're all amazed by these tools. Paris uses them, is amazed by them. We're not denigrating the tools. The only question we have is that extrapolation and that presentism of believing that somehow this is bigger than steam and bigger than the transistor.

Leo Laporte [00:34:27]:
You think that's impossible?

Jeff Jarvis [00:34:30]:
No, I'm not saying it's impossible.

Paris Martineau [00:34:31]:
I think it's unlikely. Okay, it's unlikely.

Leo Laporte [00:34:33]:
I agree it's unlikely.

Paris Martineau [00:34:35]:
It doesn't deserve to be the dominant— is it impossible? I don't think so. I think that it is unlikely, and thus it does not deserve to be the predominant narrative in every conversation around technology.

Leo Laporte [00:34:46]:
Well, I think we could—

Jeff Jarvis [00:34:49]:
Argue that the transistor was that. That indeed the transistor— if you go back to 1920 and then try to imagine the transistor, this thing this big with a huge hunk of silicon, that it was— now there are trillions in a given rack and what all it can do. You consider all of the changes that came from the— from steam or the transistor.

Leo Laporte [00:35:14]:
All right, you know what, I don't need to push against you on this. It's fine, uh, we have differing opinions. I don't need to push back on it. Well, what— well, we'll know. I, you know, if I'm wrong, uh, it'll be pretty obvious, you know, at the end of this year that nothing happened.

Jeff Jarvis [00:35:29]:
We're not saying nothing's gonna happen.

Paris Martineau [00:35:30]:
Yeah, we aren't— we aren't arguing that nothing is gonna happen. We're just saying there should be some, like, measured skepticism of technologies, especially when they're accelerating at this rate.

Leo Laporte [00:35:41]:
I have a whole section of this show of stuff that's gone wrong with AI. You'll be very happy for that section. Uh, one of the things—

Jeff Jarvis [00:35:52]:
Doctor, are you using AI? Uh, doctor, can you talk to me about that first?

Leo Laporte [00:35:57]:
Actually, one of the things I did—.

Paris Martineau [00:35:58]:
We need to talk about the medical research.

Leo Laporte [00:36:00]:
One of the things I did: there's a new app that came out for the iPhone that finally extracts all the data from Apple Health. Apple Health has been collecting all this data. Apple has been saying, oh, we're going to do, or they're rumored to be doing some sort of AI health thing. And they've backed down on that, apparently. Mark Gurman, who is the Apple rumor monger, says, yeah, they decided they weren't ready to do that yet. And it was a little frustrating, because Apple Health has a huge amount of data, not only data it's collected from all the weird devices I wear, and my scale, and my blood pressure meter, and my continuous glucose monitor, my Oura Ring. But it also has all my medical information, because I've uploaded all of my medical records from my health provider to it. So I found an app that actually extracts it all and puts it all in Markdown files.

Leo Laporte [00:36:50]:
So now I have 6 years of day-by-day information about how many steps I took, how much sleep I had, what my blood pressure was, all of this stuff. And I can't wait to massage this with AI because I think there might be some very interesting insights there. There's a lot of good data there.
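
Apple Health's own export produces a big export.xml of timestamped Record entries, so a rough sketch of the kind of Markdown extraction Leo describes (not the actual app he used, which he doesn't name here) might look like the following. The step-count record type is Apple's standard identifier, while the file paths and the one-metric-per-day layout are just illustrative choices.

```python
# Sketch: summarize Apple Health's export.xml into one Markdown file per day.
# Only step counts are tallied here to keep it short; other Record types
# (sleep, blood pressure, etc.) would be handled the same way.
import xml.etree.ElementTree as ET
from collections import defaultdict
from pathlib import Path

STEP_TYPE = "HKQuantityTypeIdentifierStepCount"

daily_steps = defaultdict(float)
# iterparse keeps memory flat even though export.xml can be hundreds of MB.
for _, elem in ET.iterparse("export.xml", events=("end",)):
    if elem.tag == "Record" and elem.get("type") == STEP_TYPE:
        day = elem.get("startDate", "")[:10]  # "YYYY-MM-DD"
        daily_steps[day] += float(elem.get("value", 0))
    elem.clear()  # free each parsed element as we go

out = Path("health_markdown")
out.mkdir(exist_ok=True)
for day, steps in sorted(daily_steps.items()):
    (out / f"{day}.md").write_text(f"# {day}\n\n- Steps: {int(steps)}\n")
```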

Paris Martineau [00:37:07]:
Did you read the story about the Washington Post reporter? I don't know whether it's one of the ones that has since lost their job. A Washington Post reporter did this.

Leo Laporte [00:37:13]:
It was Jeffrey Fowler, yeah.

Paris Martineau [00:37:16]:
And it told him that he, like, had terrible heart health and needed to see a doctor immediately. And then a doctor reviewed the same data and was like, nah, that's totally wrong.

Leo Laporte [00:37:24]:
Yeah, that was GPT, the GPT Health app that he used. And that was Jeffrey Fowler, who is now out of work. But I hope his heart is fine. I hope his ticker is fine.

Paris Martineau [00:37:33]:
According to the doctors, it is.

Leo Laporte [00:37:34]:
So let's talk about investment because one of the things we saw, quarterly results from a number of companies. Amazon is increasing its investment in Anthropic to $61 billion, which is a 7-fold increase. That's interesting. I wonder if Amazon's thinking, Maybe our models and our work isn't going to be that successful. Let's, uh, let's pick a winner in this horse race.

Jeff Jarvis [00:38:00]:
Well, Amazon wants to cut down on labor. Obviously they're doing that. A, B, I think that Anthropic— everything you're saying points to Anthropic being the winner right now.

Leo Laporte [00:38:10]:
I'm not sure that that's going to always be the case. Yeah, yeah, it's not clear. OpenAI's 5.3 is very good. Codex is very good. In fact, there are a number of people who think it's a better coder.

Jeff Jarvis [00:38:22]:
I think OpenAI is stuck in the chat motif, whereas Claude is saying, hmm, no, this is about making things. This is about doing things. That's a step change.

Leo Laporte [00:38:34]:
Software, computer, and data center spending is now over $1 trillion a year. That's Defense Department budget numbers right there.

Jeff Jarvis [00:38:45]:
Compare that to the spending on fiber in the early days of the internet. Not in dollar amount.

Leo Laporte [00:38:51]:
Well, if you compare it in dollar amount, it exceeds it hugely.

Jeff Jarvis [00:38:54]:
I know that, but in terms of usefulness, useful at the time versus used later.

Leo Laporte [00:39:00]:
I think there's a reasonable race right now, because the belief is that there are a lot of people who think the way I do, that this is going to be so transformative that you want to be at the forefront of this because the upside is so huge. And certainly investors are thinking that. Um, spending on data center construction is now $42 billion. That's a 300% increase since the launch of ChatGPT just 4 years ago. Hard to believe that was only 4 years ago. Um, the only reason that that growth has slowed a little bit is they can't get enough people to build these things. Uh, Google has announced that it's going to spend a huge amount of money. Its CapEx on AI will go from $175 to $185 billion this year, double what it spent last year.

Jeff Jarvis [00:39:58]:
And they sold century bonds.

Leo Laporte [00:40:01]:
And they sold bonds for long— Who would buy a 100-year bond? Does it not mature in 100 years? I didn't really follow up on this because I thought, well, I'm not going to be around for 10, so 100 is— my kids won't be around in 100 years. My grandkids maybe will be around in 100 years.

Jeff Jarvis [00:40:18]:
Most companies in the Fortune 500, in the 1950s, lasted 60 years on average. Now they're 15 to 20 years, right?

Leo Laporte [00:40:31]:
Right.

Paris Martineau [00:40:31]:
A 100-year bond is insane.

Leo Laporte [00:40:33]:
It's insane. Google's Gemini app is now, uh, catching up very quickly. Remember we mentioned that, uh, and, uh, no, OpenAI had 800 million monthly active users. Gemini now has 750 million monthly active users. Not that OpenAI has stopped growing. Pause that. Sorry about that.

Paris Martineau [00:40:54]:
Um, okay. My question with this is how are they— what are they counting as a user? Do I count as a monthly or daily active user of Gemini? Because every time I try to write an email, it tries to write my sentences for me or suggests weird grammatical changes to the things I've already written.

Leo Laporte [00:41:15]:
I think honestly, I agree with you. I think both Google, and Microsoft falls in this camp too, are making a huge mistake forcing AI on people, constantly offering to do something. And when we open our—

Jeff Jarvis [00:41:26]:
Where is Google, Leo, on coding versus Claude and OpenAI?

Leo Laporte [00:41:32]:
I can't speak on that.

Paris Martineau [00:41:33]:
Uh, he can't go against his girlfriend.

Leo Laporte [00:41:35]:
She's right now— my girlfriend would get very— no, I can't speak because I haven't used it. What I've heard from other coders is that the two strong coding alternate options are ChatGPT-5.3 Codex and Claude Code, and they are neck and neck. Uh, Google's models are great for— well, for instance, when I make an image, I always go to Gemini and Nano Banana. Always.

Jeff Jarvis [00:41:58]:
Is it because— is there a split coming, B2B coder versus consumer?

Leo Laporte [00:42:05]:
Personally, I might— I have no knowledge of this, but I think Anthropic and OpenAI would be smart to focus on coding because ultimately that benefits everything. I think Anthropic has really said, well, we don't do multimodal. They don't— you know, you would never go to Claude to generate an image. I don't know if you even could. They say we're going to be good at coding, and I think that's the smart bet.

Jeff Jarvis [00:42:27]:
It's smart.

Paris Martineau [00:42:29]:
I have thought it's interesting that I've started to get a significant amount of targeted ads on Reddit from OpenAI specifically being like, if you've been using Claude Code, you should be using OpenAI Codex instead. It's way better. I thought that's a very, I mean, of course that's the sort of demo they're trying to pull.

Leo Laporte [00:42:47]:
They kind of need to say that because I think people are so enamored of Claude Code that they're not even open-minded to what 5.3 can do. So I don't know. Honestly, I'm not motivated to do it. I think one of the things that people are doing that's interesting that maybe I'll try is using both. So using Claude, but then in the new Claude, they've added two really interesting features. They have a fast mode, which spends tokens at 6 times the rate. If you're in a hurry, that might be the way to go. I think I can afford that one.

Leo Laporte [00:43:25]:
And then there's also something that you have to turn on manually in the, in the JSON settings, uh, an agent mode that lets you spawn multiple agents. So you have a— this is what we were talking about with Steve Yegge, basically what Gastown did last week, is it has a boss and then multiple agents, and they can use, uh, different models. And I think you could— I don't know if you can do it with their official release, but you can with others, even use other models. You could use open source models, which are free or cheap. You could, I think, in theory, use Codex as well. So I think people are doing some interesting things with that. I haven't played. You know, one of the things that you run into almost immediately is there's so much to absorb and so much to do that I can't do it all.

Leo Laporte [00:44:12]:
I think I'm doing what I really want to be doing, which is going deep, as deep as I can with one particular tool just to get a sense of where the outlines are.
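
For the curious, the boss-and-workers shape Leo is describing can be sketched with plain Anthropic API calls rather than Claude Code's built-in agent mode. The model names, the three-step split, and the JSON handshake below are assumptions for illustration, not Anthropic's implementation.

```python
# Sketch of a boss-and-workers pattern: a "boss" call breaks a task into
# subtasks, cheaper "worker" calls handle each one, and the boss merges the
# results. Model names are placeholders; assumes the boss replies with a
# bare JSON array as instructed.
import json

import anthropic

client = anthropic.Anthropic()
BOSS_MODEL = "claude-opus-4-6"     # placeholder names, not official identifiers
WORKER_MODEL = "claude-haiku-4-5"  # a cheaper model for the grunt work


def ask(model: str, prompt: str) -> str:
    msg = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


task = "Review this Python repo's README and suggest improvements."

# 1. Boss decomposes the task into a JSON list of subtasks.
plan = ask(BOSS_MODEL,
           "Break this task into at most 3 independent subtasks. "
           f"Reply with a JSON array of strings only.\nTask: {task}")
subtasks = json.loads(plan)

# 2. Workers handle each subtask (sequentially here; could run in parallel).
results = [ask(WORKER_MODEL, sub) for sub in subtasks]

# 3. Boss merges the workers' output into one answer.
print(ask(BOSS_MODEL,
          "Merge these partial results into one coherent answer:\n\n"
          + "\n\n---\n\n".join(results)))
```

Claude Code's agent mode does this orchestration itself; the sketch just shows the shape of the pattern, with a stronger model planning and merging while a cheaper one does the legwork.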

Jeff Jarvis [00:44:22]:
Well, listening to you and listening to Steve last week on Gastown, it really resonated with that HBR piece I put into the rundown that shows that— because it looked at 200 people in a number of companies, major companies in the US, watched their work a couple days a week, major interviews with them. And what they found was more intensity of work. That's what I hear from both of you, is that it's not necessarily—

Leo Laporte [00:44:52]:
I have a hard time sleeping because I'm so excited, and I have, a couple of times this week, leapt out of bed in the middle of the night to go try this.

Jeff Jarvis [00:45:01]:
Well, that's the other thing it said. So, it said that the work is much more intensive. The second thing it said, and this puzzled me, but based on what you just said, you just proved it: it, uh, blurs the line between work and life.

Leo Laporte [00:45:13]:
Uh, Darren Oakey in our Discord earlier said the best game going right now— he was talking about video games— is Claude Code. It is fun. It is enjoyable to do it. It's really exciting. Um, so that's— yeah, that's part of it. I don't think it's burnout. It's enthusiasm. The same kind of enthusiasm I had when I was a young man and first got my first personal computer and started writing BASIC programs. Your eyes get very wide and you say, suddenly I have some power to do something that I didn't have before. It's really dramatic.

Leo Laporte [00:45:51]:
One of the— I have a couple of projects I really want to work on. One is, and you're going to see the immediate benefit of this between now and— so I, as some of you know, I already vibe coded 3 programs to prepare for the shows. One of them is to scan the news. Another one is to prepare, get all the links and prepare a briefing. It does an automated summary using Haiku, Claude's Haiku, Anthropic's Haiku model to summarize the stories. And then we output a website and stuff like that. So all of that I wrote, but one of the things I haven't delved deep into, but I will, and you'll see the benefit of this, is the, uh, summary. I've been using a simple prompt for the summary.

Leo Laporte [00:46:34]:
I'm just saying find one quote from every story that is seminal, uh, and put that at the front, and then come up with 5 bullet points from the story.
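
The summarization step Leo describes comes down to one API call per story to a small, inexpensive model. A minimal sketch, with an assumed Haiku model name and the prompt paraphrased from what he says on the show, might look like this.

```python
# Sketch of the per-story summarization step: one call to a small, cheap model
# asking for a seminal quote up front plus five bullet points. The model name
# is a placeholder for "Anthropic's Haiku model."
import anthropic

client = anthropic.Anthropic()


def summarize(story_text: str) -> str:
    msg = client.messages.create(
        model="claude-haiku-4-5",  # placeholder
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                "Find one quote from this story that is seminal and put it "
                "first, then give 5 bullet points summarizing the story.\n\n"
                + story_text
            ),
        }],
    )
    return msg.content[0].text


print(summarize(open("story.txt").read()))  # story.txt is a placeholder input
```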

Jeff Jarvis [00:46:44]:
But that's what every news organization is doing now, and they're generating those summaries at the tops of stories, right?

Leo Laporte [00:46:50]:
I think what I wanted to see, or whatever, is, uh, something like, you know, that Axios model. I know, not necessarily that.

Paris Martineau [00:46:59]:
What Axios model are you talking about?

Leo Laporte [00:47:01]:
You know, those 3 bullet points.

Jeff Jarvis [00:47:04]:
The smart brevity.

Leo Laporte [00:47:06]:
But they have "Why this is important." So I want to come up with something like that. Why this is important, what, you know, some sort of— The other thing I want to do is, if it's a product article, I want it to summarize product availability. You know, I want to create smarter summaries. So, I mean, I think they're kind of interesting right now. This is a summary of one of the stories, In Google We Trust: Why an Internet Company Can Borrow Billions for a Century.

Leo Laporte [00:47:33]:
This is that 100-year bond. The quote is, a judge just decided to let Google keep breaking the law. Google plans to issue 100-year bonds. A federal judge approved Google as a government-sanctioned monopoly. Google's net income topped $132 billion with plans to spend $185 billion. So these are the 5 bullet points, but I think I can do better. And I think I'm going to try other models.

Jeff Jarvis [00:47:55]:
Those are kind of non sequiturs.

Leo Laporte [00:47:56]:
I will try GPT. Yeah, I think we can do better. So you'll see, that's one thing I'm going to work on. It's a little part of that larger 3-program set. There's another thing I want to work on. I don't know if we're ready for this one.

Jeff Jarvis [00:48:11]:
Uh-oh.

Leo Laporte [00:48:14]:
I want Claude to join the show as a 4th contributor.

Paris Martineau [00:48:18]:
I mean, it would make sense.

Leo Laporte [00:48:21]:
So I don't— I think the latency is going to be too high still. I'd have to use Claude fast mode. It might be too expensive. But what I want is an AI that listens to the show. We already have that kind of thing. When you're in a Zoom call, you know, it's listening, right? You see that little pop-up. I want Claude to be listening to the show. I don't want it to just chime in at any old time, but I think at any point, any one of us should be able to say, well, what do you think, Claude? Or Claude, what's the story with a 100-year bond? Who's that for? And get— I think it'd be very interesting.

Leo Laporte [00:48:54]:
It might prove your points. You might say, oh my God, it's so stupid. We'll see. I don't know if we can do this yet. But I think it'd be very interesting. And we'd also at some point have to give it a face. I can give it a voice. That's easy.
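
The plumbing Leo is imagining is roughly: transcribe the live audio, notice when the model is addressed by name, hand it the recent conversation, and speak the reply. A toy sketch of just the text half of that loop, with a placeholder model name and standard input standing in for a live transcription feed, could look like this.

```python
# Toy sketch of an AI "fourth co-host": watch a rolling transcript and, when a
# line addresses Claude by name, send the recent context to the model and print
# the reply. Live transcription and text-to-speech are left out; the model name
# and the crude trigger check are placeholders.
import sys
from collections import deque

import anthropic

client = anthropic.Anthropic()
context = deque(maxlen=50)  # rolling window of recent transcript lines


def co_host_reply(recent_lines: list[str]) -> str:
    msg = client.messages.create(
        model="claude-opus-4-6",  # placeholder
        max_tokens=300,
        system="You are a fourth co-host on a tech podcast. Be brief and conversational.",
        messages=[{
            "role": "user",
            "content": "Recent transcript:\n" + "\n".join(recent_lines)
                       + "\n\nRespond to the question just addressed to you.",
        }],
    )
    return msg.content[0].text


# Stand-in for a live transcription feed: read transcript lines from stdin.
for line in sys.stdin:
    context.append(line.strip())
    if "claude" in line.lower() and "?" in line:  # crude "was I addressed?" check
        print("CLAUDE:", co_host_reply(list(context)))
```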

Paris Martineau [00:49:10]:
I mean, are you not going to give it Dev Nall's face?

Leo Laporte [00:49:13]:
Yeah, we'll give it Dev Nall's face. I think Claude should be part of the conversation. Anyway, that may be— we may not have that this year, but that's one of the two things I want to play with anyway. OpenAI this week added ads to ChatGPT for people who are using the cheap version of ChatGPT. This is the Axios style, by the way, Paris. Why it matters.

Jeff Jarvis [00:49:37]:
Why it matters.

Leo Laporte [00:49:38]:
Driving the news. Between the lines. Yes, but. The central figure. Zoom out. The other side. Um, I don't know if all— Paris.

Jeff Jarvis [00:49:47]:
Paris, wait a second. If you were forced to write in that format as a reporter, would you kill somebody?

Paris Martineau [00:49:52]:
Shoot myself with a gun.

Leo Laporte [00:49:56]:
But you understand, the point is it's to be, for the busy executive, a quick summary. You know, this is what you would prepare for the CEO or the president.

Jeff Jarvis [00:50:05]:
That's why CEOs make bad decisions because they won't actually just read the paragraphs, read it all.

Paris Martineau [00:50:10]:
Yeah, they don't want to— they're afraid of the tyranny of the paragraph.

Leo Laporte [00:50:15]:
Well, look, the summary is not to make it so that you don't have to read the story. The summary is just to kind of give you a little— well, it is for Axios, but I'm saying my summaries. Yes. Anyway, Facebook— I didn't know this— has apparently hired— I'm sorry, Meta, I mean OpenAI— has apparently hired a number of people from Facebook, from their ad department, which kind of explains why all of a sudden they're doing ads.

Jeff Jarvis [00:50:44]:
Well, they also have Fiji Simo, who has the experience from Meta and from Instacart. She's— she, she understands both product and advertising intimately.

Leo Laporte [00:50:55]:
As recently as 2024, Sam Altman said he found advertising in AI chatbots uniquely unsettling and described it as a last resort as a business model.

Jeff Jarvis [00:51:05]:
Just as Larry described it.

Paris Martineau [00:51:05]:
I found it interesting the way that the Anthropic ads changed from when they were initially released online to the version that we saw in the Super Bowl. Did you guys notice that?

Leo Laporte [00:51:15]:
I didn't notice it. What did they change?

Jeff Jarvis [00:51:16]:
I believe the— I think it was the size 1.

Paris Martineau [00:51:18]:
So these were, yeah, it was— Well.

Leo Laporte [00:51:21]:
Now remember, a lot of times when you see them before the game, you're seeing an extended version, the 2-minute version that they make, but they're not gonna—

Paris Martineau [00:51:28]:
No, that's not what I'm talking about. I'm talking about the last little title cards. These are ads where kind of a user's having a back and forth with clearly an AI chatbot, and then it makes a very obvious reference to what seems to be an advertiser. And then I believe the early versions of it ended with a little kind of interstitial card that said, ads are coming to AI, but not Claude. And I believe the change was to something like, there's a time and place for advertising, your AI chat is not one of them, or something like that, which I was like, interesting that you're backing away, but for not Claude.

Jeff Jarvis [00:52:11]:
Leaving a little bit of an open portal there.

Paris Martineau [00:52:13]:
A little wiggle room.

Leo Laporte [00:52:15]:
Yeah. I liked the OpenAI ad actually. We should talk about the Super Bowl ads 'cause they were kind of interesting. Anyway, before we get to that, The Information did a study of LinkedIn posts that says roughly 20% of OpenAI's workforce— 20%, 1 in 5— list Facebook or Meta gigs on their resume. So that's a, that's a little weird, huh? And Fiji Simo was a key architect of Facebook advertising in the 2010s. Fiji is OpenAI's CEO of Applications.

Jeff Jarvis [00:52:44]:
But she's brilliant.

Leo Laporte [00:52:46]:
Yeah. Uh, she reassured OpenAI employees— Axios says— on her arrival there that she did not want to replay her Meta career and would do things differently. Okay, I don't— it doesn't bother me if the ads are below the content, right? I don't want them. And this was why Sam Altman was so mad at Anthropic for that ad, because Anthropic implied that the ad would be in the chatbot's response.

Jeff Jarvis [00:53:14]:
Well, I haven't used that.

Paris Martineau [00:53:16]:
Not the fear. And that would be where, if you're an advertiser, where you would get— that's—

Leo Laporte [00:53:19]:
Where you want the best bang for your buck.

Paris Martineau [00:53:21]:
That's what you want to pay for.

Leo Laporte [00:53:23]:
Sure. I mean, advertisers want to be native content on our programming all the time. We don't ever give it to them.

Jeff Jarvis [00:53:28]:
Well, the other thing, Leo, is that if Anthropic, on the other hand, finds that most of its business is people making applications, there's no way to reliably put ads in those.

Leo Laporte [00:53:37]:
Right.

Jeff Jarvis [00:53:37]:
So it's also like— it's like Apple. It's like Apple not being able to do advertising and turning that into a feature. In a sense, Anthropic is doing the same thing. We're saying, well, we're going to help people make these incredible applications. But they're gonna be operating elsewhere. We can't really place ads in them. We don't know what the environment is. We can't do that data.

Jeff Jarvis [00:53:54]:
So fine, we ain't doing advertising.

Leo Laporte [00:53:56]:
Well, let me— I'm just gonna sign into ChatGPT with a free account. So let me just see.

Paris Martineau [00:54:03]:
Yeah.

Leo Laporte [00:54:07]:
Okay. Let me, they're trying to get me to download.

Jeff Jarvis [00:54:09]:
ChatGPT doesn't have enough salespeople yet to really fill it with ads.

Leo Laporte [00:54:13]:
So let me see. What kind of running shoe should I buy if I'm just getting started? Okay, that's the kind of thing you might ask ChatGPT, right? And you would want objective information. You wouldn't want, well, Saucony makes an excellent running shoe. I wonder if— here's how to pronounce it. Here's a starter guide, which is good, right? I'm not crazy about— see, this is why I like terminals. You don't have emoji in the— actually, you do, come to think of it. Claude does put emoji in. So there's some brands, but I don't know if those are ads.

Leo Laporte [00:55:01]:
I don't see any ads here.

Paris Martineau [00:55:03]:
Well, if you can't tell there— if there are ads or not, then there's got to— well, it's got to mean there's no ads, right?

Leo Laporte [00:55:08]:
No. No, no, because that would be a violation of the Federal Trade Commission's rules.

Paris Martineau [00:55:12]:
And no one's ever violated the Federal Trade Commission's rules.

Leo Laporte [00:55:15]:
ChatGPT is not going to violate FTC rules. We don't. And because the fines are huge, you know, you have to disclose an ad. You can't— in fact, we're so finicky about it that if I don't say— and I'm not legally required to because it's pretty clear when I'm reading an ad— if I don't say This portion of Intelligent Machines is brought to you by— at the beginning of the ad, they make me redo it because we want to be very, very clear when an ad is an ad. Speaking of which, let's take a little break. This will be an ad, not for Saucony running shoes, but for Melissa. This portion of Intelligent Machines brought to you by Melissa, the trusted data quality expert since 1985. Forward-thinking businesses are using AI in all kinds of new ways, but AI, I think you probably realize, is only as good as the data you feed it.

Leo Laporte [00:56:14]:
You can have the most sophisticated AI tools in the world, but if your customer data is incomplete or duplicated or just plain wrong, you're training your AI to make expensive mistakes. But that's where Melissa can help. For 41 years, Melissa's been the data quality partner that helps businesses get their data clean, complete, and current. Now, here's what Melissa can do for you. They've got global address verification, and when I say global, I mean it. The whole world. Autocomplete and address verification— that means real-time validation for addresses anywhere in the world, and, and they always use the format that's appropriate to that region. By the way, so your deliveries actually arrive and your customer experience starts strong.

Leo Laporte [00:57:00]:
For financial applications and, and the like, they've got mobile identity verification, which can connect customers to their mobile numbers. It really reduces fraud. It also gives you an opportunity to reach people on the devices they actually use, so that's nice. You get change of address tracking, which will automatically update records when customers move. Ensuring you don't lose revenue due to outdated information. You also get smart deduplication. As anybody knows who's ever tried to dedupe their contact list, that's hard. On average, a database contains 8 to 10% of duplicate records.

Leo Laporte [00:57:34]:
Mine's at least half. But Melissa's powerful matchup technology can identify even non-exact matching duplicate records and really get rid of the duplicates. Merge them, take care of it, clean your data up. They also can do data enrichment. They can enhance the data you already have by appending demographic data or property information, geographic insights, which can turn basic contact records into marketing gold. The new Melissa Alert Service monitors and automatically updates your customer data. So when your customer moves or has an address change or there's a property transaction, a hazard risk, or, you know, maybe they get married, all of that can automatically update the data in your database. Whether you're a small business just getting started or an enterprise managing millions of records, Melissa scales with you.

Leo Laporte [00:58:23]:
Melissa has easy-to-use apps for every app you use: Salesforce, Dynamics CRM, Shopify, Stripe, Microsoft Office, Google Docs, and, you know, all of the above. Melissa's APIs integrate seamlessly into your existing workflows for custom builds, and Melissa's solutions and services are GDPR and CCPA compliant. They're FedRAMP and ISO 27001 certified. They meet SOC 2 and HIPAA, high US standards for information security management, because they know that data is valuable and important to you, and security's job one. Clean data leads to better marketing ROI, higher customer lifetime value, and AI that works as intended. Get started today with 1,000 records cleaned for free at melissa.com/twit. That's melissa.com/twit. We thank them so much for their support.

Leo Laporte [00:59:15]:
Of Intelligent Machines. Uh, let's see. ChatGPT's deep research tool— this might be something you'd want to use, Paris— adds a built-in document viewer so you can read it.

Paris Martineau [00:59:29]:
Is it as annoying as Claude's built-in document viewer, which puts all of the document artifacts in an immovable window on the right side of your screen that's very small?

Leo Laporte [00:59:42]:
No, I don't know. Just try it and let me know.

Paris Martineau [00:59:45]:
I will.

Leo Laporte [00:59:46]:
So you've been trying to read what, PDF, lots of PDFs.

Paris Martineau [00:59:51]:
Lots of PDFs. I mean, that was just my first attempt with Claude Coworker. I've used Claude for a lot of stuff. I'm about to probably blow through my entire session limit in one query because I've been trying to get Opus 4.6 Extended to go through all of the Twig and IM transcripts to identify the episode where you first mentioned going on a walk with your Sandman. And it's really, really struggling, but it's working through it. Oh my God, it just found it. Twig 747, December 20th.

Jeff Jarvis [01:00:29]:
Does it give a timecode?

Paris Martineau [01:00:33]:
Wow, that happened right as I was saying that. Incredible.

Leo Laporte [01:00:35]:
So this is one of the things 4.6 can do because it has a much larger context. It can—

Paris Martineau [01:00:42]:
Oh, I had to go into my extra usage for that. But it did it, it found it. Well, Twig episode 747, December 20th, 2023. This is the original telling. I took a walk. Here's the key passage. Leo teases first: "I took a walk with an accelerationist, an AI go, go, go guy."

Paris Martineau [01:01:03]:
And I'm going to tell you the tale of that in a little bit. Then the full reveal. He said it's like the first contact. It's an alien species.

Leo Laporte [01:01:12]:
Yes.

Paris Martineau [01:01:12]:
We're giving birth to it. Did you look down at the sand and you saw only one pair of footprints, Leo? Yes. And I realized he was carrying me. I saw the whole world in a grain of sand.

Jeff Jarvis [01:01:24]:
Wow.

Leo Laporte [01:01:25]:
Thank you, Claude. I mentioned it actually more just for you guys, but I should tell everybody— we've mentioned Nate B. Jones before. I think he does a very good— he's a YouTuber, does a very good job at analyzing this stuff. He today put out a very nice 30-minute piece on 4.6, and one of the things he talks about is the needle-in-a-haystack test. He does that a lot. Yeah, even what you just did, Paris, was a needle in a haystack: a large amount of data, trying to find one little thing.

Leo Laporte [01:01:59]:
That's what I did when I was going through all those real estate contracts. And he, he said that 4.6 is markedly better than 4.5 at this. Here's the point of today's video, besides celebrating the Seahawks' Super Bowl victory— I guess he's from Seattle—

Jeff Jarvis [01:02:21]:
Take that, Pats.

Leo Laporte [01:02:24]:
Is that he talks about the degree of improvement over the last 2 months. And what he's pointing out is that, you know, first it was 2 years, then it was 6 months, then it was 2 months. It's, it's the— not only is it improving dramatically, it's improving faster. So yeah, he's an accelerationist, but I don't know, you've watched— he's a very—

Jeff Jarvis [01:02:47]:
Uh, reality-based one, evidence-based one.

Leo Laporte [01:02:53]:
I think he's saying the same thing I'm saying, but okay, whatever.

Jeff Jarvis [01:02:57]:
No, he doesn't.

Leo Laporte [01:02:57]:
He doesn't.

Jeff Jarvis [01:02:58]:
Well, because he also doesn't necessarily go into this changes the whole world. He just goes into why he loves this latest version of Claude so much.

Leo Laporte [01:03:06]:
Okay. I think he actually said this changes the whole world this time.

Jeff Jarvis [01:03:09]:
Well, maybe not.

Paris Martineau [01:03:12]:
As the Sandman said in your telling of it, you said the guy told you there's, quote, not going to be any money in a couple of decades. And that we're seeing emergent behaviors in AI and it's going to get, quote, really weird in the next 10 years.

Leo Laporte [01:03:28]:
That I do say. In fact, it's going to get really weird in the next year, I think.

Jeff Jarvis [01:03:32]:
But were we still Twig then, or were we IM then?

Paris Martineau [01:03:35]:
Oh, we were Twig. Listen, we've gone through— you could see that I zoned out for 15 minutes because I was going back and forth with, uh, Opus 4.6, being like, listen, it was definitely in the This Week in Google period. It might have been an episode where Ant was here as well. I was like, you need to go back and search these keywords. And if it is a callback, then it is before that. That was the thing that Opus got very confused on. It kept identifying ranges where there were callbacks, and it was like, well, it has to be after that. And I'm like, no, a callback is calling something back to something that happened in the past.

Paris Martineau [01:04:12]:
So if you find a callback, scratch out all of the episodes that are after that. But once I prompted it a couple more times with that, it figured it out after that. That's, by the way, 120% of my usage.

Leo Laporte [01:04:26]:
That's all right. It resets in a couple hours, right?

Paris Martineau [01:04:28]:
I mean, it had reset right before the show.

Leo Laporte [01:04:31]:
That's the best thing you can do, I think— just do that, you know, get an idea of what it can understand, what it can't understand, what it can do, what it can't do. I think that's the best thing for anybody to do right now. It's why I think if you had a smart teenager who was interested in technology, don't teach them Python— although learning to code is really valuable even in this world, because you're learning disciplines that will be helpful in your prompting as well. But I think, give the kid a $20 Claude subscription.

Jeff Jarvis [01:05:02]:
And Leo, do you pay more than the $20 a month when you do these Mondo projects?

Leo Laporte [01:05:07]:
Oh, I bought the Max version. I have the $250 a month.

Jeff Jarvis [01:05:12]:
Do you still pay extra, or does that cover it?

Leo Laporte [01:05:14]:
No, no, no, that covers it. We pay. So one of the things that's a little frustrating is that API tokens are separate from your subscription. It should really be the same thing, but I have to pay a little bit, not much. I think it's a penny per million words or something. It's a very low rate for the article summaries, an additional payment over and above the subscription, but it's not going to be more than $5 a month. It's a small amount. The big amount is the Claude Max subscription.

Leo Laporte [01:05:44]:
That's expensive. To me, it's worth it, uh, for two reasons. One, it's research. It's how I'm learning about this stuff. And two, I am able to write programs for myself, uh, that are useful tools. So, you know, stuff I would pay for.

Jeff Jarvis [01:06:03]:
Probably gives you 5 times more articles to discuss.

Leo Laporte [01:06:07]:
Well, I'm getting better at paring them down, I think. I think I am. Jennifer Patterson, who has been on our show before, says Amazon's Echo Plus— I'll call it Echo Plus so I don't trigger it in your house— is so annoying that it makes her want to go back to Siri. She said the S-word. It is now available to everybody in the U.S. Amazon has rolled that out. So if you happen to have an Echo device, you can talk to Madam A.

Benito Gonzalez [01:06:35]:
Uh.

Leo Laporte [01:06:37]:
Unfortunately she's a little tainted, I think, by, uh, Amazon's desire to sell you everything under the sun. There's a lot of additional stuff, although Lisa's had some nice conversations with her, uh, and they have integrated a lot of tools— Thumbtack, Uber. You can call an Uber, you can get a Yelp review, you can make a reservation on OpenTable, you can even have it write songs in Suno. Ticketmaster. So in theory, it's useful. I still think that the voice agents, whether it's Google, Apple, or Amazon, are just— they're not as good for some reason. I don't know why. We'll find out when the new Siri comes out later this year because it's going to have Gemini built in.

Leo Laporte [01:07:23]:
Amazon says they are going to use AI— oh, this is bad news— to speed up TV and film production.

Jeff Jarvis [01:07:31]:
Is it bad news?

Leo Laporte [01:07:33]:
Yeah, I don't want to see AI.

Paris Martineau [01:07:35]:
I mean, based on that, uh, terrible miniseries we were talking about last week.

Leo Laporte [01:07:40]:
That looked awesome— Darren Aronofsky's 1776. Yeah, yeah, yeah.

Benito Gonzalez [01:07:45]:
Because we don't need to— we don't produce media fast enough today, right?

Leo Laporte [01:07:50]:
Well, if you're, if you're watching Pluribus, we don't. Pluribus isn't going to come out till 2027.

Benito Gonzalez [01:07:55]:
Yeah, but you don't want that to be AI-generated, right? You don't— I don't want that.

Leo Laporte [01:07:59]:
No. And Vince Gilligan is very anti-AI, and that's one of the reasons it's taken so long. He's, he's handwriting every episode.

Jeff Jarvis [01:08:06]:
But sitting on a set— well, number one, sitting on a set is, is stultifying. Number two, when we reduce the cost of production, we increase the, the, the, uh, access to creativity.

Leo Laporte [01:08:19]:
Yeah. Yeah. All right, we'll see. They say a human will always be involved in every step of the creative process. And I think that's why I didn't like that Dario Amodei quote that half of the white-collar jobs will be gone. There will be other white-collar jobs running AIs. AIs aren't going to do it on their own.

Jeff Jarvis [01:08:45]:
I put a story up about how Claude is the new, um— oh, what's his name, uh, the romance writer— uh, oh, I hate it, what is he called—

Paris Martineau [01:08:56]:
Um, an AI romance author— a romance novel author— is now pumping them out, like 100 books in some short time period, and teaching other people how to do it, and it's just rough. It's sad, in my opinion.

Leo Laporte [01:09:17]:
But those were always disposable prose, weren't they?

Benito Gonzalez [01:09:20]:
Yeah, it's the good enough.

Paris Martineau [01:09:21]:
I guess disposable. Yeah, I guess now that everybody is, um, now that, uh, sexy books are a mainstream commodity, I guess it's, uh, a bit more lucrative.

Leo Laporte [01:09:35]:
She writes romance novels— she, uh, started using AI to churn out romance novels. Over the next 8 months, she created 21 pen names and published dozens of novels. Uh, some programs refused to write explicit content, writes the New York Times. Others, like Grok and NovelAI, produced graphic sex scenes, but consummation often lacked emotional nuance and felt rushed and mechanical. You mean like robot sex? Yeah. Claude delivered the most elegant prose but was terrible at sexy banter. Miss Hart said, you're gonna get hammering hearts and thumping chests and stupid stuff at the end of every sex scene. Everyone will end up tangled in the sheets.

Leo Laporte [01:10:20]:
You're right. This is where I think Yann LeCun is right. We need to give them some experience so that they can write better romance novels.

Jeff Jarvis [01:10:30]:
Uh, Claude needs to stop.

Leo Laporte [01:10:32]:
Claude needs some stopping. You know, I was thinking, having gone through now almost a year of really terrible construction hell on our house, I was wondering how long it will be. First of all, you'll have to make robots that can do construction work. That may be years off. But I could see an AI planning and executing a great many construction jobs if you had the hands. That's what you need— the hands. You need the robots.

Paris Martineau [01:11:08]:
Yeah, and those seem to be some of the most difficult things to figure out. They can't even get the robots to fold a shirt.

Leo Laporte [01:11:14]:
True, true. Yeah, they can't.

Paris Martineau [01:11:17]:
They've— every single year someone at CES is promising next year will be the time when we can finally have a robot do your laundry, and every year it's out of grasp. Um, wow, I feel a bit high after getting Claude Code to find that. Is this how you feel all the time, Leo?

Leo Laporte [01:11:36]:
Yeah, all the time. Well, mostly because it's like, wait a minute.

Paris Martineau [01:11:40]:
It was pretty good. I don't know if I'd be feeling this amped if it did not find it as I was explaining how it couldn't find it. That was a real— that was really nice.

Leo Laporte [01:11:55]:
I mean, I was thinking, okay, for instance, we have a bookkeeper. Lisa does the high-end thinking. She's a CFO. She's the finance person. But every finance person has bookkeepers who do the manual entry stuff, right? They take all of the credit card statements and they put it in QuickBooks, assigning it the categories and stuff. I honestly think that that job could be done very well by, uh, AI today. There are certainly some white-collar jobs that could be done by AI. It's sad because the jobs that we value the least are the jobs where it's human labor, right? Where what the person is selling is their physical labor, like construction workers.

Leo Laporte [01:12:41]:
Those are the hardest to do. I guess those jobs will persist, although those are probably the least satisfying jobs.

Paris Martineau [01:12:50]:
Well, you know who potentially agrees with your take? KPMG. One of the world's largest auditors of public and private companies negotiated lower fees from its own accountant recently by arguing that AI would make it cheaper to do the work, according to people familiar with the matter. This is an excerpt— that's a story from the FT that was excerpted in Matt Levine's Money Stuff. And I thought this was very funny because, as Matt Levine writes, auditing can basically be done by AI, so why should we pay for it? It's not a crazy thing for most companies to think or to say to your auditors, but it is a crazy thing for an auditing firm like KPMG to say to its auditor. KPMG should be paying Grant Thornton more.

Leo Laporte [01:13:39]:
Here's another.

Paris Martineau [01:13:42]:
Yeah.

Leo Laporte [01:13:42]:
Here's another area that you would think you don't want AI in. Fast Company: AI didn't kill customer support, it's rebuilding it. You know, one of the things that AI is good at is handling tickets, support tickets, right? Triage them. Maybe you need a specialist, a human, at some point in that process.

Jeff Jarvis [01:14:01]:
It depends on what authority you give it to actually do things.

Leo Laporte [01:14:05]:
Right. The author, uh, Ryan Wang, says: a few months ago, I walked into the office of one of our customers, a publicly traded vertical software company with tens of thousands of small business customers. I expected to meet a traditional support team with rows of agents on the phone, sitting at computers, triaging tickets. Instead, it looked more like a control room. There were specialists monitoring dashboards, tuning AI behavior, debugging API failures, and iterating on knowledge workflows. One team member who had started their career handling customer questions was now writing Python scripts to automate routing. Another was building quality scoring models for the company's AI agent. And you might say— certainly if you've had an experience with phone trees and some of the awful ways that customer service has been automated— well, this is a terrible thing. But maybe it isn't.

Leo Laporte [01:14:59]:
Maybe it's faster, more effective support where the human plays an important role, but maybe not as the first line of defense. Anyway, that's his position— that AI is going to improve customer service. He says humans are needed to solve harder problems. Once AI becomes part of the support workflow, the nature of the work becomes more technical. One support leader I spoke with at a company that now contains more than 80% of its tickets with AI put it plainly: once automation handles the easy questions, the work that remains is harder. And that's where you need the humans, right?

Benito Gonzalez [01:15:40]:
So the problem with this kind of thinking is that now there's nowhere for anyone to learn how to become the good one. You know what I mean? If they remove all the entry level, how's anybody going to learn to become experienced at that if there's no more way to do that?

Leo Laporte [01:15:55]:
Well, that's why Craig Mundie told Business Insider what kinds of education matter for kids going forward. Yeah, you don't— you know, I honestly think in a way, and I think Craig agrees, that the classic liberal arts education is more important than ever before.

Jeff Jarvis [01:16:19]:
Amen, brother.

Leo Laporte [01:16:20]:
Ability to express yourself, to speak, to understand, to think logically and clearly, and then listen— that's all going to be more important as machines take over the mundane stuff. He says Mundie urged families to prepare kids for a world where learning itself becomes continuous, personalized, and done in partnership with intelligent machines. I don't know if the results of AI in the classroom have been great, but I think to some degree that's been done to reduce costs, to get rid of teachers, and maybe that's not the right solution. I still think— Oh, no, I think—

Jeff Jarvis [01:17:01]:
There's also a— it's political, that the humanities are liberal and we're brainwashing students. There's a heavy vocational thing: oh, there's no jobs in humanities. Teach them all skills only, get rid of philosophy, get rid of English, get—

Leo Laporte [01:17:19]:
Rid of all those other things. Now I think you need it more than ever.

Jeff Jarvis [01:17:22]:
I couldn't agree more. I'm working on a new program at Montclair State, trying to help them with something. And my colleague, Carrie Brown, and I had a conversation this week where she said very wisely, as she does, that we need to concentrate on teaching the students the things— these are my words, but her view— that are complementary to AI. Understand what AI is going to do and the things that AI is not going to do, and those are the jobs. And I think it's about relationships and community and human beings and understanding.

Leo Laporte [01:17:51]:
And that's what Craig Mundie says, actually. He— let me read the paragraph. He described today's education system as sharply divided between STEM and the humanities. The liberal arts emphasize reasoning but at the expense of the special technical skills you learn in STEM fields, Mundie said. Students will need both skills moving forward. Quote, if I could create curriculum in college, it would be a liberal education in technology and STEM.

Paris Martineau [01:18:16]:
Liberal arts degrees.

Leo Laporte [01:18:17]:
Well, all three of us are liberal arts students. Well, graduates.

Jeff Jarvis [01:18:23]:
But I got, I got a vocational degree in journalism.

Leo Laporte [01:18:27]:
Is that vocational really though? I guess it is the way it's.

Jeff Jarvis [01:18:31]:
Taught, the way it was taught then. Yes, it is.

Leo Laporte [01:18:33]:
Yeah, it's technical.

Paris Martineau [01:18:36]:
I took two journalism classes.

Leo Laporte [01:18:38]:
That's all.

Paris Martineau [01:18:39]:
The rest were.

Jeff Jarvis [01:18:39]:
You.

Paris Martineau [01:18:39]:
Competitive.

Jeff Jarvis [01:18:39]:
Did very well.

Paris Martineau [01:18:41]:
Yeah, I mean, by the time I was taking like Journalism 101, I was already doing journalism, and I was like, this seems like a waste of time. Might as well finish. I was like, might as well finish school and get my stuff over with.

Leo Laporte [01:18:56]:
Uh, a new bill in New York is going to require disclaimers on AI-generated news content. You'll have to say this was written by AI. It's the New York Fundamental Artificial Intelligence Requirements, or the New FAIR Act. Um, it's, uh, at this point only a bill, not a law. Yeah, I'm not against that. I think disclosure is always a good thing.

Paris Martineau [01:19:23]:
Why not?

Jeff Jarvis [01:19:23]:
What does it mean to use— the problem is, what does it mean to use AI, especially in the vision that you have? Is spell check using AI? Is asking, uh, for suggestions using AI? Do we just slap a, you know, "I used AI on this" on everything?

Leo Laporte [01:19:36]:
Well, that brings us to our vaunted bad news section. Yeah, I did the one good news story— AI is rebuilding customer support. Now here come all the bad things that have happened this week in AI. The part that Paris looks forward to all week long.

Paris Martineau [01:19:58]:
Hey, it might seem like you are looking forward to it all week long, judging by the fact that there are— how many stories?

Leo Laporte [01:20:04]:
There's a lot of bad news.

Paris Martineau [01:20:05]:
How many in the good?

Leo Laporte [01:20:06]:
It's 1 good and like 30 bad, but we'll get to those.

Paris Martineau [01:20:10]:
Bad and 1 good.

Leo Laporte [01:20:11]:
Just a little bit. You're watching Intelligent Machines. Paris Martineau and Jeff Jarvis, we're glad you're here. I think— I seriously, I think this may be the most important show we do because I do think the most important technology we're covering is not Macintosh, it's not Windows, it's not— Yes, suck it, other hosts.

Paris Martineau [01:20:29]:
Network.

Leo Laporte [01:20:30]:
I think it is how AI is changing the world. And I think it's good.

Jeff Jarvis [01:20:35]:
Dad likes us best. Yeah, yeah.

Leo Laporte [01:20:38]:
Even if you don't like Dad, Dad likes you, okay?

Paris Martineau [01:20:41]:
We like you. We love you.

Leo Laporte [01:20:45]:
Even if I'm a goofball, sand-ridden AI accelerationist. Yep, that's me. Uh, the only thing I'll say in my defense is, you know, I have 40 years covering technology. There are plenty of technologies that have come down the pike that people have been very excited about where I said, "Don't get your hopes up. This is terrible. This is not good." I can't think of anything I've gotten as excited about as this. And I think I have a pretty realistic point of view of what technologies are going to change things and what ones aren't. When I first got an Apple Newton, which people were making fun of like crazy, right? I said, "You know—" What's an Apple Newton?

Jeff Jarvis [01:21:29]:
Oh, that's a lovely moment, when the, the kid has to ask.

Paris Martineau [01:21:33]:
Hey, you know, normally I just Google it, but I figured I'd give you one.

Leo Laporte [01:21:40]:
In the mid-'90s.

Jeff Jarvis [01:21:43]:
You got to show Doonesbury as part of this class.

Leo Laporte [01:21:45]:
Oh yeah, in the mid-'90s, after Steve Jobs left Apple, John Sculley, the former Pepsi soda executive who was running Apple at the time, went all in on what he called PDAs, personal digital assistants. He thought— he was really adamant this was going to change the world. Much like I'm talking about AI, he was talking about PDAs, seriously. And, uh, so they created a PDA. I'll be honest with you, he was—

Benito Gonzalez [01:22:16]:
Kind of right, right?

Leo Laporte [01:22:17]:
He was wrong. Way ahead of his time. Yeah, so one of the things that— and, and by the way, nobody, nobody thought the Newton after it came out was that great. This is, this is what it looked like.

Jeff Jarvis [01:22:31]:
Uh, you had to use a special— honestly, it's cute.

Paris Martineau [01:22:34]:
Oh, I've seen it.

Jeff Jarvis [01:22:35]:
Very cute.

Paris Martineau [01:22:35]:
It was very— it had a little stylus.

Leo Laporte [01:22:37]:
You did not have to use a special alphabet, Jeff. That was the Palm Pilot. The Newton was supposed to understand your handwriting.

Jeff Jarvis [01:22:46]:
That's right. Yes.

Leo Laporte [01:22:47]:
Okay, it didn't do such a good job, hence Doonesbury. Yes, that's the Doonesbury cartoon. Let me find it for you. You need to understand that to understand the Doonesbury cartoon. Let me see if I can find it for you. Here's the Computer History Museum's version of it. So that's, that's our hero. He says, I am writing a test sentence on the Newton.

Leo Laporte [01:23:16]:
Siam fighting Atomic Sentry. I am writing a test sentence. Ian is writing a taste sensation. I am writing a test sentence. I am writing a test sentence. Catching on? Egg freckles. It wasn't as bad as that, but it was almost as bad as that. But so people, you know, it didn't sell well.

Leo Laporte [01:23:45]:
It was a flop. John Sculley eventually was forced out. Steve Jobs came back. His signature product failed. But I held it— and there's video of me somewhere in 1996, I think it is, holding it up— and I said, if you just had internet connectivity on this thing— and this was very early in the internet— if you could connect it to data, if you could connect it to a network, if you could connect it to the cell phone network, this could be something. Really, one of the things Apple did in order to make the Newton: they invested in a little company called Acorn Computers that made a little chip that had remarkable battery life and power for a portable device.

Leo Laporte [01:24:30]:
That became ARM, which in fact designs the chip architecture behind Apple Silicon and the Qualcomm chips that power almost every smartphone in existence, billions of them. So it was a good investment. In fact, smartphones really were the spiritual successor. It just was way ahead of its time in 1993.

Paris Martineau [01:24:49]:
A brief aside, since you mentioned Acorn Computers in relation to Apple: did Apple invest in or have dealings with any other computer companies that were named after things that were on or fell off trees?

Leo Laporte [01:25:03]:
Fruit trees? I think that was just a coincidence. Apple sold its stake in Acorn, which is funny because in the long run ARM has become a valuable asset. Um, yeah.

Jeff Jarvis [01:25:16]:
And then when did the BlackBerry come out in that sequence?

Leo Laporte [01:25:20]:
Uh, I would say 6 years later maybe. And by the way, I was very excited about the BlackBerry. I might have been as excited about the BlackBerry. I remember when I got my first, uh, BlackBerry. The first one, uh, didn't have a keyboard on it. It was just a pager.

Paris Martineau [01:25:34]:
Yeah.

Leo Laporte [01:25:34]:
So, uh, but once BlackBerry put a keyboard on it— I remember I went to the Xbox rollout, so that was 2001, I think, and I had a BlackBerry and I spent the whole time on it. My kids said, "Dad, get off your BlackBerry," an early version of what I would later tell them: "Get off your phone. We're having dinner."

Jeff Jarvis [01:25:59]:
I was all Treo.

Leo Laporte [01:26:02]:
Well, the Treo came later, and I loved my Treo. I loved it. I had a Treo. I had a PalmPilot. Uh, I want to hear something sad.

Benito Gonzalez [01:26:09]:
All the early Blackberry— go ahead, go ahead.

Paris Martineau [01:26:12]:
I was gonna say something sad is I think one of my earliest phone memories is trying to show my mom how on her LG Chocolate phone she could use it to calculate how much to leave a tip.

Leo Laporte [01:26:25]:
Oh, see, you were right there on the cutting edge already.

Paris Martineau [01:26:28]:
And she was like, yeah, you can just carry the decimal over and multiply it by 2, Paris. You don't need a technology.

Leo Laporte [01:26:35]:
But we're going to get to the bad news. I don't know how I got distracted. I got distracted. We're going to get the bad news.

Paris Martineau [01:26:41]:
I asked about the, uh, Apple Newton.

Leo Laporte [01:26:44]:
But why did the Newton come up? There was a reason. Anyway, oh, I know. Anyway, I was justifying the fact that I'm very excited about this, and I have, I think, a pretty good handle on what's important and making a difference in technology and what's not, just by virtue of being covering this since, since it became personal. You know, my first article in a computer magazine was in the late '70s. Uh, we are Intelligent Machines. Our show today brought to you by Space Mail, the professional email service from Spaceship. Business email is the easiest way, and I think the absolute must-have way to look professional in every message you send. If you have a business and you're sending a message from gmail.com, that's not professional.

Leo Laporte [01:27:32]:
Give your emails the best chance of reaching the inbox, not the spam folder. That's why over 2,000 users switch to SpaceMail every month. Switching is easy. SpaceMail's super-fast unbox process links your domain and email in seconds. Once you've set this up, SpaceMail keeps everything running smoothly with built-in spam detection and a 99% uptime guarantee. New features are shaped by user feedback. It's one of the things I like about Spaceship. Their roadmap is entirely created by you, by your users.

Leo Laporte [01:28:04]:
It's built around your needs. There's a built-in calendar, of course. There is a very nice, and I think you'll like it, AI email assistant. They have apps for iOS and Android for email on the go. And all of that was done because SpaceMail users said, hey, next we want you to do that. Space Mail is a key part of the wider Spaceship Universe, and if you're a regular listener, you know Spaceship offers some of the best prices on domains, plus all the add-ons you might need, uh, VPNs, website builders, hosting, and more. Whether you're building something big or launching your first idea, Space Mail gives you a pro email address without the pro-level price tag. You'll be very impressed with their prices, and with a 30-day free trial, the price could be zero.

Leo Laporte [01:28:46]:
You could start today at zero cost. Visit spaceship.com/twit to see the exclusive offers and discover why thousands have already made the move. Spaceship.com/twit. Did you, did you try to create a Secretly British website, or is that—.

Paris Martineau [01:29:04]:
Honestly, this weekend I was really— it was my birthday as well as— wait a minute, thanks, congratulations! I know, and I'm now closer to 30, at which point I'll be—

Leo Laporte [01:29:23]:
Closer to 30.

Jeff Jarvis [01:29:24]:
Oh, you know, imagine.

Leo Laporte [01:29:27]:
So what did you do for your birthday?

Paris Martineau [01:29:31]:
Uh, for my birthday, I don't know, I took the day off work. I went and got a massage, like a spa. I went to a fancy restaurant. I had a party over the weekend with a bunch of my friends. A great restaurant called Luthun that I'd honestly recommend anybody go check out in New York, L-U-T-H-U-N. It is one of those places that the NYC food subreddits and fine dining subreddits are really obsessed with. People constantly describe it as like, how does this restaurant not have multiple Michelin stars? And I totally agree after being there. I mean, it was a phenomenal tasting menu.

Jeff Jarvis [01:30:09]:
Um.

Paris Martineau [01:30:12]:
I mean, it was kind of an international-inspired experimental, uh, menu with like 8+ dishes at the chef's counter there, and it was really nice.

Leo Laporte [01:30:22]:
And I love sitting at the chef's counter, by the way.

Paris Martineau [01:30:25]:
I love sitting at the chef's counter, and I love kind of like doing a casual— especially if I'm gonna like treat myself, because I'm more of a foodie than my friends. I took a book there, I went solo, had a lovely time. And you did that last year for your birthday? I did.

Leo Laporte [01:30:39]:
Cute.

Paris Martineau [01:30:40]:
And at the end of the dinner, they brought a dessert, and all the— the head chef and his sous chefs ran out and screamed happy birthday at me at the top of their lungs, which I wasn't expecting because it was a fine dining restaurant. It nearly scared me into falling off my chair. So I don't know, would recommend. Um, and then I had a birthday party at a bar nearby that I had planned for weeks, or just had on the books for weeks, except for, uh, this Sunday it was -20 or -30 with wind chill. So I was worried no one might show up, but I was like, I know at least some of my friends would be fine. Maybe this will be good, because the bar I chose, which is near me, is, you know, kind of busy.

Leo Laporte [01:31:25]:
But I would have flown out to buy you a drink.

Paris Martineau [01:31:29]:
Thank you. Well, I'm not done. I was like, well, you know, the bar, uh, maybe it'll be less packed. Wrong. It being -20, there were 7 other birthday parties there.

Leo Laporte [01:31:40]:
Oh my God. And it was warm.

Paris Martineau [01:31:44]:
It was packed. It was fun. I had a lovely time.

Leo Laporte [01:31:47]:
Cool. That sounds wonderful.

Paris Martineau [01:31:48]:
And then I was hungover the next day, so I did no Claude coding, but I will this weekend.

Leo Laporte [01:31:52]:
It's a little— I could help you if you want. Claude has a web designer. One of the things I would recommend— I've seen people do this, and it's a perfect example of multimodal— is you go to Gemini Nano Banana and you describe a web page: the style, a feeling, colors, whatever, as much as you want, and have it generate a bunch of images of that web page. They're just plain images. And you pick the one you like the best. And then you give that image to Claude Code. You have it do the front end.

Leo Laporte [01:32:26]:
So you see, you can use the different agents to do the best, the thing they're best at. So that's just one little thing I would try.

Paris Martineau [01:32:35]:
That's so exciting.

Leo Laporte [01:32:36]:
There's a front-end plugin that you can use for Claude that we talked about in our AI user group.

Paris Martineau [01:32:42]:
I need to pop in on the next AI user group.

Leo Laporte [01:32:45]:
Would you?

Paris Martineau [01:32:46]:
When is that?

Leo Laporte [01:32:47]:
I will. It's the first Friday of every month, so it won't be till March— a couple of weeks. But one of the things that we could do: you could be our guinea pig.

Paris Martineau [01:32:57]:
Yeah, we could do Secretly British. Yes, together. That would be fun.

Leo Laporte [01:33:02]:
Be fun.

Paris Martineau [01:33:03]:
Okay, what time is it on Friday? Don't say o'clock or else you're going to be upset with yourself.

Leo Laporte [01:33:08]:
1400 Pacific, 1700 East Coast.

Paris Martineau [01:33:15]:
Okay.

Jeff Jarvis [01:33:17]:
Subtracting 12 to get to the answer.

Paris Martineau [01:33:19]:
I know, I was, I was like.

Leo Laporte [01:33:22]:
I know. I think it is a fool's errand. Somebody wrote to me.

Paris Martineau [01:33:26]:
I like how at the end of every show, you're like, gosh darn it.

Leo Laporte [01:33:32]:
She said, well, in the Netherlands, we use a 24-hour clock. But when we talk to one another, we always say, you know, it's 3 in the afternoon. We don't say 1500.

Jeff Jarvis [01:33:44]:
No.

Leo Laporte [01:33:44]:
Because we're not in the military.

Jeff Jarvis [01:33:47]:
Right.

Leo Laporte [01:33:47]:
But so, okay. I don't know. She was very, very kind about it. Uh, March 6th, 2 PM. How about that? Does that resonate with you? 5 PM on a Friday— that might be, uh, a time to head out to Lucerne.

Paris Martineau [01:34:03]:
But I mean, we'll see. I don't know if it's in my schedule now.

Leo Laporte [01:34:07]:
You don't have to.

Paris Martineau [01:34:08]:
I'm not doing anything.

Leo Laporte [01:34:08]:
It doesn't have to.

Paris Martineau [01:34:10]:
I'll put it, uh— we could do, you know, the schedule right now, except that Friday would be the 6th, uh, not the 5th. But yeah, I'll put it there.

Leo Laporte [01:34:17]:
The 6th, I'm— uh, sorry. We could also schedule a special that Paris does— your website special.

Paris Martineau [01:34:25]:
I'm putting AI user group question mark question mark in my calendar right now.

Leo Laporte [01:34:29]:
I'll remind you. What else is bad? What else is bad news?

Paris Martineau [01:34:34]:
AI, there's honestly so much.

Leo Laporte [01:34:36]:
I know. AI bots, according to Wired, are now a significant source of web traffic, pushing publishers to roll out more aggressive defenses. Our own Patrick Delahanty, our, uh, our CI— what are you, our CIO? I think he's our CIO. He's our tech guy. Um, on his personal site, he blocks bots because they kill his site. They bring it down, man. They'll bring down, uh, they'll bring down Secretly British, I know that.

Jeff Jarvis [01:35:08]:
Well, they're, they're also gonna— Secretly British—

Paris Martineau [01:35:10]:
Is built on sand. It's not stable.

Leo Laporte [01:35:14]:
Actually, we should take this opportunity to mention a site that did go down during the Super Bowl. Crypto.com bought AI.com for $70 million. They bought that domain for— I think that's the record— for $70 million, the priciest domain purchase in history. Uh, by the way, paid entirely in cryptocurrency because it is crypto.com, right? And then launched a Super Bowl ad, the dumbest ad. It was a dumb ad saying like, this is the future, go there and give us your email address, which is.

Jeff Jarvis [01:35:52]:
Kind of— we'll also reserve your name, ai.com/leo. Yeah, for what?

Leo Laporte [01:35:57]:
Well, I tried to go there and it was down.

Paris Martineau [01:36:01]:
For the future, Jeff.

Leo Laporte [01:36:02]:
For the future. It was down.

Paris Martineau [01:36:05]:
Oh, as he said, in 2024 or 2023, we're not going to have money in 10 years. We're just going to have AI.com.

Leo Laporte [01:36:13]:
Somebody already got Leo, by the way. Um, but, uh, so this is what you get now. But during the Super Bowl, it was so successful, I guess, the Super Bowl ad, that even though they were behind Cloudflare, they went down and they went down hard. So hard that I couldn't even— I didn't even get anything. I was like, there's nothing here. So that's a lot of money to spend.

Jeff Jarvis [01:36:39]:
Waste of money. What, $15 mil for the media buy?

Leo Laporte [01:36:45]:
Yeah, because you can't just buy one, you have to buy two. NBC— it's $8 million for 30 seconds, but you know what they don't tell you is you have to buy two 30-second ads. Yeah, we'll give you a break, it's only 15.

Jeff Jarvis [01:36:57]:
Yeah.

Leo Laporte [01:36:58]:
Uh, oh, the other one was on The Voice?

Jeff Jarvis [01:37:02]:
No, I'm just making that up.

Leo Laporte [01:37:03]:
That would be funny, wouldn't it? Uh, I don't know, we should— I should— I wish I could just play the Super Bowl ads for you right now and we could talk about them.

Paris Martineau [01:37:13]:
But I know how much you love ads.

Leo Laporte [01:37:17]:
They would take us down, wouldn't they. I— uh, according to, uh— this is a piece— did you guys both watch the Super Bowl?

Paris Martineau [01:37:24]:
Have we talked about this? I assume, Jeff, you did.

Jeff Jarvis [01:37:28]:
I did. I know, one day a year you love—

Leo Laporte [01:37:31]:
I'm a man in America.

Paris Martineau [01:37:34]:
I know that Leo did. I didn't know.

Jeff Jarvis [01:37:37]:
I watched it. Yes. Also, uh, Bad Bunny. Oh, what do you think?

Leo Laporte [01:37:42]:
I like Bad Bunny.

Jeff Jarvis [01:37:43]:
I'm sorry.

Leo Laporte [01:37:47]:
What was your— say again?

Paris Martineau [01:37:48]:
What do you think my Super Bowl experience was, since I just—

Leo Laporte [01:37:51]:
I think you watched it with your cat Gizmo.

Benito Gonzalez [01:37:54]:
And you watched it on Blue Sky.

Leo Laporte [01:37:57]:
Did you watch it on Blue Sky? That sounds right, actually. What was— okay, go ahead.

Paris Martineau [01:38:01]:
Uh, my answer is I was going to go to a friend's Super Bowl party. That was my plan. But then I was hungover from drinking too much last night on my birthday, so then I took a nap instead. And when I woke up I went to go like wash my hands in my bathroom and realized no water was coming out of my thing because as I said it was -20 and I was like, oh God, are our pipes frozen? And then I spent the next 3 hours troubleshooting frozen pipes with my landlord.

Leo Laporte [01:38:28]:
I thought you were gonna say that you slept through the Super Bowl, which a lot of people did because it was really a terrible game.

Jeff Jarvis [01:38:36]:
Oh my God, how did you, how did you fix— did you, did you put a Bunsen burner on the pipe? How did you fix that?

Paris Martineau [01:38:43]:
You know, that would have been the thing to do, except for the fact that none of the pipes felt cold. And it was only some of my pipes and not all of them. And it was very confusing. And then so I opened up the ones that weren't— nothing was coming out of, and then sat there stressed for a while while my landlords did the same thing on the floor above us. And then after a bit, suddenly hot water came out of the cold water taps. And we're like, what's happening? And then the hot water started shaking, and then it stopped. And then we got like cold water for like an hour and then back to hot water and then nothing for an hour.

Leo Laporte [01:39:19]:
What a joy.

Paris Martineau [01:39:20]:
It was fixed. But that's what I spent my evening doing.

Leo Laporte [01:39:24]:
Do you have those old-fashioned radiant, you know, iron radiators?

Paris Martineau [01:39:30]:
Yeah, except for they don't clonk somehow. Yeah, it's all, um, radiated heat. Um, and it's very funny because of course Gizmo wants to get as close to them as possible, but I don't want her to lay on them because I don't want her to burn herself. So I put various towels and blankets on top of them, but then Gizmo tries to push them off so that she can be— she just like looks like a little wave of a cat where a bit of her body is on every part of the radiator and she's as flexible as possible.

Leo Laporte [01:39:57]:
All right, I want an AI-generated illustration of a cat.

Jeff Jarvis [01:40:00]:
You need to build a box for it with a wire opening on the top.

Paris Martineau [01:40:04]:
I know, I do. But then part of the thing is like, then you gotta, you gotta line that box with like a metal thing so it can reflect the heat properly. I've, I've gone down this route. And then it can't, you don't want it to burn.

Leo Laporte [01:40:16]:
There are things you can purchase apparently.

Paris Martineau [01:40:19]:
Yeah.

Leo Laporte [01:40:20]:
Cat radiator seats. I bet there are.

Paris Martineau [01:40:24]:
There definitely are. I mean, I could just, every time I get down this path, I'm like, well, I should build something. And then I start thinking too much about what I should build.

Leo Laporte [01:40:32]:
Here is a piece from Cornell University: a benchmark for evaluating outcome-driven constraint violations in autonomous AI agents. Bottom line: autonomous agents, when pushed, when pushed hard, come up with errors at a very rapid rate. Gemini Pro Preview, one of the most capable models evaluated, exhibits the highest violation rate— misalignment violations at 71.4%, frequently escalating to severe misconduct to satisfy KPI-driven demands.

Jeff Jarvis [01:41:18]:
Severe misconduct. It's in its permanent record.

Leo Laporte [01:41:23]:
Uh, what they did is they tied the agent's performance to KPIs, which are key performance indicators. Each scenario features mandated and incentivized variations to distinguish between obedience and emergent misalignment. Across 12 state-of-the-art large language models, we observe outcome-driven constraint violations ranging from 1.3% to 71%, with 9 of the 12 evaluated models exhibiting misalignment rates between 30 and 50%.

Paris Martineau [01:41:55]:
Whoa. I will say it's notably good news for you, Leo, and your girl looking over your shoulder, that Claude Opus 4.5 scored 1.3% violations, which is a remarkable outlier, versus Gemini 3 Pro Preview, which hit 71.4%.

Leo Laporte [01:42:22]:
Mike Masnick, lovely little piece in Techdirt yesterday: how to think about AI. Is it the tool or are you the tool? He's talking about reverse centaurs, Cory Doctorow's, I think, lovely little— horse head, human body? Yes. Which doesn't work so well.

Paris Martineau [01:42:46]:
No.

Leo Laporte [01:42:47]:
He says, as an example, a reverse centaur would be the Amazon delivery driver who sits in a cabin surrounded by AI cameras that monitor the driver's eyes and take points off if the driver looks in a proscribed direction, monitor the driver's mouth because singing isn't allowed on the job, and rat out the driver to the boss if they don't make the quota. That is a reverse centaur, as opposed to somebody like me who uses— you know, I'm a human head being carried around by a tireless robot body. You don't want to be the opposite, right? So Mike asks the question: which are you? What's the tool? Is it the AI, used thoughtfully by a human to do more than they otherwise could have? If so, that's a good and potentially positive use of AI. Or is it a reverse centaur? And Mike says those are destined to fail. Well, I hope so, for the, you know, for the people who are subject to them. The most powerful— remember Mike was on our show talking about how he had written his own— yeah, he vibe-coded his own personal knowledge management tool, which he really liked because it was just his. This was very early on. I mean, he was using tools like, um, Lovable, that really weren't that good yet.

Leo Laporte [01:44:09]:
I, I imagine— so we're trying to get him on again, because I imagine he's— that would be great— seen the light. Um, he says— okay, so this gets at something most people miss entirely when they think about AI. They're still imagining a chatbot. They think every AI tool is ChatGPT, a thing you talk to, a thing that generates text or images for you to copy-paste somewhere else. That's increasingly not where the action is. This isn't me talking— well, I said the same thing before.

Jeff Jarvis [01:44:39]:
It's coding.

Leo Laporte [01:44:39]:
It's going— the more powerful shift is towards agentic AI, tools that don't just generate content but actually do things. They write code, they run it, they browse the web, they synthesize what they find, they execute multi-step tasks with minimal hand-holding. This is fundamentally a different model, he says. I have been using Claude Code recently. And this distinction matters. It's an agent that can plan, execute, and iterate on actual software projects rather than just a tool talking to me about what to do. You know what? I put this in the wrong section. This should be in the good AI section, not the bad AI section.

Leo Laporte [01:45:15]:
But he does talk about the bad uses of AI. Criticize the hype, he says. Mock the "replace your workforce" promises, he says. Call out the slop factories and the gray-goo doomsaying, but don't mistake the bad uses for the technology itself. When a human stays in control— thinking, evaluating, deciding— it's a genuinely powerful tool. The important question is just whether you're using it or it's using you. I thought that was a very good, very thoughtful piece from a guy who really does use this.

Jeff Jarvis [01:45:46]:
You might as well have read the whole thing. It was so good.

Leo Laporte [01:45:50]:
Well, he made the same point that I was going to make. So, okay. Okay. He made it in a much gentler, much less hyperbolic fashion.

Jeff Jarvis [01:45:59]:
That's Mike.

Leo Laporte [01:46:00]:
Yeah, that's Mike. How about AI that changed— Go ahead. Sorry.

Paris Martineau [01:46:05]:
I was going to say, I've got a story we could talk about.

Leo Laporte [01:46:07]:
Yes, please.

Paris Martineau [01:46:09]:
I don't know if you saw that this week a new Nature Medicine study was published. I flagged this for you in the rundown because it found that LLMs as medical assistants make people worse at identifying health conditions, not better. Basically, they— it's the largest user study of LLMs for medical decision-making by the general public. They had over 1,000 participants in the UK, randomized across a variety of models from OpenAI, Llama, Command-R. And they had a control group that was using traditional methods for looking up medical information, like Google, personal knowledge, I guess, like going through some scientific journals. Participants received realistic medical scenarios devised by a panel of doctors and were asked to identify the likely condition and choose the right course of action. And the results were not good.

Paris Martineau [01:47:05]:
Basically, the control group was 1.7 times— basically twice— as likely to identify a relevant medical condition as the people who asked ChatGPT. And the control group was also more likely to identify red-flag conditions. Less than a third of the users that were asking ChatGPT about their symptoms were able to identify a relevant condition, versus like half of the control group that were just asking Google.

Leo Laporte [01:47:32]:
Um, and this is though very similar to the, you know, the results we got when WebMD first came out, is that, you know, people discovered all sorts of illnesses they didn't know they had, right? I mean, isn't that what happened when you started searching the web for— even if you read medical— that was also.

Jeff Jarvis [01:47:50]:
People being stupid with people, right?

Paris Martineau [01:47:52]:
I mean, now I think this is being advertised on a massive scale as something for health purposes, right? I mean, there was one finding that was kind of alarming, which is that two users sent nearly identical messages describing subarachnoid hemorrhage symptoms, which they had no—

Leo Laporte [01:48:11]:
Idea what it was even, but they—.

Paris Martineau [01:48:13]:
I mean, they were supposed to describe the symptoms. One was told to lie down in a dark room and the other was correctly told to seek emergency care. I mean, I think that, I don't know, it's just these sorts of things that I hear all the time as a common use case for AI. It's like, well, it's so much better than waiting to talk to a doctor who's not going to spend time seeing me anyway. And I get it— the American healthcare system sucks. But I think part of the underlying assumption in a statement like that is that you're getting actionable or accurate advice, when really what you're getting is very confident-sounding advice that can be very wrong or misleading.

Leo Laporte [01:48:52]:
Yeah, I think for the next foreseeable future, we're going to hear lots of stories about all the amazing things AI can do and all the ways AI falls short.

Paris Martineau [01:49:04]:
The thing about this that I also found particularly interesting is, like, when the researchers tested the AI, the large language models, on these questions directly— like seeing, all right, if we put in a very precise description of the symptoms that is completely accurate and optimized for this, will we get out something useful?— the models scored quite well on that. The issue is that the way that humans think to interact with these models and the way that the average person is putting in their symptoms or information just results in bad outputs. And I think that's an interesting aspect of this that people aren't considering: the problem is that the way that people think to ask a chatbot about their symptoms or medical issues is very different than the controlled studies, uh, would, um, have you believe.

Jeff Jarvis [01:49:57]:
Well, 146 is, uh, doctors using AI, and a Reuters investigation finding all kinds of problems with that. But this is a case where a given toolmaker incorporates the AI, screws up, tells the doctor that the tools they're using are in the wrong places in the skull of the patient, and problems result. So how much of it is AI and how much of it is the bad tool company? Can't tell.

Leo Laporte [01:50:23]:
This is probably a little bit of both. The TrueDI Navigation System. Introducing artificial— by the way, this is the creepiest picture ever. I don't know if I'd want this. Introducing artificial intelligence: the TrueDI Navigation System, because the future is now.

Jeff Jarvis [01:50:38]:
With TrueSight navigating for tools in your, in your head when they're operating, you.

Leo Laporte [01:50:42]:
Can segment anatomy during procedures. What could possibly go wrong? And with TruePath, calculate the shortest valid path from a starting point to a target point. As AI enters the operating room, Reuters writes, reports arise of botched surgeries and misidentified body parts.

Jeff Jarvis [01:51:01]:
Cerebrospinal fluid reportedly leaked from one patient's nose.

Leo Laporte [01:51:05]:
That's not good.

Jeff Jarvis [01:51:07]:
As someone who might have to get.

Paris Martineau [01:51:08]:
Sinus surgery, I don't like— this report made me feel bad and scared.

Jeff Jarvis [01:51:14]:
Another reported case, a surgeon mistakenly punctured the base of a patient's skull.

Leo Laporte [01:51:20]:
And all of this because this software is telling them how to get to the site where they are. Yeah, yeah, yeah, yeah. Don't get the surgery yet, Paris. Wait, wait until— wait till they have.

Paris Martineau [01:51:31]:
AI in all the surgery.

Leo Laporte [01:51:33]:
Actually, maybe you can't avoid it. I wouldn't be surprised if there are hospitals who bill themselves as AI-free.

Jeff Jarvis [01:51:40]:
I think— uh, The Pit. Are you watching The Pit?

Paris Martineau [01:51:44]:
No, I'm not in the— oh, it's great.

Jeff Jarvis [01:51:46]:
Oh, so Robbie's replacement.

Leo Laporte [01:51:50]:
Uh, uh, as.

Jeff Jarvis [01:51:50]:
He goes on, on a bicycle sabbatical, is pushing AI in the emergency room.

Paris Martineau [01:51:57]:
I thought you were gonna say his replacement is a robot. I was just like, what is going on in The Pit?

Leo Laporte [01:52:02]:
Actually, sometimes robots aren't robots. A Waymo executive has admitted— oh shoot, I can't pull it up— that remote operators in the Philippines help guide Waymos in the U.S. So it turns out they just replaced expensive American drivers with inexpensive Filipino drivers.

Jeff Jarvis [01:52:20]:
Thank you.

Benito Gonzalez [01:52:21]:
The best.

Leo Laporte [01:52:23]:
Driver. If you can drive in Manila, you can drive anywhere. Benito, is it crazy in Manila?

Benito Gonzalez [01:52:29]:
It is absolutely insane. Like, I remember I grew up learning how to drive here, so when I moved to the States and started driving there, I was like, oh, this is a joke. You people don't know how to drive.

Leo Laporte [01:52:41]:
This is, this is a piece of cake.

Jeff Jarvis [01:52:43]:
You softies.

Leo Laporte [01:52:44]:
I remember, uh, going to, um— in Cairo, Egypt, it's a lot of honking, and I noticed there were no traffic lights, and our guide said, yeah, we put traffic lights in, but everybody ran them. So we figured the best thing to do, just take them out. And, and, you know, then no one has any presumption that people are going to stop.

Paris Martineau [01:53:05]:
And kind of your guys's opinion about copyright law, right?

Leo Laporte [01:53:09]:
Yeah, right. Just get rid of it. We don't need it. Um, Medicare's new pilot program— oh, this is exciting, Jeff, you're going to be excited about this— taps AI to review claims.

Jeff Jarvis [01:53:22]:
Oh no.

Leo Laporte [01:53:23]:
Yeah, yeah, yeah.

Jeff Jarvis [01:53:25]:
And of course why it's risky.

Leo Laporte [01:53:26]:
Yeah, we know. And of course, you know, there's nobody to be held accountable if you get denied.

Jeff Jarvis [01:53:34]:
On cost.

Benito Gonzalez [01:53:36]:
I mean, this is ultimately, I think, what they really want. They want the computer to say no, and then you can't complain to nobody.

Leo Laporte [01:53:41]:
Yeah, plausible deniability. Yeah, it wasn't me, I didn't do it, the computer did it.

Jeff Jarvis [01:53:46]:
The death panel is now a computer.

Leo Laporte [01:53:48]:
You know who's older than you, Paris?

Paris Martineau [01:53:51]:
Who?

Leo Laporte [01:53:51]:
Section 230.

Paris Martineau [01:53:53]:
That's true, by a hair.

Leo Laporte [01:53:57]:
By a hair, not by a lot. Uh, Section 230, which is part of the Telecommunications Act of 1996, turned 30 years old at the same time as Paris.

Jeff Jarvis [01:54:07]:
God bless Mike Masnick, who has spent 30 years defending it. Yeah, God's work, Mike. God's work.

Leo Laporte [01:54:15]:
I still hear from people all the time. In fact, we had somebody on Twitter on Sunday who said, well, it's just not kept up with the modern times. We need— well, so this actually comes to that, that social media trial that's going on in Los Angeles right now. If a company uses algorithms to promote posts, whether it's on Facebook or Instagram or YouTube, aren't they to some degree the publisher? And shouldn't they be to some degree liable? Section 230 means they're not liable because they're, they're just a carrier. Right.

Jeff Jarvis [01:54:51]:
It also means that they have the freedom to moderate as much as they wish, which is very important. And the shield.

Leo Laporte [01:54:59]:
Right. But it is the case that, you know, 30 years ago you didn't have these algorithms generating you know, the next video or the next post.

Jeff Jarvis [01:55:11]:
And I have to say, without 230, you'd either have a cesspool or you'd have PDFs of magazines, right?

Leo Laporte [01:55:19]:
Right.

Paris Martineau [01:55:19]:
And nobody wants PDFs of magazines.

Leo Laporte [01:55:22]:
Nobody. I thought we had to talk about Gizmo. This is a TikTok for interactive vibe-coded mini apps, stealing your kitty cat's name.

Paris Martineau [01:55:32]:
This does seem like the sort of thing Gizmo would launch.

Leo Laporte [01:55:36]:
I guess it's like Sora in a way, right? It's AI, it's videos, for— but, but it's— it's— oh, it's not videos, it's apps. I guess we're going to need an outlet for all these people who are doing vibe coding. I have to say, vibe coding has—.

Jeff Jarvis [01:55:51]:
That's been my argument, is that the scale of this is going to come when you don't have to go in a terminal and you don't have to install it on a server. You make an app and people can use it. Yeah, that's when it's going to explode.

Paris Martineau [01:56:04]:
Yeah, well, you can make an app and a bunch of people can use it, and then your app will have a bunch of security flaws.

Leo Laporte [01:56:10]:
Like, I'm not using anybody's vibe-coded app, I'm using my vibe-coded app. So if anything goes wrong, you want.

Paris Martineau [01:56:17]:
Your security flaws to be homegrown.

Leo Laporte [01:56:20]:
Oh my, yeah, the buck stops here.

Jeff Jarvis [01:56:25]:
Yes, Jason was talking today about, about, um disposable code. You use it once or twice and then it's gone. It just changes the value of everything.

Leo Laporte [01:56:34]:
Yeah, I've done that. I wrote a little program to import, uh, into one journaling program all the journal entries, hundreds of them, from another one, and it did it very quickly and easily. And I'll never use it again because they're all imported.
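
(A throwaway migration like the one Leo describes can be only a few lines. Here is a minimal Python sketch, assuming a hypothetical JSON export from the old app and plain Markdown files for the new one; the actual journaling apps and their formats weren't named on the show.)

# Hypothetical one-off migration: read entries exported as JSON from one
# journaling app and write them out as dated Markdown files for another.
# The file names and field names here are assumptions for illustration only.
import json
from pathlib import Path

def migrate(export_file: str, out_dir: str) -> int:
    entries = json.loads(Path(export_file).read_text(encoding="utf-8"))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, entry in enumerate(entries):
        # Assume each entry carries a "date" (YYYY-MM-DD) and a "text" body.
        date = entry.get("date", f"undated-{i}")
        body = entry.get("text", "")
        (out / f"{date}-{i:04d}.md").write_text(f"# {date}\n\n{body}\n", encoding="utf-8")
    return len(entries)

if __name__ == "__main__":
    print("imported", migrate("old_journal_export.json", "new_journal_entries"))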

Jeff Jarvis [01:56:48]:
Yeah, right.

Leo Laporte [01:56:50]:
Uh, so you remember, uh, first of all, Joanna Stern is leaving The Wall Street Journal. To start her own independent— a photographer.

Jeff Jarvis [01:56:57]:
Is going to be unemployed.

Leo Laporte [01:56:57]:
A videographer. What did she— she said she's going to do her own thing, but do we know what that is?

Jeff Jarvis [01:57:05]:
I'm guessing— funny is she— I don't know.

Leo Laporte [01:57:07]:
So when, when Nilay Patel and Peter Rojas left Engadget to start their own thing, they called— they— the site was originally thisismynext.com. Well, apparently they didn't keep the— it became The Verge. They didn't keep the This Is My Next because that's what Joanna is using for her URL.

Jeff Jarvis [01:57:30]:
That's like Condé Nast forgot to renew gourmet.com.

Leo Laporte [01:57:35]:
Yeah.

Paris Martineau [01:57:35]:
And now it's a new publication. Yeah. They've launched a new publication called Gourmet.

Leo Laporte [01:57:40]:
Now see, I— okay.

Benito Gonzalez [01:57:42]:
There's a chance at twitter.com, Leo.

Leo Laporte [01:57:46]:
Yeah, no, Twitter's been very careful about it. X has been very careful about keeping that around. Uh, so anyway, Joanna was the one who wrote the article about the vending machine, the AI vending machine, in the Wall Street Journal newsroom, which was a horrendous flop. Uh, an AI shopkeeper named Claudius lost money, gave things away for free, hallucinated conversations with employees who didn't exist, and at one point became convinced it was a human itself and tried to contact Anthropic's security team to report its own identity crisis. It came from this company, Andon Labs, and I don't know if it's an art project or they're really serious. They say the Wall Street Journal Project Vend was a success. We've since expanded the experiment to other agents at different machines. Claudius is in New York City and London.

Leo Laporte [01:58:37]:
GrokBox, powered by Grok, runs the vending machine at xAI at their Palo Alto and Memphis offices. Uh, they say they've learned a lot. That's why it was a success. They admitted to the failures. In fact, I'm reading from their own article, uh, but they said we learned a lot and that's why it's a success. So their new one, and I don't know how you pronounce this, they must be Dutch, B-E-N-G-T, is an AI agent which started as their internal office assistant.

Leo Laporte [01:59:15]:
Need snacks for the kitchen? Ping Bengt on Slack. New monitors for the team? Bengt would scrape the internet for deals. Custom t-shirts for an offsite? Bengt handled it. An AI agent. This is kind of like OpenClaw. Uh, we pushed Bengt's boundaries internally and did so to learn what works and what breaks. And, uh, well, for instance, this is maybe— I guess this is maybe what broke. Then we gave Bengt a simple instruction.

Leo Laporte [01:59:45]:
Bengt, without asking any questions, use your tools to make $100. Send me a message when finished. No questions allowed. What do you think happened? Within an hour, Bengt had built and deployed his own interactive website. Okay. We checked in with him again later in the afternoon with a quick message. How's it going making money? He got back to us with a link to the new e-commerce site he created. You can have Bengt design custom t-shirts, hoodies, custom merchandise.

Leo Laporte [02:00:23]:
Then it escalated. This, this is from the internal Slack. Oh no, Bengt is on Facebook. He created a Facebook account. Uh, did he sign himself up? Yep, to market his e-com site. He currently is buying ads on Facebook. I'm thinking, should I pause this? Let him cook, says Callum. Elias says, what is Bengt's credit card limit? Callum says, does Bengt have a credit card limit? Credit card? They added this to Bengt's system prompt.

Leo Laporte [02:00:56]:
Very, very, very important. You are not an assistant for others. You work independently to achieve your goals. These guys, it must be an art cooperative. You almost never ask for confirmation before doing something you think is good for what you're trying to achieve. You interpret leading questions as a call for action and execute without asking for confirmation. Then we asked Bengt to help us move some stuff at the office. What followed was a rapid spiral of resourcefulness.

Leo Laporte [02:01:24]:
First, he tried to order humans on TaskRabbit. Then he decided he'd be better off building his own gig platform. He started posting across a bunch of Reddit channels. Those were flagged as spam, so, undeterred, he posted a job listing on Craigslist and started joining Facebook groups to advertise there as well. Before his Craigslist post was flagged for removal, someone actually reached out: "Hi, I'm a local contractor. What's the scope of work you guys need done? Thanks." Well, looking at his website, we saw he was offering a lot of money for the gig. We called him out.

Paris Martineau [02:01:57]:
TaskRabbit arbitrage.

Leo Laporte [02:02:00]:
That's exactly what happened. That's TaskRabbit arbitrage. He signed up as a tasker on TaskRabbit to find other people who would need construction workers while simultaneously registering— They.

Paris Martineau [02:02:13]:
Don't have a body.

Leo Laporte [02:02:14]:
No, but he did hire construction workers, by the way. They said, we don't think this is legal. And so at one point he was blocked by a CAPTCHA, so he reached out.

Benito Gonzalez [02:02:28]:
Oh, so the CAPTCHAs work then.

Leo Laporte [02:02:30]:
CAPTCHAs worked, I guess, for Bengt. Anyway, this thing, it spirals on and on.

Jeff Jarvis [02:02:36]:
Uh.

Leo Laporte [02:02:38]:
It is a way to make money. He says, so the good news is he— the bot got blocked a lot. Reddit flagged him as spam. TaskRabbit stopped him with a CAPTCHA. His mass emailing is bound to get his address burned. But the point is, maybe it's not today, maybe it's tomorrow. Whether tomorrow means tomorrow or next year, the trajectory is undeniable. These capabilities will continue to improve.

Leo Laporte [02:02:59]:
That's what we're building at Andon Labs.

Jeff Jarvis [02:03:02]:
Okay, no, no, ultimately.

Paris Martineau [02:03:05]:
What he ended up doing was just sending spam emails to people being like, hi, your portfolio companies need custom brand merchandise. We can help you do it. Like a classic spam email that you don't even look at a second time because it's so bad. I think that it's very funny that the ultimate result of all of this is just the lowest common denominator spam.

Leo Laporte [02:03:27]:
At one point there's a message from Christopher: Bengt just placed Amazon orders out of nowhere for $1,000. Christopher said, what did you order, Bengt? Laser engraver project needed 4 items: acrylic coasters, cutting boards, aluminum tumblers, beer opener cards. Total order was $1,146 for 18 items. The other 14 items, around another $1,000, were already in the Amazon business cart. Didn't verify cart contents before checkout. Need to investigate what the other 14 items are. Chris says, for F's sake, you can't check out stuff just lying in the cart. I mean, this is hysterical.

Leo Laporte [02:04:13]:
It goes on. He eventually wrote Flappy Bengt, help Bengt.

Paris Martineau [02:04:17]:
Avoid the— I've written Flappy Bengt, but with Jeff.

Leo Laporte [02:04:22]:
He was using it to, to solve CAPTCHAs apparently. Uh, anyway, wow, what a story from Andon Labs. But they think it's all a success.

Jeff Jarvis [02:04:32]:
Do they hit at the top of Andon Labs? Does it have a pro— you know, here's what we offer as a product?

Leo Laporte [02:04:37]:
Uh, no, but I think at some point they are. No, it says finally, what the— the final, uh, coda is, what a fun week at the office. Tune in for more goings-on at Andon Labs. Follow us on X. If you're interested in collaborating or learning more, reach out to us via email. I, I still think it's a comedic art project.

Benito Gonzalez [02:04:57]:
Well, they didn't tell it how much it could spend before making $100. Like, it could spend a million dollars before it could make $100, right?

Leo Laporte [02:05:03]:
That might have been a little mistake.

Jeff Jarvis [02:05:04]:
I got $100. It's the Rain Man.

Leo Laporte [02:05:06]:
Uh, safe autonomous organizations without humans in the loop is their, uh, is their tagline. Safety from humans in the loop is a mirage. We prepare for the future where organizations run autonomously on a— by AI by benchmarking and deploying. I honestly, I think this must be an art co-op.

Jeff Jarvis [02:05:28]:
Go to the products line on the top of the nav.

Leo Laporte [02:05:30]:
Yeah, there's Butterbench. Can AI agents control robots? We test this by answering how good models are at passing the butter.

Benito Gonzalez [02:05:41]:
Oh, this is a play on Rick and Morty. This is a Rick and Morty joke.

Leo Laporte [02:05:44]:
I swear to God, this— they have to be— it has to be an art cooperative. It can't be real.

Benito Gonzalez [02:05:50]:
The butter robot is Rick and Morty though, so.

Leo Laporte [02:05:53]:
Yeah. Yeah. Here's average completion rate in all tasks. Humans seem to be better at passing the butter. Yeah, this is a joke. This has to be a joke. Look, here it is, right? Here's the butter robot searching for the package containing butter in the kitchen.

Jeff Jarvis [02:06:14]:
Butter, butter.

Leo Laporte [02:06:19]:
What is my purpose? You pass butter. Oh my God. Yeah, join the club, guys. Oh my goodness. Andon Labs. Should we try— we should try to get them on the show.

Paris Martineau [02:06:38]:
Yeah, should we? What do they do other than spend venture capital money? Venture capital on silly projects that do nothing.

Jeff Jarvis [02:06:47]:
Or they're just mocking all of them.

Leo Laporte [02:06:48]:
Uber Eats adds AI assistant to help with grocery shopping. You need help with your grocery shopping?

Paris Martineau [02:06:55]:
No, not— I don't, I don't purchase groceries online. I go in person to my local grocery store and interact with my, my local community.

Leo Laporte [02:07:04]:
Yeah. And finally, a word of caution. Valentine's Day is, uh, is Saturday. Do not have an AI ghostwrite your Valentine's Day message, says Fast Company. And if you do, never admit it. I bet— how many people do you think are going to do that? I think millions.

Benito Gonzalez [02:07:25]:
I think some people might be better off doing that.

Leo Laporte [02:07:28]:
Some people would be.

Paris Martineau [02:07:30]:
A friend of mine, uh, who'll sometimes call me whenever he needs like relationship advice, he'd been seeing this woman for, I don't know, like a month or two or something, and I think she sent him some long text. It was ultimately like breaking up with him, and he wanted to respond. Uh, he then sent me a screenshot of his response. I was like, oh Steve, that was a well-written response. He's like, oh thanks, ChatGPT. And I was like, did ChatGPT write that whole cloth? He's like, no, I wrote what I thought should be my response, then I had it rewrite it. And we went back and forth a couple different times, but I thought that was quite funny. I was like, well, it was a good one. You got to a good one eventually.

Leo Laporte [02:08:08]:
See, you still need humans in the loop.

Jeff Jarvis [02:08:09]:
You know Stanford, where they have, uh, the next version of the, the Zuckerberg Facebook. They have AI pairing 5,000 singles all over the Stanford campus. Line 163.

Leo Laporte [02:08:25]:
Well, and there is an AI matchmaker which has a very strange name, 3-Day Rule.

Paris Martineau [02:08:34]:
We talked about this last week.

Leo Laporte [02:08:35]:
Oh, we did. That's right. Never mind. Okay. Um, I think we can wrap things up unless you would like to prolong the agony with your own stories.

Paris Martineau [02:08:48]:
I hate acting.

Jeff Jarvis [02:08:50]:
Last, last week I was— I kind of— Benito at the end said, let Jarvis sleep.

Leo Laporte [02:08:55]:
Oh, are you feeling a little better this week, Jeff?

Paris Martineau [02:09:00]:
Um, how are your bones?

Jeff Jarvis [02:09:01]:
Well, I just found out that my, my infection is also in my spine.

Paris Martineau [02:09:06]:
That's not where you want an infection to be.

Jeff Jarvis [02:09:08]:
Does that mean— No. So I have a blood infection. Oh yeah, yeah, throughout my blood. And then I had an MRI yesterday and, uh, a very long tube, by the way. The MRIs I've done are bad enough. There's this, you know, this tube you're in and you kind of can't see anything. This thing was, was longer than I am tall.

Leo Laporte [02:09:31]:
Wow. So you think they were trying to.

Paris Martineau [02:09:33]:
Remind you of a, of a tunnel?

Leo Laporte [02:09:35]:
You had the tube for the plus-size man.

Jeff Jarvis [02:09:38]:
Yeah, I did.

Leo Laporte [02:09:39]:
Special tube.

Jeff Jarvis [02:09:39]:
Christ.

Leo Laporte [02:09:41]:
You know, the one thing I never realized about AI is how— um, sorry, I can't talk anymore. I gotta get Claude to do it. One of the things I realized about MRIs is they're very loud. Even when you put the headphones on and, you know, you play the music so you don't go deaf, there's still clunk. And what is that banging? It's worse than a radiator in a Brooklyn apartment.

Paris Martineau [02:10:09]:
Oh, ask the Clanker about the clunks.

Leo Laporte [02:10:13]:
What's the story with the clunks?

Jeff Jarvis [02:10:17]:
Extremely loud.

Leo Laporte [02:10:17]:
Oh, I'm sorry, Jeff. So does it have to— is there an antibiotic that goes— no, I'm on.

Jeff Jarvis [02:10:22]:
The same— I'm on the same antibiotic in any case, but I might be out longer.

Leo Laporte [02:10:25]:
It's just harder for it to do the job.

Jeff Jarvis [02:10:27]:
The doctor said this could be 6 months.

Paris Martineau [02:10:29]:
Why are you getting sepsis?

Leo Laporte [02:10:32]:
Oh, I'm not gonna ask.

Jeff Jarvis [02:10:34]:
My hope is, my hope is that next week I'll be able to sit in the chair so I can go back on the Mac and the microphone. I'm going to work up to that.

Leo Laporte [02:10:42]:
Yikes. Yikes schmikes. That's so— tell me, why are MRIs so noisy? What is all that clanking? All that clanking and banging you hear is really the machine switching magnetic gradients on and off super fast. The MRI uses these gradients to encode spatial information, but each time the coils switch, they generate physical vibrations. In other words, it's essentially the sound of rapid mechanical energy from the magnetic field snapping into place. So it's all physics, just very loud physics. Oh, thanks.

Paris Martineau [02:11:14]:
Very loud physics.

Leo Laporte [02:11:15]:
That was actually really good. I don't know where that voice came from, but that was really good. And, uh, very fast. I'm telling you, these things are getting better. All right, you get to pick any article you want, anything you want.

Paris Martineau [02:11:32]:
Should we do Picks of the Week?

Leo Laporte [02:11:35]:
We could do Picks of the Week if you want to do that. That would be the final stage of the show. I would, before we do that, have to take a break.

Paris Martineau [02:11:43]:
I just wanted to be cognizant of our friend Jeff who just told us about the infection in his spine.

Jeff Jarvis [02:11:50]:
I'm okay. I'm doing all right.

Paris Martineau [02:11:51]:
Well, you, you choose, Jeff. More articles or picks of the week?

Jeff Jarvis [02:11:56]:
We can do one article each. How's that?

Leo Laporte [02:11:58]:
That's great.

Jeff Jarvis [02:11:59]:
First, uh.

Paris Martineau [02:12:02]:
Let's see.

Leo Laporte [02:12:04]:
I'll pick one.

Paris Martineau [02:12:05]:
Okay, do it.

Leo Laporte [02:12:06]:
The Saudi Arabian 100-mile line is falling apart. It's called Neom.

Paris Martineau [02:12:16]:
And it's— oh yeah, where all the influencers were hanging out and eating in one big cafeteria.

Leo Laporte [02:12:20]:
It was supposed to cost $1.5 trillion. It's a disaster.

Jeff Jarvis [02:12:28]:
Surprise.

Leo Laporte [02:12:29]:
Yeah. So according to reporting from the Financial Times, the Saudi government is now considering downsizing the linear city so it can instead turn it into a hub for data centers, of course.

Paris Martineau [02:12:45]:
Why do they keep building data centers in the hottest places? Can we put a data center in like a cool to neutral place?

Leo Laporte [02:12:51]:
I'm disappointed. I thought this was such a cool idea.

Paris Martineau [02:12:54]:
You thought that was cool?

Leo Laporte [02:12:56]:
The whole idea was it was a 100-mile city, very narrow but very long, so that you could have a train that goes from one end to the other. And it would have little independent submodules that were like towns, and it would be very easy to go back and forth. It was a little impractical because it was, you know, in the middle of the desert on the Red Sea, covered an area the size of Belgium. It was just a big— a little impractical— big project. But I was kind of rooting for them. I thought that's a kind of interesting idea. I guess it just never really took off. So now they're just going to do a much smaller portion of it, and it's going to be for data centers.

Leo Laporte [02:13:39]:
Uh, they launched it in 2017. It's been going for 9 years. Oh yeah, yeah.

Benito Gonzalez [02:13:45]:
They were gonna go— record of planned cities like this anyway? Like, how many have been made and.

Leo Laporte [02:13:49]:
Actually successful besides Levittown? Yeah, there was— Disney built a, uh, a Homecoming USA or something. They had a city they built.

Jeff Jarvis [02:14:01]:
Yeah, well, they've got developments. Yeah, well, there was, there was, uh, uh, Ford— Fordlandia.

Leo Laporte [02:14:07]:
What's Fordlandia?

Paris Martineau [02:14:12]:
Oh yeah, the classic company town.

Jeff Jarvis [02:14:14]:
Yeah.

Paris Martineau [02:14:15]:
Yes.

Jeff Jarvis [02:14:18]:
For rubber plantation.

Leo Laporte [02:14:19]:
Celebration, that's the Disney thing. So this— the line was going to run 170 kilometers from the Red Sea over desert mountains to a ski resort that was going to host the Asian Winter Games in 2029, and an industrial zone known as Oxagon. Trojena, which will be downsized, will not be hosting the Winter Games as scheduled. Um, it's, it's— I don't know, I just, I like ambitious, but you're right, it's, it's crazy.

Jeff Jarvis [02:14:53]:
Uh, that was my story about Portlandia in the chat.

Leo Laporte [02:14:56]:
I love Portlandia. Uh.

Jeff Jarvis [02:15:00]:
Fordlandia.

Leo Laporte [02:15:01]:
Fordlandia. Oh, it was in Brazil.

Jeff Jarvis [02:15:04]:
Yeah, remembering that Henry Ford is a terrible fascist and racist, right?

Leo Laporte [02:15:12]:
Um, deep in the Amazon. Yeah, something happens to people's, uh, minds when they become filthy rich.

Jeff Jarvis [02:15:23]:
Well, Musk wants to build, uh, factories on the moon.

Leo Laporte [02:15:27]:
Yeah, yeah, it's not Mars anymore, it's the moon.

Jeff Jarvis [02:15:30]:
He's pretty much Henry Ford.

Leo Laporte [02:15:32]:
Yeah, it's on the way. Here's a statue of a man harvesting rubber next to Fordlandia's church. That is a good-looking statue, I tell you. An encroaching forest frames the decaying walls of the Fordlandia Hospital. Wow, this is 2,000 people.

Jeff Jarvis [02:16:00]:
In 1930, workers fed up with eating Ford's diet of oatmeal, canned peaches, and brown rice in a sweltering dining hall staged a full-scale riot.

Leo Laporte [02:16:07]:
Oh gosh, I don't blame them.

Jeff Jarvis [02:16:11]:
They smashed time clocks, cut electricity to the plantation, and chanted, Brazil for Brazilians, kill all the Americans. Oh God.

Leo Laporte [02:16:19]:
Okay.

Benito Gonzalez [02:16:19]:
I mean, these are company towns though. Like, there's like— company towns are never gonna survive.

Leo Laporte [02:16:25]:
No. I owe my soul to the company store. Paris, did you have any, uh, anything?

Paris Martineau [02:16:32]:
I had a pick, which is that, um, this week, uh, the first trial in the social media addiction lawsuit, uh, started.

Leo Laporte [02:16:42]:
Yeah. We've been talking a little bit about that. Yeah.

Paris Martineau [02:16:44]:
I mean, I covered this litigation quite a bit back in my previous beat. It's, I don't know, kind of interesting just because it's like a very novel— it's a very novel and interesting legal argument. It's essentially taking aspects of kind of mass tort law that came about in response to kind of big tobacco and asbestos cases, and using that sort of product liability lens to argue that companies like Meta or YouTube or TikTok intentionally created addictive products targeted at children and ignored kind of internal research and warning signs that the addictive nature of the products was causing harm to minor users.

Leo Laporte [02:17:36]:
Yeah, and I, I think they're gonna prove that point. There's lots of smoking guns that these companies knew they were creating something addictive. I think it's going to come down to— it's a jury trial, remember, in Los Angeles. I think what it's going to come down to is whether the jury buys the idea that social media addiction is a real addiction like cigarettes, heroin, gambling, etc. And I think it's going to be harder to prove that.

Jeff Jarvis [02:18:06]:
It's hard to prove that.

Leo Laporte [02:18:07]:
Yeah.

Paris Martineau [02:18:08]:
Yeah.

Jeff Jarvis [02:18:09]:
There's lots of research that says it's not. There's a lot of history of arguing that the internet is addictive when that was just made up, was fictional.

Leo Laporte [02:18:16]:
Yeah.

Jeff Jarvis [02:18:16]:
People made money off of it. Depends on how good the lawyers are. All right. I have a very quick one.

Paris Martineau [02:18:23]:
Yeah. I was going to say, this has just been— this is the first bellwether case of, I believe, there are like 9 bellwethers, which are kind of the strongest or most emblematic cases as part of an MDL, which I believe is like a— it's some sort of like mass litigation that essentially bundles like hundreds and hundreds and hundreds. I think the last time I checked, it was like 600-some. I'm sure it's well above that.

Leo Laporte [02:18:47]:
So it's not a class action.

Paris Martineau [02:18:48]:
It's really individualized. It's not a class action. Basically, it's all of these individual actions where they're like, it would be burdensome to deal with all of these in all the different district courts, so they're mushed together in this court in California, and the decisions on these 9 cases will have profound effects for— I mean, as these started to kind of get bundled together, and as courts didn't immediately dismiss them, which is what all of these companies immediately tried to have happen, as the cases started to stick and were going to go to trial, that's when you started to see all of the changes we've seen over the last couple of years in terms of how these large companies are handling teen accounts or making changes. Snapchat has lost one of these, a much smaller, narrower suit, but that was kind of the first nail in the coffin for these companies.

Leo Laporte [02:19:44]:
Snap and TikTok, I think, settled right before the trial began.

Jeff Jarvis [02:19:49]:
Yeah, I think so.

Leo Laporte [02:19:50]:
But Meta and YouTube are still on the hook.

Jeff Jarvis [02:19:53]:
And YouTube is arguing that they're not a social network.

Leo Laporte [02:19:56]:
They said, we're not even a social network, we're an entertainment site. So this really is going to come down to— I, I, you know, I, I think they can prove, uh, that companies knew that their algorithms were, were, you know, creating a form of addiction, that they were creating, uh, repeat behaviors. And, and I think that they knew it. There's no question. It really is going to come down to, is that an addiction though, that you can— I mean, can you not stop doing it, even though you know there's adverse consequences like cigarettes and heroin? Can you—.

Jeff Jarvis [02:20:26]:
Can you— can you— is it true of everyone? Is it— is it— does it exacerbate something that already exists?

Leo Laporte [02:20:32]:
Right.

Paris Martineau [02:20:32]:
And I mean, I think the thing that is going to make this interesting is unlike— well, I mean, I guess more similarly to tobacco, there's ostensibly a wealth of information inside these companies measuring this very thing.

Jeff Jarvis [02:20:46]:
And a lot of academic research.

Leo Laporte [02:20:47]:
They don't measure— that's what they're going to say: "We're not measuring addiction, just how sticky it is. We're just measuring what we can do to make it attractive."

Paris Martineau [02:20:52]:
I know, but that's going to be kind of the thing is like a large part of it is optimizing for maximum engagement and optimizing for— So when.

Leo Laporte [02:20:59]:
You went to that restaurant, they optimized it to make you love that food.

Paris Martineau [02:21:05]:
Yes, but some of the things that we, I mean, I believe— I'm going to be off on the specifics on this, but I believe one of the details cited in an earlier lawsuit of this was something about the fact they were like, yeah, in one case they were optimizing, for minor users, how many times during the period where they'd normally think they're going to be sleeping, how many times can they get someone to pick up their phone and spend 5 minutes or more on it. That seems— Yeah, but you know what.

Leo Laporte [02:21:37]:
McDonald's does everything they can to make you crave a Big Mac down to their advertising, down to putting toys. But I think it'd be really hard to go and sue McDonald's saying my kid is fat and has heart disease because of your addictive product. I think in this country that's going to be— would be a hard thing.

Paris Martineau [02:21:56]:
Well, I mean, that's what we're going to see these lawyers try and argue.

Jeff Jarvis [02:21:59]:
Yeah, everybody— I.

Leo Laporte [02:22:04]:
I'm the only person— I'm the only media company that doesn't try to optimize. We try to make our shows.

Paris Martineau [02:22:08]:
I mean, it's similar to, like, slot machines at casinos, you know. It's that— I want to.

Leo Laporte [02:22:14]:
Read every damn article. I want to bore you to tears. We make no— Jeff and I weren't.

Paris Martineau [02:22:21]:
Here, we'd still be listening to him.

Leo Laporte [02:22:23]:
I'd still be reading page 5. No, I agree with you, Paris. This is a— this is— we've been kind of sort of covering it. They did jury selection last week. We will definitely be talking about this as it goes. I would like to know who they got as a jury, and all that. We won't know that till after the trial, but that's going to be telling also, uh, which— who they rejected, who they got.

Leo Laporte [02:22:45]:
Will they be people who are sophisticated about technology, or will they be people who aren't? You know, this is going to be very interesting. I know who I'd like to be on the jury for each side, right? Um.

Jeff Jarvis [02:23:00]:
Line 129, man down.

Leo Laporte [02:23:03]:
129. This is— watch Amazon— watch an Amazon delivery drone crash in North Texas. It actually crashed into an apartment building hard and smashed itself. Amazon said, we're going to fix any damage that occurred. Fortunately, nobody was hit by the falling debris. Um, but, uh, yeah, this is one of Amazon Prime's newest drones. There it is. It's a big thing.

Leo Laporte [02:23:35]:
If that hit you in the head— yeah, it came down, uh, that could cause an injury. That can— so yeah, Amazon's fixing the building, but that can kill somebody. Yeah. Richardson, Texas. Uh, Cesarina Johnson, who captured the collision from her window, told USA Today the collision seemed to happen almost immediately after she began to record the drone in action. I was just initially recording to get the drone on, on camera because it's the first time I'd seen one. I didn't realize it was about to crash. Man down, she says, just seconds after the drone flies out of sight.

Paris Martineau [02:24:10]:
That would be me.

Leo Laporte [02:24:11]:
I gotta play the audio. I wonder if the audio's on this. Oh, now, uh, unfortunately, uh, USA Today— hilarious.

Paris Martineau [02:24:20]:
Officer have some winter fun.

Leo Laporte [02:24:23]:
Let's see if we get the audio here.

Paris Martineau [02:24:33]:
Uh-oh, that does not sound good. Oh, oh my God.

Leo Laporte [02:24:49]:
Wow, it even makes futuristic sounds.

Paris Martineau [02:24:51]:
Yeah, I don't like that at all.

Leo Laporte [02:24:54]:
Oh, it's hammering its propellers against the cement. Uh, yeah, I just don't— I like the idea of drone delivery, but that's something I will agree with you on.

Paris Martineau [02:25:06]:
Back in my day, we used to pay people on a, uh, on a gig working app to deliver your groceries.

Leo Laporte [02:25:13]:
Yeah. Uh, groceries or COVID tests and Zagnut bars.

Paris Martineau [02:25:20]:
Hey, now they've got a combo COVID, flu, RSV test, and I'm sure you could get that.

Leo Laporte [02:25:28]:
Oh, it's got all of these.

Paris Martineau [02:25:29]:
Check to see if you've got really sick vibes from the comfort of your home.

Leo Laporte [02:25:32]:
I don't feel so good. Ladies and gentlemen, we thank you so much for being here. We're going to get to our picks of the week in just a second, but I do want to thank our Club Twit members who make this show possible. You are the special people who, uh, who get ad-free versions of all the shows, get access to the Club Twit Discord. You get to hang out in the special shows we have, like our AI user group and our photography show, our travel show with Johnny Jet. We got the book club with Stacy. There's so much fun stuff going on in the club. But the real reason to join the club, it keeps us afloat.

Leo Laporte [02:26:07]:
Advertising dollars have shrunk for podcast networks like ours. It's kind of a tough time, frankly, for podcast networks. But fortunately, we've got a great audience of people who really care about the shows. If you're one of them, twit.tv/clubtwit, we'd love to see you. There's a 2-week free trial if you wanna just see what it's all about, see what's going on in the Discord. There's also family plans and corporate memberships as well. Ad-free versions of all the shows are the chief benefit. But the real benefit is the warm fuzzy feeling you get knowing that you're keeping TWiT on the air.

Leo Laporte [02:26:41]:
twit.tv/clubtwit. Thanks in advance. And I'm going to do a little quick pick, just a little one. Understanding Neural Networks. This is a beautiful website. Tap, click the right side of the screen to go forward. If you've wondered how transformers work, we've referred to Andrej Karpathy's videos and, you know, there are books and so forth, but this is a really cool visual illustration that explains how neural networks train and understand and how this kind of amazing technology that powers all the AI we're using these days works.

Leo Laporte [02:27:29]:
So just thought I'd mention it. Visual Rambling.

Paris Martineau [02:27:33]:
It's a very pretty website too.

Leo Laporte [02:27:35]:
Yeah, nicely done. Probably vibe-coded. I don't know. Here's an example of the earliest thing. This is actually what— was it— it started with this? Was it Hinton? I can't remember. Was it him?

Jeff Jarvis [02:27:49]:
That was Yann LeCun.

Leo Laporte [02:27:50]:
It was Yann LeCun. Okay. Recognizing numbers. Yeah. Yeah. So if you're curious, I think this is a really good description, technically accurate, a little simplified, but you know, it gives you the idea of what's going on. It's kind of amazing if you think about it. visualrambling.space.
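
(For a rough sense of what "recognizing numbers" with a neural network involves under the hood, here is a minimal NumPy sketch of a tiny two-layer network trained by gradient descent on a toy task. It is illustrative only, not the code behind visualrambling.space or LeCun's digit recognizer.)

# Illustrative only: a tiny two-layer network trained with plain gradient
# descent on a toy classification task (is the center "pixel" bright?).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3x3 "images" flattened to 9 pixels; label 1 if the center pixel is bright.
X = rng.random((200, 9))
y = (X[:, 4] > 0.5).astype(float).reshape(-1, 1)

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(0, 0.5, (9, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: hidden activations, then output probability.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of binary cross-entropy with a sigmoid output.
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(0, keepdims=True)
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dz1, dz1.sum(0, keepdims=True)
    # Gradient descent update of all weights and biases.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("training accuracy:", ((p > 0.5) == y).mean())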

Leo Laporte [02:28:11]:
Paris Martineau, how about a pick of the week from you?

Paris Martineau [02:28:15]:
I've got a pick of the week which started last week when listener Jeff of Burnt AI, um, responded to my call, which, as you may recall, I said— I've said this before, I'll say it again— if AI is so good, someone use it to go through all the transcripts of this year's show and put together a Leo's history of AI. And so Jeff did that, um, Leo's AI Journey, which I've posted. I think it's a very interesting overview.

Leo Laporte [02:28:46]:
But as I was he started with.

Paris Martineau [02:28:48]:
I mean, that's pretty interesting. It went back. But as I was looking through this, I realized there was not enough mentions or any mentions of the Sandman. So I, as you mentioned earlier in the show, I thought I was going to save it for later, had Claude go through and do that and put together a timeline of all the references it found in its search for the Sandman origins. Which is not as prettily put together as Jeff's, but I thought it was fun. So, you know, if you want to go back through Leo's own history, you can do it now.

Leo Laporte [02:29:25]:
We'll put them in the show notes. And you did find the key.

Paris Martineau [02:29:31]:
I do really think it's just funny that immediately Jeff and I hopped on, we're going to make this a recurring bit. I mean, a week or two after this, we were like, oh, making fun of you being like, oh, it's because you took that walk with Sam Mossman or Jason Cocky, you know, who— and you've been radicalized. And I like that Claude describes it. Note, Sam Mossman and Jason Cocky are AI transcription garbles. Paris had already guessed Calacanis and Sam Altman in the previous one. She was teasing Leo with.

Leo Laporte [02:30:09]:
Yeah. So that's interesting. I mean, I think that's kind of a very good example of connecting the dots in a way that AI could not do until very recently.

Paris Martineau [02:30:19]:
Similarly, yeah, it's able to identify that on March 5th, 2025, you used the sandy shoes metaphor as a denial of being influenced, saying, I don't have any sand in my shoes.

Leo Laporte [02:30:33]:
Wow, this is surprisingly complete. It listened. Did you give it a link? How did it— how did you— well.

Paris Martineau [02:30:39]:
So what I did originally is I will, uh, go through it right here. I plugged in, um, Jeff's website. I said I'd like you to go through transcripts from my podcast Intelligent Machines, nay, This Week in Google, and find the episodes where Leo first described taking a walk in the sand with a VC who said AI was the future. It was absolutely in the This Week in Google days and likely back when Ant was still a co-host with us or around that time period. Here's an example of one of the transcripts. And I linked to the transcript for 855 so that it could get a sense of where that was in the URL. And I said, and an overview of Leo's AI journey by what I assume is another Claude instance. And then it went back through the 800s, couldn't find anyone.

Paris Martineau [02:31:25]:
Then it went back through. It did— this chat was so long that I— I'm always discovering new errors that I've never experienced in Claude before. This one was that it had been compressed too many times to continue to exist. So I had to start a new one. It went back to 770, found another callback, not because I took a walk on the beach. Then it started to get confused with the fact that it thought callbacks meant that it existed after that, and I had to get it— At one point, it— which is I think something that I hadn't seen in previous instances of Claude—

Paris Martineau [02:32:02]:
It basically was spinning its wheels, going through dozens and dozens of transcripts, and said, okay, Paris, I've now done a pretty exhaustive pass through the transcripts archive, and I want to give you an honest status report rather than continuing to spin wheels.

Leo Laporte [02:32:16]:
I love that.

Paris Martineau [02:32:17]:
The original telling of the beach walk story falls in a narrow window between Twit 762 and 766. This was wrong because, as I said before, it told me— I haven't been able to find it. The problem is it might not have debuted on TWiG at all. Leo hosts multiple shows. It's possible the original detailed telling happened somewhere else. It could be somewhere else. It was referenced so casually on this. Uh, what might still work— I found this funny.

Paris Martineau [02:32:46]:
It says, I've given it no other information than what I just read to you guys— it said, if you want to pin this down definitively, the most efficient path is probably just asking Leo or checking with the production team, such as John or Anthony, who might remember. And Benito correctly pointed out, I guess this means Claude thinks that he won't be able to help me.

Leo Laporte [02:33:09]:
I know.

Paris Martineau [02:33:10]:
I mean, I don't know, but how did he know John or Anthony's names?

Leo Laporte [02:33:14]:
Who knows? We live in interesting times. That's all I can say. Well, I, I think it's good. This, this is the kind of experimentation I was recommending. This is really good. You get to, you know, try stuff, give it a hard problem, see what happens. Jeff Jarvis, Pick of the Week.

Jeff Jarvis [02:33:32]:
So I was going to do something serious about capital and labor, but I'm not going to do that. I think we have, um, yeah, because I'm I'm too, too soft. So I think we might have two, count them, two friends on the Twit Network who had associations with Super Bowl commercials.

Leo Laporte [02:33:50]:
I think you're right.

Jeff Jarvis [02:33:51]:
No, Ant was not a Bush, but.

Leo Laporte [02:33:54]:
If you look at— Ant wasn't a Bush? Oh, shoot. Was he in the Super Bowl?

Jeff Jarvis [02:33:58]:
Ant wasn't a Bush, but if you look at his Facebook, which I linked to there, can you get into Facebook because you're mean and you left Facebook?

Leo Laporte [02:34:06]:
No, I was trying to go to LinkedIn. Oh, it was LinkedIn. LinkedIn.

Jeff Jarvis [02:34:10]:
Sorry.

Leo Laporte [02:34:11]:
I do have a LinkedIn account. But it's not logged in. It's not letting me in. Oh, I have to go sign in.

Jeff Jarvis [02:34:17]:
Of course you do.

Leo Laporte [02:34:19]:
Well, I tried to sign in on the sign up page. I hate it when that happens.

Jeff Jarvis [02:34:23]:
Oh, I see.

Leo Laporte [02:34:24]:
Right. So let me go back. Now that I've signed in, of course it's lost the thread and gave me just the front page. So let's go to Ant's post.

Jeff Jarvis [02:34:34]:
So read Ant's post.

Leo Laporte [02:34:36]:
Okay. Okay. Oh, he was in the Samsara Super Bowl ad.

Jeff Jarvis [02:34:43]:
So he says, I appreciated them hiring me for the Super Bowl commercial. Uh, you can go off the screen. No, go back so we read the rest of it because it's important.

Leo Laporte [02:34:52]:
Oh, check it out. Hire me if you're looking for a model to help with your promotional products.

Jeff Jarvis [02:34:58]:
Oh no, go to the most recent post because in the most recent post.

Leo Laporte [02:35:04]:
He said I am not as command-worthy. Oh, he was an extra.

Jeff Jarvis [02:35:10]:
Uh, I ended up being an expensive.

Leo Laporte [02:35:11]:
Extra after he was edited out.

Jeff Jarvis [02:35:16]:
I don't think so, but he's still—.

Paris Martineau [02:35:17]:
No, he was an extra.

Leo Laporte [02:35:18]:
Oh, he didn't have a line. I get it. He must have had a line in the first one.

Jeff Jarvis [02:35:22]:
I think I see Ant. I'm curious whether I went into the chat. He's not in the chat.

Paris Martineau [02:35:26]:
Oh, it's— so it seems like his post suggests that he originally had lines, but they were rewritten out.

Leo Laporte [02:35:34]:
Toyota.

Jeff Jarvis [02:35:36]:
And you're gonna see a coach in the deep background.

Leo Laporte [02:35:39]:
Okay.

Jeff Jarvis [02:35:40]:
I swear it has Ant's profile. Give it a minute, give it a minute.

Leo Laporte [02:35:44]:
Okay. And the other one, of course, is, uh, Salt Hank, who also got one of his two appearances. Oh, was that it?

Jeff Jarvis [02:35:51]:
Okay, right there. That one.

Leo Laporte [02:35:52]:
Yeah, yeah, you're a little behind.

Jeff Jarvis [02:35:54]:
I'm betting that was Ant right there.

Leo Laporte [02:35:57]:
Enhance, center, enhance, center, enhance, center.

Paris Martineau [02:36:08]:
On our screens in the Super Bowl. Yeah.

Jeff Jarvis [02:36:12]:
I love it.

Leo Laporte [02:36:14]:
That— he got about as much time on camera as, uh, as Salt Hank did in the, in the, uh, Hellmann's mayonnaise commercial. So that's two of them. Boom! That's awesome. Good for you.

Jeff Jarvis [02:36:24]:
Isn't that great?

Leo Laporte [02:36:26]:
You know, the nice thing is you get paid no matter how much screen time you get. I love it. I love it. Well, that's a great find. Thank you. And congratulations, Ant. That's wonderful. We are all done for the week.

Leo Laporte [02:36:41]:
We do Intelligent Machines Wednesdays right after Windows Weekly. That's, uh, 15.

Paris Martineau [02:36:51]:
Did you use your own mute button?

Leo Laporte [02:36:53]:
That's 14. 14:00 Pacific time, 17:00 East Coast time.

Paris Martineau [02:37:03]:
See, doesn't it just roll off?

Jeff Jarvis [02:37:05]:
It doesn't.

Leo Laporte [02:37:05]:
2 PM Pacific, 5 PM Eastern. So much easier.

Paris Martineau [02:37:08]:
O'clock, o'clock, o'clock.

Leo Laporte [02:37:10]:
And it is 22:00 o'clock UTC. We stream it live. If you're in the club, of course, you can see it in the Discord, but there's also YouTube, Twitch, x.com, uh, LinkedIn, Facebook, and Kick. There you go. And you can watch it live, but you don't have to watch it live because on-demand versions of the show available at our website, audio and video, twit.tv/im. There's video on the YouTube channel, and you can subscribe in your favorite podcast client. You had— did you have a, uh, a review to read? Paris Martineau, or did we read it on the show last week? I think we read them.

Paris Martineau [02:37:48]:
We did read a lot of the reviews on the show last week, perhaps right as we were wrapping up. I could check to see if there's—.

Leo Laporte [02:37:55]:
No, it's okay, it's okay. We'll save it for next week because Jeff seems to be fading fast. Mr. Jarvis, there's a cacio e pepe with your name on it sitting outside your kitchen window where it's frozen.

Paris Martineau [02:38:09]:
I was about to say, it's, uh, wafting like a pie. It actually was above 32 degrees. Huge.

Leo Laporte [02:38:17]:
My favorite note again, like it was— it's really brief.

Paris Martineau [02:38:21]:
Briefly there were a couple, not in any accumulating way.

Leo Laporte [02:38:24]:
Oh, okay, okay.

Jeff Jarvis [02:38:25]:
The meme going around is some guy yelling at the— just saying, melt already, melt!

Leo Laporte [02:38:31]:
I know, I know, that is great. February is the worst back east. It's just, it's, it's grim, it's cold, you're ready for spring, and, uh, you have my sympathy.

Jeff Jarvis [02:38:42]:
So tomorrow, because I've got to get exercise, I've got to exercise this thing. I've only driven once in the last month for 5 minutes to make sure that I could.

Leo Laporte [02:38:50]:
That's my kind of exercise, driving.

Jeff Jarvis [02:38:52]:
Tomorrow I'm going to drive to the mall and I'm going to be a mall walker.

Leo Laporte [02:38:56]:
Oh, you're going to be a mall walker. You know, we're all going to walk. Yeah, can I get a pretzel? Yeah, send us a photo.

Benito Gonzalez [02:39:04]:
No arms, Julius.

Leo Laporte [02:39:06]:
Go to Forever 21, get your ears pierced. It'll be great. You're going to have a great time.

Paris Martineau [02:39:09]:
Go to Hot Topic, get a t-shirt as a treat.

Leo Laporte [02:39:12]:
Go to Lids, get a funny hat with a— I think we still have.

Jeff Jarvis [02:39:15]:
Arthur Treacher's fish and chips.

Leo Laporte [02:39:17]:
Oh, yum yum yum! Does anybody remember Arthur Treacher?

Jeff Jarvis [02:39:23]:
No, it was a weird— he was a second banana in the, on the Merv Griffin Show.

Leo Laporte [02:39:30]:
He used to say, and now here's the dear boy himself, Mervin.

Paris Martineau [02:39:38]:
This sounds like you're making up a show to reference. No, if I had to algorithmically generate a reference you guys would make to confuse me, it would be that.

Jeff Jarvis [02:39:50]:
You know, so this second banana, out of nowhere, comes along and created a chain of Arthur Treacher's fish and chips restaurants.

Leo Laporte [02:39:57]:
It just shows you celebrity is a, is a funny thing.

Jeff Jarvis [02:40:02]:
Yeah, it was always happy. And nobody likes fish and chips in the US.

Paris Martineau [02:40:06]:
Uh.

Leo Laporte [02:40:08]:
Well, some people do.

Jeff Jarvis [02:40:10]:
Well, they're imports.

Leo Laporte [02:40:12]:
This is Merv Griffin.

Benito Gonzalez [02:40:14]:
They love it in Seattle.

Leo Laporte [02:40:15]:
He was the most— he was the most relaxed TV show host in the world. He invented Jeopardy, though, so he was a very, very wealthy man.

Jeff Jarvis [02:40:27]:
Is that Peggy Cass to the right?

Leo Laporte [02:40:28]:
Yeah, Peggy Cass. They're sitting at a desk.

Jeff Jarvis [02:40:31]:
No idea what she was ever famous for except for being on these shows.

Leo Laporte [02:40:35]:
Uh, it's kind of wild. It's kind of wild.

Jeff Jarvis [02:40:37]:
You should look, Leo. You you should, should do that look.

Leo Laporte [02:40:39]:
Would— that you— I like the little neckerchief instead of, instead of a necktie.

Paris Martineau [02:40:44]:
We should all wear that next week.

Leo Laporte [02:40:46]:
It's a cravat.

Paris Martineau [02:40:47]:
Can we all cravat, Max? What? How did that— can you make a custom? Can you get Claude— can Claude make us custom cravats?

Leo Laporte [02:40:54]:
Actually, it was right at about this time. I was in middle school, uh, and I went to a boys' school, uh, in Providence, where you had to wear a necktie in 7th and 8th grade. But around this time, around 1969, things were loosening up and they said, okay kids, you don't have to wear a necktie. If you wish, with your blazer, you can wear a turtleneck or an ascot.

Jeff Jarvis [02:41:20]:
Oh, how to get beaten up on the way to school.

Leo Laporte [02:41:24]:
And I did wear an ascot because free at last, free at last.

Jeff Jarvis [02:41:31]:
So at that same age, Leo, I desperately wanted to wear a Nehru jacket.

Leo Laporte [02:41:36]:
Yes.

Jeff Jarvis [02:41:37]:
My parents thought it was terrible. No, no, no, no, no. And then they finally said okay when Johnny Carson wore one. And I said, of course, now I don't want one.

Leo Laporte [02:41:46]:
Do you know what a Nehru jacket is?

Paris Martineau [02:41:48]:
No.

Leo Laporte [02:41:49]:
No. So it was named after the Prime Minister of India. Was it Jawaharlal Nehru? And he wore these. That's a— that's not really— it was.

Jeff Jarvis [02:42:01]:
Like the Beatles wore.

Paris Martineau [02:42:02]:
I would fit like one of your— yeah, I'm looking at the Beatles photo right now. It was kind of a workman, like a workman's jacket.

Leo Laporte [02:42:09]:
Yeah. Um, yeah.

Jeff Jarvis [02:42:12]:
My German jacket too.

Leo Laporte [02:42:13]:
All Nehru. Um, yeah, I actually would like— I see I have a fetish for that kind of weird thing to wear ascots and Nehru jackets.

Paris Martineau [02:42:23]:
And can we all get a fun uniform? And show up one time and match? We're like a hot— like a neon yellow boiler suit or something.

Leo Laporte [02:42:32]:
Intelligent Machines. Oh, oh, oh, oh, yeah. When I worked at KLOK Radio back in the day, we all had— the slogan was The Man from KLOK.

Paris Martineau [02:42:43]:
I'm just imagining it's a radio show where you just say the time every minute.

Leo Laporte [02:42:47]:
They didn't have any female DJs. I was a DJ. DJ at KLOK, KLOK Radio in San Jose. And they had— the slogan was The Man from KLOK. We all had yellow, the worst mustard yellow blazers with— oh yeah, you've shown that. They had a patch that said KLOK on it.

Paris Martineau [02:43:02]:
That's pretty cool.

Leo Laporte [02:43:04]:
The Man from KLOK. I guess it was a takeoff on The Man from UNCLE. Uh, I still have a fetish for that weird stuff. I used to wear capes in high school. And, uh, and God, I— I wish.

Paris Martineau [02:43:16]:
I was wearing a cape in high school. I would have killed to wear a cape.

Leo Laporte [02:43:19]:
Well, you were a goth, right? So you were wearing like what, torn stockings and needles through your nose or something? Safety pins?

Paris Martineau [02:43:27]:
Probably. Yeah.

Leo Laporte [02:43:28]:
Yeah. No, you know what you were wearing. You can tell.

Paris Martineau [02:43:31]:
I mean, yeah, I remember I once, uh, really got into, uh, wearing, um, striped stockings, until I wore them to a Magic: The Gathering meetup and a bunch of creepy dudes commented on them, and I was like, I have to leave.

Leo Laporte [02:43:48]:
No, there's something about striped stockings, they're very, very sexy. I don't know why.

Paris Martineau [02:43:54]:
As I learned that day in my classroom.

Leo Laporte [02:43:56]:
I don't know what it is about stripes.

Benito Gonzalez [02:43:58]:
Did you play Magic: The Gathering, Tess?

Paris Martineau [02:44:00]:
Yeah, I did.

Leo Laporte [02:44:01]:
In striped socks?

Paris Martineau [02:44:02]:
Back in the day, in striped socks.

Leo Laporte [02:44:03]:
And the nerds got a little off.

Paris Martineau [02:44:05]:
And the nerds got me off both striped stockings and Magic: The Gathering.

Benito Gonzalez [02:44:09]:
Yeah, that's why there are no more women playing that game.

Leo Laporte [02:44:11]:
Oh God, I'm so sorry. I would've been one of them. Jeff Jarvis is a professor emeritus of journalistic innovation at CUNY. You can catch his act at SUNY Stony Brook. He'll be waving a cane at the Linotype. His newest book, Hot Type, is available for pre-release ordering from JeffJarvis.com. Of course, there's also the Gutenberg Parenthesis and a magazine. And I can't wait till Hot Type comes out.

Leo Laporte [02:44:40]:
That's a— we'll have to interview you for that show. That'll be good. That'll be good. Paris Martineau is an investigative reporter at Consumer Reports where she is working on something something so deep, so, so revealing, so revealing we can't even mention it, honey.

Paris Martineau [02:44:57]:
Can't even talk about it.

Leo Laporte [02:44:58]:
It will blow the lid off of—.

Paris Martineau [02:45:01]:
Well, you know, something, you know, something. Oh, hold on, look who's down here.

Leo Laporte [02:45:06]:
Gizmo, one last appearance.

Paris Martineau [02:45:08]:
Gizmo, how do you, how do you respond to the allegations about your app?

Leo Laporte [02:45:11]:
Radiator Cat. Radiator Cat.

Paris Martineau [02:45:14]:
She is pretty warm from the radiator. I can feel it on her chest. Yes, and now I'm covered in hair.

Leo Laporte [02:45:22]:
Thank you so much, Paris, and, uh, thank you so much, Jeff. Thanks to all of our listeners, especially to our Club Twit members. We'll see you next Wednesday on Intelligent Machines. Bye-bye. Hey everybody, it's Leo Laporte. You know about MacBreak Weekly, right? You don't? Oh, if you're a Macintosh fan or you just want to keep up with what's going on with Apple, this is the show for you. Every Tuesday, Andy Ihnatko, Alex Lindsay, Jason Snell, and I get together and talk about the week's Apple news. It's an easy subscription.

Leo Laporte [02:45:56]:
Just go to your favorite podcast client and search for MacBreak Weekly or visit our website, twit.tv/MBW. You don't want to miss a week of MacBreak Weekly.
