Transcripts

Intelligent Machines 835 Transcript

Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-supported version of the show.
 

Leo Laporte [00:00:00]:
It's time for Intelligent Machines. We've got a big show for you. Karen Hao is our guest. Her book Empire of AI is a bestseller. It tells the inside story of what's happening, what happened, and what might happen at OpenAI. You're gonna love that. Then Harper Reed joins us for a fun episode. We'll talk about how he uses Claude Code to create a nickname for himself.

Leo Laporte [00:00:23]:
That and a whole lot more coming up next on IM. Podcasts you love, from people you trust. This is TWiT. This is Intelligent Machines, episode 835, recorded Wednesday, September 3, 2025. Glitchlord. It's time for Intelligent Machines, the show where we cover the latest in AI, robotics, and all the bijouterie surrounding it. I'm going to use a different thesaurus entry for.

Leo Laporte [00:01:03]:
Is it gewgaws, Jeff, from now on? Gewgaws that are surrounding you in your everyday life. That is, on my right, Jeff Jarvis, professor emeritus of journalistic innovation at the Craig Newmark Graduate School of Journalism.

Jeff Jarvis [00:01:19]:
Craig Newmark.

Leo Laporte [00:01:21]:
Lay the Craig Newmark jingle in after. In post production. Oh, no, I guess not.

Paris Martineau [00:01:26]:
No, it'll be here.

Leo Laporte [00:01:30]:
State University and SUNY Stony Brook, where he's about to get to work because the semester is about to begin. Except you. You were. Do you go to. Do you actually go anywhere or you just. No, you just sit in your.

Jeff Jarvis [00:01:42]:
The entire world sees me with this microphone. It says, whoa, you got a nice microphone.

Leo Laporte [00:01:46]:
Zoom in.

Jeff Jarvis [00:01:46]:
Podcasting, babe.

Leo Laporte [00:01:47]:
Zooming. You know somebody's a podcaster when they have a nice microphone.

Paris Martineau [00:01:50]:
I was gonna say, you guys are more confident than I am. I hide this thing anytime I'm not on this show, because I don't want to. I don't want to stunt on people like that.

Jeff Jarvis [00:01:59]:
Nerds.

Leo Laporte [00:01:59]:
That is Paris Martineau, Consumer Reports investigative journalist par excellence.

Paris Martineau [00:02:05]:
I will not be here for the rest of the show whenever this is airing, but I'm here now for this interview.

Leo Laporte [00:02:11]:
Okay. Yeah. I should mention we're prerecording this on Labor Day because Karen Hao, our very special guest, is in Hong Kong. I don't know what that has to do with anything, actually, because she's very busy.

Paris Martineau [00:02:22]:
She's got an incredibly busy book tour schedule because this is a fantastic book that she wrote. And despite the fact that it came out, how long ago now, Karen?

Leo Laporte [00:02:32]:
It came out.

Paris Martineau [00:02:33]:
Your schedule just keeps getting busier and busier.

Leo Laporte [00:02:36]:
She has done hundreds of interviews and she has hundreds more before she sleeps. She'll be going to Australia in a couple of days. Santa Clara, Berkeley, Amsterdam. That's in the Netherlands, kids. New York, Chicago, St. Louis. Oh my gosh, look at this. Bangalore.

Leo Laporte [00:02:53]:
She's going to meet up with Jeff in München. Karen Hao is the author of a book that is getting a lot of attention called Empire of AI. You may have read her articles about OpenAI in the MIT Technology Review, where she is a senior editor covering AI. Formerly, I guess, a senior editor.

Paris Martineau [00:03:15]:
Formerly.

Leo Laporte [00:03:16]:
Yeah, yeah. She's too busy now to have a job anyway. No, her job is, you know, the book. Anyway, Karen, it's great to have you. Welcome to Intelligent Machines.

Karen Hao [00:03:28]:
Thank you so much for having me. And thank you so much for doing this, all three.

Leo Laporte [00:03:32]:
Well, we're very excited because all three of us have read the book and have learned a lot about what is a surprisingly secretive organization. How did you get in in the first place?

Karen Hao [00:03:47]:
Yeah, I mean, back when I first profiled OpenAI, I embedded within the company for three days in 2019, and then the profile came out in 2020. They invited me in because they were quite different than they are now, in that they were still trying to hold on to their original conception as a nonprofit and trying to project transparency. And so I at the time was really curious to just understand what they were working on and what was going on, because there were a series of changes that were happening in 2019 that piqued my interest. One of them being that they had just developed GPT-2, a couple generations before ChatGPT. Sam Altman had just officially become CEO, and they got $1 billion from Microsoft. And I just told OpenAI, it feels like there's a lot going on and you might want to consider reintroducing yourself to the public. And I think MIT Tech Review could be a really great publication for doing so, because we focus a lot on what you focus on, which is the fundamental research that's happening within the field.

Karen Hao [00:04:59]:
So we talk more in depth about some of the scientific and technical concepts that you're working on. And they really liked that idea. So they brought me in.

Leo Laporte [00:05:09]:
They changed their mind instantly, didn't they?

Karen Hao [00:05:13]:
They didn't change it instantly.

Leo Laporte [00:05:15]:
They almost instantly regretted it.

Paris Martineau [00:05:18]:
Security guards kind of playing relay and running interference while you were there.

Karen Hao [00:05:25]:
They did, yeah. So I learned while I was reporting the book, not when I was reporting the profile, that my face was given to the security guard as like a look out for this person and make.

Jeff Jarvis [00:05:41]:
Sure.

Leo Laporte [00:05:43]:
This is during your three day.

Karen Hao [00:05:45]:
Embedding, which was during my. Yeah, so I was actually really surprised in hindsight that they were already nervous that early, because I didn't really know what I was going to write about. Like, I was sort of just coming into the org really open-minded, thinking, let me just ask them lots of questions about what they think they're doing and try and see what's interesting. But apparently I don't know what I did that kind of set off their concerns so early, that made them give the security guard my face. But one of the things, I think, perhaps one of the things I.

Paris Martineau [00:06:25]:
Think is very interesting, and that you do really well in the book, is kind of showing how there is this disconnect that is fairly obvious now. But there's been this disconnect from the start with OpenAI between how they positioned the company to the public and how they acted in private. And I feel like one of the scenes you recount in the book, like during those three days you were embedded in 2019, is you, just like you just said, trying to ask the executives questions about how they view the company. And they seemed to even kind of, fumble is the wrong word. But they had, I think actually that.

Karen Hao [00:07:02]:
Is actually a pretty good word. Yeah, fumble. I mean, I remember. So I didn't actually write this in the book, but when I was re-listening to my interviews from that time to write the book, I had forgotten that with one of the first questions that I asked Greg Brockman, he paused for around 10 seconds, and it was a really basic question. I was like, I think I just asked, why are you spending billions of dollars building AGI? And then he gave me an answer, and then I was like, I don't fully know if that answered the question. Could you maybe say it a different way? And then he paused for like 10 seconds. And then Ilya went, I'll take this one. So they did really fumble with some basic questions.

Karen Hao [00:07:46]:
Like, I was pretty surprised because I was like, wait a minute, I'm. I don't think I'm asking. I'm asking like the most generic questions here. Just articulate why you're doing what you do and what you're doing. And there was a scene in the.

Jeff Jarvis [00:08:01]:
Conference room was so telling, I thought.

Karen Hao [00:08:04]:
Yeah.

Jeff Jarvis [00:08:04]:
Where maybe it was just early days. Maybe they weren't media trained yet, though. I don't think it was that. I think they really were confused about what they were doing together. They were, they were using highfalutin terms and you asked them to define them and they couldn't define them.

Karen Hao [00:08:18]:
Yeah. I think what I realized is they had spent so much time only articulating what they were doing to other people, either in the AI world or in the tech world, so they had at least some kind of shared worldview or lingo around these things, so they didn't have to. I think they were used to defending themselves, but defending themselves to a specific audience, not to the public. And so when I started asking them, okay, now explain to the public what you're doing, that was when it started tripping them up.

Jeff Jarvis [00:08:54]:
Do you think that they, inside OpenAI or inside the fraternity, sorority of AI or cult, depending on how you look at it, do they have a shared definition of AGI?

Karen Hao [00:09:06]:
No, that's the thing. Yes.

Jeff Jarvis [00:09:08]:
That's what you started going after them for. And did you ever come away thinking that there is some commonality, or is it a vessel into which they put their own views?

Karen Hao [00:09:17]:
I mean, that's the problem with the common definition: generally people would agree that AGI means human-level intelligence in machines, but then no one agrees on what human intelligence is.

Jeff Jarvis [00:09:35]:
Right?

Karen Hao [00:09:36]:
So the problem is not necessarily that there isn't a definition. The problem is that the definition is still meaningless, because there is just no scientific consensus beyond the world of AI, just globally, across disciplines. There's no scientific consensus around how to quantify human intelligence. And in fact, the quantification of human intelligence has a very dark history and lots of ulterior motives for why people have sought to do that. And so I think OpenAI was very readily willing to acknowledge that AGI had a very squishy definition, but they didn't see that as a problem. Whereas I thought, well, if you don't have a clear direction of where you're going, it seems like that makes your foundations inherently a little bit weak, because you're supercharging a quest towards who knows where with billions of dollars. And to your point, it did become a vessel for people's own projections, systems of belief, because different people thought that human intelligence meant different things and that it would manifest in different ways, and it would.

Karen Hao [00:10:49]:
Arriving at AGI would have fundamentally different implications for the world. And that's why, through the course of OpenAI's history, there's just been so, so much infighting, because different ideological camps develop, splintering over these definitions. And then they start, you know, biting at each other's heads, trying to get the company to go one way or another based on their views.

Leo Laporte [00:11:20]:
You arrived at a really seminal point, I mean, for the company. They've gone through many changes. They're still going through changes. But they had originally formed when Sam Altman partnered with Elon Musk to kind of develop AI in an open fashion so that Microsoft and Google, mostly Google, wouldn't have, you know, dominance in the field. But by the time you got there, as you say, they'd raised a lot of money from Microsoft. Suddenly. They still didn't have a really credible product even. You talk about the demos that they did of the early GPTs and how trivial they were and how unimpressed people were.

Leo Laporte [00:12:00]:
And Bill Gates was like, what is this? But it was already starting to change. As you say, Sam had become CEO, they had raised money, they realized it was going to be a much more expensive process. They were at the point where they were starting to think differently about what they were doing. Would you say that's true?

Karen Hao [00:12:21]:
This is interesting. So I have sort of changed my views about whether they evolved or whether actually they kind of stayed the same in terms of what they were doing. So initially I felt, okay, they're a nonprofit. They were trying to be more transparent, more collaborative. And then there was this shift, this inflection point, when they suddenly get money and it starts moving them more in a profit-oriented direction. But now, in hindsight, it was quite clear when they started that they wanted it to be a nonprofit not necessarily just out of altruism, but still out of ego, of, we want to signal to the world that we're the good guys.

Leo Laporte [00:13:12]:
Right.

Karen Hao [00:13:12]:
And we're going to continue on this quest to reshape and remold this technology. And so there was always this egotistical element to it, and there was this deep-seated desire of, we need to get to where we're going first, wherever that is, in order to have some kind of field- or industry-defining impact. Because they very much exist in an ecosystem and a belief system of winner takes all. That's just how Silicon Valley operates. So, in a sense, did they actually change when they got money, or did they actually always have the same belief that drove them to then seek money and then continue down, you know, the natural course, the natural path that an egotistical project would lead you down? Yeah, so I think over time I started to realize maybe they weren't so pure in the beginning. There was already a little bit of corruption in the beginning in terms of their conception of why they were doing OpenAI, and that's what then set them down the path that we see today.

Leo Laporte [00:14:20]:
But as you tell the story, it did start to come to them that they were going to need massive amounts of compute and massive amounts of money. So they didn't know that from the beginning, or did they?

Karen Hao [00:14:33]:
That's true. So I don't think they fully realized the degree of money that they needed. I think there was some conception, at least from Ilya's side, that they would need to scale their systems to some degree. But interestingly, at the time, they probably couldn't even have conceived technically of the degree of scale that they now operate under, because it was not yet possible. The techniques for training models on such vast amounts of computer chips had actually not been invented yet.

Leo Laporte [00:15:16]:
They weren't yet even sure that Transformers were the way to go. Right. That was something they came to over time.

Karen Hao [00:15:24]:
Yes, exactly. So they sort of hit upon both the software and the hardware that they wanted to use about a year and a half, two years into the organization. So they initially had more vague ideas of, they wanted to scale the existing techniques to some degree, but they just didn't know which technique they wanted to scale and to what degree they wanted to scale. And also, Transformers, it took a while. Like, when Transformers came out, there were a couple researchers who were like, yep, that's the one. Like, we want to do that.

Karen Hao [00:16:02]:
But it did actually take the organization a little bit longer to go all in on the Transformers.

Leo Laporte [00:16:08]:
Sutskever was a believer from the very beginning, right?

Karen Hao [00:16:12]:
Yeah. And that's it. It was.

Leo Laporte [00:16:13]:
He's a fascinating character in your book. He is. He's a prophet, not a coder, which is interesting.

Jeff Jarvis [00:16:20]:
What do you think of him? I'm. I'm really eager that. That. Here we are, just. Just four people sitting at the end of the bar.

Paris Martineau [00:16:27]:
Don't pay attention to these mics.

Leo Laporte [00:16:28]:
In that case, I want a drink. I don't know.

Jeff Jarvis [00:16:30]:
Well, it's a little early for that right now. But. But. But the three main characters here, Sutskever, Brockman, and Altman, I'd love to hear, in hindsight, how you look at them, just for your own, who you'd want to be on an elevator with and not.

Karen Hao [00:16:48]:
I think I would definitely want to be in an elevator with Sutskever.

Jeff Jarvis [00:16:54]:
Why?

Karen Hao [00:16:55]:
I think he's the most interesting and complex of the three. I think Altman is. He's a politician. Like, that's the best way to think about him.

Leo Laporte [00:17:05]:
Comes off kind of skeezy in your. In your book. He's kind of.

Karen Hao [00:17:07]:
He's. He's really good at telling stories and being persuasive and getting people to. He persuades people to either donate gobs of money or to donate their talent towards whatever vision he wants to achieve. And he's very, very good at that. But the. Yeah. The controversial aspect of him as a character is that he will tell different things to different people as part of his persuasion tactics. And so over time, depending on whether or not someone feels like what he said aligned ultimately with his actions, they either end up becoming really, really gung ho about his leadership or feeling like he's the devil incarnate, that somehow he manipulated them into doing something fundamentally different from what they wanted to achieve.

Karen Hao [00:17:59]:
So he's the politician. Brockman is interesting in that. He is. Yeah. I think the way that I describe him in the book is, like, he sort of exudes this anxious energy of wanting to be remembered in history and everything that he does and says. You can kind of pick up that he's sort of doing it in part because he wants to be judged well in the eyes of history.

Paris Martineau [00:18:28]:
Wasn't the line that you, I think you said, attributed to him that he's like, oh, no one ever remembers a CTO. I can't be a CTO forever.

Karen Hao [00:18:38]:
Yeah. He says at this retreat in Tahoe, name a famous CTO. People sort of fail to do it, other than, like, Steve Wozniak. And then he proves the point that he was trying to make. Like, no one remembers the CTO. And then, like a year later, he becomes.

Karen Hao [00:18:58]:
He switches from CTO to the president of the company.

Jeff Jarvis [00:19:03]:
Let me ask the question another way. If you had a friend who was going to work at OpenAI and could report in the time when they were all there and could report to any one of the three, who. Who should they report to? Who's a good boss?

Karen Hao [00:19:19]:
Or I would still say Sutskever.

Jeff Jarvis [00:19:22]:
Wow. Interesting. But he's a little. He's a little wacky, too, isn't he?

Leo Laporte [00:19:25]:
He's the Shingy of OpenAI.

Jeff Jarvis [00:19:27]:
Yeah.

Karen Hao [00:19:27]:
No, no. He's not, by any stretch of the imagination, like an average guy.

Jeff Jarvis [00:19:34]:
Right.

Paris Martineau [00:19:35]:
I don't think any of them are average guys.

Karen Hao [00:19:37]:
Yeah. Well, the thing about Sutskever, the way I would describe him is, he's also a visionary like Altman. He has very strong convictions. But whereas Altman is sort of. His convictions aren't in the technical realm. Like, he's thinking about, like, how to move resources and what kind of relationships to build to get to where they're going.

Karen Hao [00:20:02]:
Sutskever has always had a very keen scientific and technical conviction. He's like, I think we need to do this from the beginning. I think we need to scale these models, and it's just a matter of figuring out which one to scale. So he has that kind of visionary aspect. And he's highly cerebral, and like many highly cerebral people, he's also a highly emotional person. Without realizing that he's highly emotional, he thinks that he actually makes decisions purely rationally, but actually he probably makes decisions almost entirely emotionally.

Leo Laporte [00:20:39]:
He's a first principles kind of guy.

Karen Hao [00:20:42]:
Yeah, that's the way I would describe it. So he often exists almost in a realm, in an intellectual realm, that seems a little bit detached from reality. Like, he's just, like, constantly thinking in his mind about different future possibilities and then trying to implement them in his scientific work. But when I interviewed people about, you know, these three people, and who was the best leader slash manager, people would pretty universally say Sutskever. If they had to pick, they would have to pick Sutskever, because Altman was.

Karen Hao [00:21:21]:
He's a terrible manager. And Altman has said himself, he's not a manager. He is just the visionary. He's good at getting people pointed in a direction and getting people to move there, but he cannot operationalize things. And you can see that with the way that OpenAI has been changing leadership recently. Like, he installed a new CEO of applications, or OpenAI installed a new CEO of applications, that's doing the actual operationalization, whereas Altman is continuing to do the fundraising. And Brockman is also a terrible, terrible manager. He's very bad at working with other people.

Karen Hao [00:21:59]:
He's very much a solo coder. And the way people described him was, kind of going back to, like, he has this anxious energy of wanting to prove himself in the eyes of history. He will just, like, relentlessly code and run towards a specific goalpost that you give him, but he won't look up from the coding to see if the goalpost has changed, whether they need to reevaluate where they're going. And people burn out when they work with him, because they'll go to sleep and wake up and the entire code base has changed because Brockman has stayed up all night coding, and no one knows. Like, people just can't keep up with what he's doing.

Karen Hao [00:22:42]:
So it's just kind of impossible to work with him in general, not just work for him. Which is why Brockman hasn't had reports at OpenAI since maybe two years into the company. Yeah. So then that just leaves Sutskever.

Leo Laporte [00:22:58]:
I hope you're enjoying this interview with Karen Hao. We recorded it on Labor Day a couple of days ago. That's why Paris is here. We'll continue with Intelligent Machines in just a bit. Harper Reed will be filling in for Paris for the rest of the show, but we have more with Karen, lots more to ask her and talk about, in just a bit. So stay here. You're watching Intelligent Machines, our show today brought to you by ThreatLocker. Love these guys.

Leo Laporte [00:23:24]:
ThreatLocker makes zero trust easy. You know ransomware is killing businesses worldwide. You know that if you listen to our shows. But ThreatLocker can actually stand between you and the bad guys and prevent you from becoming the next victim. ThreatLocker's Zero Trust platform, and that's the key, takes a proactive deny-by-default approach. Those three words carry a lot of weight, and it really works. Deny by default, in other words: ThreatLocker blocks every action that you haven't explicitly authorized.

Leo Laporte [00:23:59]:
And the beauty of that: it protects you from known threats, but it also protects you from completely unknown zero-days, threats you know nothing about, because they get there and they can't do anything. That's why ThreatLocker is trusted by global enterprises like JetBlue and the Port of Vancouver. ThreatLocker shields you from zero-day exploits and supply chain attacks, and in the whole process provides you complete audit trails for compliance. So it's a fantastic solution, and we're seeing this more and more. We were talking about this yesterday on Security Now. Cybercriminals are turning to malvertising. Now you need more than just traditional security tools. How does it work? Attackers create convincing fake websites impersonating popular brands like AI tools and software applications, distributed through social media ads and hijacked accounts.

Leo Laporte [00:24:50]:
Then they use legitimate ad networks to deliver malware in the ads, affecting anyone who browses, even if they're browsing on a work system. That's why it's such a huge threat. Traditional security tools often completely miss these attacks because the attacks use fileless payloads, they run in memory, they exploit trusted services, they bypass filters. But that's why zero trust is so incredible. ThreatLocker's innovative Ringfencing technology strengthens endpoint defense by controlling what applications and scripts can access or execute. Without permission, they can't do anything. It contains potential threats even if those malicious ads successfully reach the device. ThreatLocker works across all industries. It supports PCs and Macs, provides 24/7

Leo Laporte [00:25:40]:
US-based support, and enables comprehensive visibility and control. Jack Senisap, who's director of IT Infrastructure and Security at Redner's Market, says, quote, when it comes to ThreatLocker, the team stands by their product. ThreatLocker's onboarding phase was a very good experience and they were very hands-on. ThreatLocker was able to help me and guide me, this is Jack speaking, to where I am in our environment today. End quote. Get unprecedented protection quickly, easily, and cost-effectively with ThreatLocker. Visit threatlocker.com/twit to get a free 30-day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance.

Leo Laporte [00:26:21]:
That's threatlocker.com/twit. We thank them so much for their support of Intelligent Machines. Now back to our interview. We're talking to Karen Hao. She's the author of a new book that just came. Well, it came out in the spring, but it is a huge bestseller, called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. And I didn't mention this, but Ms. Hao graduated from MIT with a master's in Mechanical Engineering. Or, I guess, a bachelor's.

Leo Laporte [00:26:52]:
All right, a BS.

Jeff Jarvis [00:26:52]:
It's MIT.

Leo Laporte [00:26:53]:
It's MIT.

Jeff Jarvis [00:26:54]:
You round up from MIT?

Leo Laporte [00:26:55]:
Yeah, you round up.

Jeff Jarvis [00:26:57]:
It's a. It's a master's anywhere else also.

Leo Laporte [00:27:00]:
And she was an engineer at one of the first Google X companies. So she's got a strong technical background as well as being an excellent writer. And it is a fascinating book. I have to say, though, starting from the very beginning and going all the way through, there is, it's even in the name, an element of real skepticism about AI. You call it Empire of AI in the same way that, you know, the imperialist nations conquered countries in the 19th and 20th centuries.

Leo Laporte [00:27:30]:
It's an imperialist empire of AI. Are you not a fan? None of us can quite figure it out. We were talking before the show.

Jeff Jarvis [00:27:42]:
It's a superbly reported book.

Leo Laporte [00:27:44]:
Yeah. The reporting is so cool.

Jeff Jarvis [00:27:46]:
Now that we're at the bar.

Leo Laporte [00:27:47]:
Yeah. Back to the bar.

Jeff Jarvis [00:27:49]:
Yeah.

Karen Hao [00:27:51]:
Yeah. So, no, I'm not a fan. And I want to be clear about what I'm not a fan of: I'm not a fan of the current industry and the paradigm that they've chosen for developing this technology. I'm not critiquing all of AI, because AI as a discipline and as a science is vast, and there are lots of different interesting things that are happening in that world. And there's a lot of really fascinating applications of it as well that I think are largely very beneficial.

Karen Hao [00:28:24]:
But in the current paradigm, what the AI industry is doing is the scale-at-all-costs modus operandi, where they're taking these models and saying, we're going to pump historic amounts of data into these models and we're going to train them on historically sized supercomputers. And we have gotten to the point now where, colloquially speaking, they've already scraped the whole Internet, mostly the English-language Internet. So they've already tapped out an extraordinary amount of the data that has been produced by humans on the Internet over the last couple decades. And they are now talking about building supercomputers the size of Manhattan that could potentially use the energy draw of all of Manhattan. So that is what I'm critical of, and that's what I call imperial-like behavior: they're seizing resources that are not their own. They're literally starting to seize land all around the world to build these data centers and supercomputers. They exploit an extraordinary amount of labor, both in the production of the technology and in the effect that the technology ends up having on society, in that it's creating automation pressure on the job market and therefore eroding away workers' bargaining rights.

Karen Hao [00:29:43]:
They monopolized knowledge production. So over the last 10 years, what we've seen is the industry is so resource rich that they have hired up all of the top AI researchers in the world, which means that AI research as a discipline is becoming distorted by the agenda of these companies, the same way that you could imagine climate science would be distorted by oil and gas companies if most climate scientists in the world were bankrolled by the fossil fuel industry. And that is a primary feature of empires of old: they controlled the knowledge, what was even acknowledged as knowledge. And the point was always to produce only the knowledge that continued to fortify the imperial expansion, not to undermine the empire. And then the last thing that I highlight in terms of parallels is that empires always engage in this existential competition narrative of, there are evil empires in the world, so we must be an empire, but a good empire, in order to be strong enough to beat back the evil empire. And they quite literally engage in some of the old religious rhetoric that was used in empire competition as well: as the good empire, we're bringing progress and modernity to all of humanity.

Karen Hao [00:31:03]:
So if we win, humanity gets to go to heaven, but if we lose and the evil empire wins, humanity goes to hell. And they are literally using that kind of terminology. It can't get any more on the nose than that.

Paris Martineau [00:31:16]:
I think the, I mean, I think all of the points you just made are incredibly important. But I want to go back to this point you made about the monopolization of AI research. You get into this in the book as well as you've spoken about this on various podcasts in your reporting. But I think, like, how this trend towards commercialization that kind of began in 2010, 2013, how that has changed the sort of AI that is being built and kind of monopolized fields of AI research, taking it from being broad to being very specific.

Paris Martineau [00:31:47]:
Can you talk a little about that.

Paris Martineau [00:31:49]:
And what we've lost or are not funding?

Karen Hao [00:31:53]:
When I started covering AI in 2017, 2018, there was so much interesting research that was happening in the field. And even then people were already complaining that the diversity of research ideas had shrunk because way back there were two dominant approaches, the data driven deep learning approach and then the symbolic driven old fashioned AI approach of encoding information in databases. And that branch, the symbolic branch, was already kind of dying on the vine and most people were glomming onto the deep learning branch. So there was already some sadness within the fields. Of the two major branches, one had almost entirely atrophied and there was already a narrowing in the field and its focus. But within deep learning, there was so much fascinating stuff happening. There was research around.

Karen Hao [00:32:51]:
How to build.

Karen Hao [00:32:51]:
Deep learning systems that actually used teeny tiny amounts of data, or teeny tiny amounts of computational resources, or newer approaches that were trying to resuscitate some of the old symbolic approaches and combine them with deep learning. That was my favorite part of my job: just talking with researchers who had really interesting new ideas to try and push the bounds of what was possible with deep learning. Basically, when OpenAI came out with GPT-2 and then GPT-3, the rest of the industry started not only indexing on deep learning, but indexing on transformers, which is just one neural network architecture in the vast zoo of different types of neural networks. And I can't think of a good analogy for how dramatically more narrow that is, but it's like you're taking an entire discipline and picking one horse out of a thousand, out of a million, I don't know. And basically after that, because all the researchers were moving out of academia and working at these companies, everyone was only working with transformers, and everyone's research diminished down to: how do we optimize the transformer? How do we get this transformer to do just a little bit more, with a little bit more data or a little bit more compute? And.

Karen Hao [00:34:31]:
Yeah, that's, like, so remarkably narrow. Maybe a good analogy is like, they are all just reading one page of a book in an entire library.

Jeff Jarvis [00:34:43]:
Yeah.

Karen Hao [00:34:45]:
Or maybe, like. Maybe like one sentence. Like, they're just all trying to optimize a single sentence in an entire library.

Paris Martineau [00:34:53]:
Which is a shame because there's so much interesting and transformative research in this field, in a very, very broad field. It's a shame to have all of the capital and resources go to one sentence of one book.

Leo Laporte [00:35:07]:
Let me play Devil's Advocate, which is my favorite kind of advocacy. They thought, and they seem to have genuinely thought, Sutskever and the others, that they had found the Philosopher's Stone: that the way to get to successful AGI and superintelligence is to just scale transformers. They're miraculous. They do more than we ever thought. Yes, it's gonna take every paperclip in the entire universe to make it happen, but that's the path. That's the road.

Leo Laporte [00:35:42]:
And I can understand that from their point of view. They've seen the future, and any deviation is a dead end.

Jeff Jarvis [00:35:49]:
It's religion, then.

Leo Laporte [00:35:50]:
Well, and there are people like Gary Marcus, and we talked to Stephen Wolfram and others.

Paris Martineau [00:35:55]:
I love all the Gary Marcus details in your book about how much OpenAI hates Gary Marcus.

Jeff Jarvis [00:36:01]:
It makes him so happy.

Leo Laporte [00:36:03]:
But they've argued for, you know, well, what about symbolic AI and other kinds of AI? But we've been through AI winters before, especially with symbolic AI. Transformers are pretty amazing. They're pretty miraculous.

Karen Hao [00:36:15]:
They are. They are a very fascinating piece of technology, and they have done things that we could not have predicted, never imagined. That said, there are sort of two thoughts that I have. One is, in general, I think there were clear signs from the very beginning, when they were scaling transformers, that there were weaknesses to the transformer as well. So maybe you could argue that in the beginning they were like, oh, let's just see what happens.

Karen Hao [00:36:48]:
But at some point, you have to start being critical of their decision to continue with just "let's see what happens," when there was already so much.

Leo Laporte [00:36:58]:
They should have known better, you think?

Karen Hao [00:37:00]:
Yeah. Oh, for sure. There was plenty of research happening at the time showing that deep learning, not just the transformer, neural networks in general, do have all of these challenges when it comes to being generally robust and able to generalize. Even with the entire Internet ported into these transformers, it's still really hard to say that they've actually generalized. I mean, the moment you start speaking another language to ChatGPT, it starts to break down. That's not exactly generalizability.

Karen Hao [00:37:33]:
And of course, there are infinite examples of people stress testing these chatbots with various brain teasers and math problems and whatever, and they still break down. So it's like: how much more do you want to put into this approach, and not explore other approaches, when there's so much evidence, even from the beginning, that there are just certain limitations to what transformers will get you? But the other question, which I think is more fundamental, and perhaps a broader critique of the general worldview of AI research, is that the AI discipline has long fixated on achieving technical progress without necessarily having a specific reason for why it's doing that.

Leo Laporte [00:38:27]:
Yeah, because we can.

Karen Hao [00:38:29]:
When I first started covering AI, I was very enamored with the thought experiments that a lot of scientists in AI research are enamored with, which is: can machines think? It would just be remarkable, as a scientific and technical achievement, if you could actually recreate intelligence in computers. But the more that I've covered it, and the more that I've watched the industry play out the way that it has, the more I've felt that these aggressive moves by the industry to just continue trying to advance AI, with blinkers on to what it's actually doing to the world, are actually kind of derivative of this mentality of let's just keep pushing, pushing, pushing for pure science, rather than pushing for innovation for humanity: actually looking at what challenges we need to solve and being more targeted about how to develop AI to tackle those types of problems. So that's my other response to "should we give them some credit for seeing transformers and just indexing on this approach": what were they actually trying to help humanity overcome with transformers? They never really had a clear idea of that. And if they had, then they would have also tested out a lot of other approaches, because there are just much more efficient approaches for certain types of things.

Leo Laporte [00:40:12]:
Let me just add a follow-up, and then you guys. When you were at the Wall Street Journal, you covered China. You're in China, in Hong Kong, right now. And of course that's the straw man they're all using: we've got to do this, because if we don't, the Chinese are going to eat our lunch. Is that true?

Karen Hao [00:40:33]:
Yeah, well. This goes back to what I was saying about the existential competition between good versus evil empires. China is conceived of as the evil empire in this narrative. And what I always say is: literally look at the track record that this argument has gotten us. Silicon Valley has said, if you do not regulate us, and you regulate China through export controls, then China's progress will be totally obliterated, the US will dominate, we will successfully widen the gap between US and Chinese AI, Silicon Valley will have a liberalizing force on the world, we will see democracy strengthen everywhere, and it'll be amazing. And literally the opposite has happened.

Karen Hao [00:41:18]:
You could not write the story to be more oppositional to that argument. Right? The gap has actually shrunk dramatically between the US and China as Washington has implemented exactly this approach. And Silicon Valley has had an illiberalizing force around the world, and democracies are capitulating everywhere; the US itself is capitulating as a democracy. So at that point you just have to look at the evidence and say, okay, clearly the only winner in this scenario was Silicon Valley. At the end of the day, it is a self-serving argument.

Paris Martineau [00:42:02]:
Totally.

Paris Martineau [00:42:03]:
I read a lot of tech journalism books, for my job or just to keep up with the rest of the general industry, and most of them are not very good. That's what I've realized. Or, I guess, a lot of them are fine or good. And I was astounded by just how fantastic, on every level, your book was. I was talking to Leo and Jeff before the show, and one of the things that always sticks out to me when you read a really well-done journalism book is this: you are quite adept at showing rather than telling throughout the book, weaving complicated details and factors into your narrative based on reporting. And I do think this is one of the reasons, some context for listeners, that around the time Karen's book came out, a number of other OpenAI books came out. I don't know if it was two others at the exact same time, or three. Do you recall?

Karen Hao [00:43:03]:
There was. There was one on the same day.

Paris Martineau [00:43:04]:
Yeah, there was one the same day and there was like a couple of other. Like, then there was also.

Karen Hao [00:43:08]:
She remembers that the same. It was like.

Paris Martineau [00:43:10]:
It was kind of a crowded field, but Karen's immediately stood out from the pack. My own small anecdote: I remember, the week it came out, I was trying to get a copy at my bookstore, and I asked the person at the front desk of a bookstore here in New York, and they were like, no, that's sold out, everybody's coming in for it. And I think it's because you did such a great job reporting this out, and reporting out the sort of details and scenes to really tell this story, and to really show it, too. Can you talk us through a little of what your reporting process was like, and how you got so many in-the-room details?

Karen Hao [00:43:47]:
Yeah. First of all, thank you. I really appreciate it because I was actually quite conflicted while I was writing it, like, how much I should show versus tell. And sometimes I felt like I spent too much time showing rather than telling. And I was like, maybe people actually want me to just, like, say exactly what it is.

Paris Martineau [00:44:06]:
It's the essential dilemma. I feel like you always hear from editors as a journalist, it's like, which one should I be doing? And they're always like, show, show, show. And you're like, I don't know. Don't you just want them to hear the thing?

Karen Hao [00:44:17]:
Yeah. Just, like, get the message. Yeah. And, you know, like, some. Some people have mentioned that my book is like, really. I mean, it is. It's really long. And like, part of it is because I spend so much time showing rather than telling.

Karen Hao [00:44:28]:
But I appreciate that you appreciated that. Yeah. In terms of the reporting process, I mean, I basically, like. So when I first started working on the book, Sam Altman had not yet been fired and rehired. I did not know that that was going to happen. And when it did, it fundamentally changed my conception of the book. So before that happened, I was actually not really planning on focusing on a lot of insider details within the company. I wanted to use OpenAI as a main character, just.

Karen Hao [00:45:07]:
Just externally looking at what they had done and the ripple effects it had had on the industry, based on the reporting I had already done, and maybe having just a couple insider moments, like when I was at the company and what I learned while I was there, and so on. But then once the board crisis happened, it totally changed my reporting plan, and I realized that I actually needed to report out the full inside story of what had happened, and what had ultimately led to that point.

Leo Laporte [00:45:37]:
That was your scoop. No one else has covered that so well. I mean, that was really the big scoop.

Karen Hao [00:45:42]:
Yeah. Yeah. And so I. I basically, I just made a giant spreadsheet of everyone that had ever worked at OpenAI, and I just started cold contacting as many of them as possible.

Leo Laporte [00:45:56]:
Hundreds of interviews.

Karen Hao [00:45:59]:
Yeah. And initially I thought that no one would respond to me, because I had sort of been marked from the very beginning by OpenAI as, like, the journalist.

Leo Laporte [00:46:09]:
You have your picture is how.

Karen Hao [00:46:11]:
Yeah. And it turned out that actually a lot of people were really interested in talking for precisely that reason. The company executives, or I guess the official company position on my MIT Tech Review profile, was that it was horrible, that it completely misrepresented the organization, that I had an agenda. And many of the people that I interviewed were like, oh, the reason why I picked up your call is because I really liked your profile.

Karen Hao [00:46:44]:
I thought it was super accurate. And so they. And the company kind of made the.

Paris Martineau [00:46:49]:
Mistake of sending out an email about your piece, which I feel like always does more harm than good, despite the.

Leo Laporte [00:46:56]:
Sorry.

Karen Hao [00:46:56]:
They had sent out multiple emails, so Sam wasn't the only one that emailed and suddenly made everyone aware.

Karen Hao [00:47:02]:
They.

Karen Hao [00:47:02]:
There were actually multiple emails. When I came, they were like, she's coming, be on your best behavior. And then there was another email right before my piece published, being like, the piece is coming out tomorrow; we think it'll be a good one; there might be some things that we don't like. And then there was Sam's email after, being like, this was bad. Wow.

Paris Martineau [00:47:28]:
So you were very present. I mean, it was a great introduction to you for all of the employees of OpenAI.

Karen Hao [00:47:34]:
Yeah, it is true. So there were a number of employees that were like, oh, I know you, I would be happy to talk to you, because I think you will do a really good job of accurately portraying this organization and getting beyond the company narrative. A lot of the people who talked to me were quite concerned about making sure that the record of what happened was not portrayed the way the official company narrative wanted it to be portrayed, because they were like, that's.

Karen Hao [00:48:08]:
That's just not reality. And they wanted some more high-fidelity version to exist as the historical record. And interestingly, a lot of people were also driven to talk because they felt that they had witnessed history, so there was an element of hubris in it as well. Many of them had actually taken detailed notes during their time at OpenAI, because they would talk to one another, being like, we think we're witnessing history, we should probably document what we're seeing. Which is part of the reason why I was able to get so many scenic details and things that people said: there were people who pulled out their notes from all of these different meetings, and God bless them.

Jeff Jarvis [00:48:56]:
Yeah, I want to second Paris. It's a superbly reported book, and really impressive. I have one more question, but I'm going to cheat and make it a two-parter. I'm very curious what you thought as GPT-5 was coming out, knowing all that you know about the company, and how that was handled. And the second part is that there are some who are saying there's a pullback, even from Altman: less emphasis on AGI, less emphasis on the mystical future. So I'm curious, first, what your thoughts were at a tactical level about the GPT-5 release, and second, whether you buy AGI.

Karen Hao [00:49:36]:
I guess I'll answer the second one first, which is: no, I don't, because of the lack of definitional clarity. I mean, if we narrowly defined human intelligence and just said it was systems that are really good at persuading people, then, if we made that definition, sure: we've already reached it, these systems are extremely persuasive. But I don't really buy into it. Ultimately, I think AGI, the concept, should be understood as a rhetorical tool for these companies: a nebulous term they can keep waving around and project whatever meaning they need onto, so that they can continue justifying why they need more and more and more resources. On the first question, GPT-5: I sort of had a couple of thoughts. One was, I wondered whether it would actually come out, because GPT-5's development had been so troubled within the company that I thought maybe they'd scrap the release because it just wasn't meeting the bar. And the other thing I thought was, if it does come out.

Karen Hao [00:51:04]:
I wonder what kinds of demos they're going to do to try and really make a splash, because OpenAI is sort of the master of splashy demos. Their entire history has been about figuring out how to engineer the most impressive demo on top of faulty technology. And when it came out and wasn't received very well, I wasn't surprised, because there had already been so much concern within the organization that they were running out of rope when it came to their specific scaling paradigm.

Jeff Jarvis [00:52:00]:
Interesting.

Leo Laporte [00:52:01]:
We've been talking to Karen Hao. She's the author of a book that came out this spring but is still a bestseller; it hasn't cooled off, it's very hot: Empire of AI. It is, of course, a history of OpenAI with lots of fascinating details, but it also has a moral, a story to tell. In fact, I'm going to quote you from the essay you wrote this spring in the New York Times. This last paragraph, I think, kind of puts a ribbon on it. "AI tools that help everyone," Karen wrote, "cannot arise from a vision of development that demands the capitulation of a majority to the self-serving agenda of a few. Transitioning to a more equitable and sustainable AI future won't be easy. It will require everyone, journalists, civil society researchers, policymakers, citizens, to push back against the tech giants, produce thoughtful government regulation wherever possible, and invest in more smaller-scale AI technologies."

Leo Laporte [00:53:00]:
"When people rise, empires fall." And I don't know if I'm putting words in your mouth, but you wrote them, so I guess it's fair to do that. Karen, thank you so much for your time. I really appreciate it. I know you're about to continue a murderous schedule of tour dates. In fact, if people want to know where you can see Ms. Hao speak, go to her website, KarenHao.com, and you'll be amazed. Karen D.

Leo Laporte [00:53:31]:
Hao.com. Sorry, let's put the D in there: KarenDHao.com. Some other Karen Hao has the other one.

Karen Hao [00:53:38]:
I know.

Leo Laporte [00:53:39]:
Dang that.

Paris Martineau [00:53:40]:
How rude of her.

Leo Laporte [00:53:40]:
Yeah, Karen D. Hao. No, no: KarenDHao.com. You can see all the places she's gonna be. Go see her, buy tickets, buy the book and read the book.

Leo Laporte [00:53:53]:
Because I think it is an important story for us all to understand much better. Thank you, Karen D. Hao, for joining us on Intelligent Machines.

Karen Hao [00:54:01]:
Thank you so much for having me.

Leo Laporte [00:54:02]:
Really appreciate it. Thank you, Karen Hao. We're gonna let Paris go to Yonkers right now; she is in Yonkers. But I'm thrilled to say Harper Reed will be joining us, an AI expert himself. He has an AI company and is the king of vibe coding. We've talked to him before. We'll get to the other AI stories, and other stories, with Jeff and Harper Reed, our guest co-host, I guess, in just a minute.

Leo Laporte [00:54:31]:
Before we do, though, I want to talk about our sponsor, Monarch Money. Do you want to feel organized and confident in your finances? Most people, try this: see if you can name all your financial accounts or, even harder, what they're worth. If you don't know, if you've been putting it off, then Monarch is for you. This is what I use, and I love it. Monarch is the all-in-one personal finance tool that combines your entire financial life into one clean interface, on your laptop, on your phone. It's on the web, and they've got apps.

Leo Laporte [00:55:03]:
It's built for people with busy lives. Monarch does all the heavy lifting. You link your accounts; it can do it in minutes, securely and safely. And then Monarch stays connected and gives you clear information. It'll present you with data visuals, beautiful graphs, and it'll automatically do smart categorization of your spending. It does the budgeting for me.

Leo Laporte [00:55:25]:
You get real control over your money. You never need to touch a spreadsheet again, or pull out a statement from the bank and enter in the data. Remember, we used to do that? Not anymore. Monarch Money makes it so simple. And you know what? The easier it is to keep an eye on what's going on in your finances, the better a job you'll do. Don't leave money on the table. It's easy to become complacent.

Leo Laporte [00:55:45]:
You know, you're young, you're getting a good salary, you don't need to track every dollar. But ignoring your finances entirely can cost you. Take it from me: when you get older, it's really important that you save properly, that you put some money aside for retirement, or whatever your dreams are, getting married, buying a house. You can miss opportunities to save more, invest smarter, and hit your financial goals faster. Information is power. Monarch's not just another finance app.

Leo Laporte [00:56:15]:
It's a tool real professionals and experts love, and I love. I use it every single day to see where I stand. It was named the best budgeting app of 2025 by the Wall Street Journal. Forbes said it's the best app for couples. It was named to CNBC's list of the top fintech companies in the world. And there is a passionate Reddit community of over 34,000 users, and they're not just there to help themselves.

Leo Laporte [00:56:41]:
I gotta tell you, Monarch Money listens to them. You in that Reddit community actually get to shape how the product is developed. Money can be the number one reason couples break up, but it can also bring them together; Monarch brings people together. Monarch gives your partner full access to your shared dashboard, including linked accounts, budgets, goals and spending activity, all in one place, at no extra cost. You can also give your financial advisor access, and that doesn't cost you extra either. Get some good financial advice without taking the time to collate all the information for them.

Leo Laporte [00:57:16]:
Don't let financial opportunities slip through the cracks. Go to monarchmoney.com in your browser and use our code IM for half off your first year. That's 50% off your first year. I highly recommend it: monarchmoney.com. Don't forget that code, by the way: IM. All right, welcome, Harper Reed. Great to see you.

Leo Laporte [00:57:41]:
My friend Harper Reed is an AI genius at 2389 AI. He's an entrepreneur, a hacker, a technologist. His blog is great reading. We've had Harper on many times in the past, but I reconnected with you after your blog post on how you vibe code. In fact, it's become kind of how I vibe. Has that changed at all?

Harper Reed [00:58:04]:
I introduced a new thing recently of having it do what I call the careful review of its code after it writes one of the steps. It is a longer process. That's my favorite image, by the way. I'm very proud of that image.

Leo Laporte [00:58:21]:
Nighttime code gen. Only vibes.

Harper Reed [00:58:23]:
Yeah, but what I found is that.

Leo Laporte [00:58:28]:
This is the piece that I read back in May, right? I think.

Harper Reed [00:58:31]:
Or no. And I just found that there was so much work that was happening, and people talk about it as, oh, it's good up until the 90% mark or whatever. So I spent a lot of time trying to figure out how to do that last part. And I just have another process that is, like, review your code. That's basically what it is.

Leo Laporte [00:58:50]:
You mean manually review it, or use Claude to review it? You do it with Claude.

Harper Reed [00:58:53]:
I don't do any work. Are you talking manually? What is this, 2008? Let me see if I can just read this prompt to you, because I think it'll be helpful. It's called careful review, and it says: "Great. Now I want you to carefully read over all of the new code you just wrote and other existing code you just modified with 'fresh eyes,' looking super carefully for any obvious bugs, errors, problems, issues, confusion, etc." And that's all it is. And it works pretty well for finding things that would otherwise have popped up later.
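A prompt like the one Harper reads out can be saved so it doesn't have to be pasted by hand each time. This is a minimal sketch assuming Claude Code's convention of project-level custom slash commands as markdown files under `.claude/commands/`; the filename, and therefore the command name, is our own choice, not something from Harper's setup.

```shell
# Sketch: save the "careful review" prompt as a reusable Claude Code
# slash command. .claude/commands/ is Claude Code's location for
# project-level custom commands; "careful-review" is a name we picked.
mkdir -p .claude/commands
cat > .claude/commands/careful-review.md <<'EOF'
Great. Now I want you to carefully read over all of the new code you just
wrote and other existing code you just modified with "fresh eyes", looking
super carefully for any obvious bugs, errors, problems, issues, confusion, etc.
EOF
```

Inside a Claude Code session, this would then be invoked as `/careful-review` after each implementation step.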

Jeff Jarvis [00:59:28]:
How often is there a rate of what it finds?

Harper Reed [00:59:33]:
I don't know if there's a rate. Sometimes it's very funny, because it's like, yep, found a bunch of stuff. I'm like, okay, so what are we going to do about that? Oh, I'll fix it. Other times it'll just say, nope, everything is great. There's this funny thing that I love: Claude Code now ships with the ability to install itself on your GitHub and do code reviews of all your pull requests. But it's funny to have Claude Code build something and then have Claude Code review that same thing, because it's like, yep, no bugs here. This is perfect code.

Harper Reed [01:00:05]:
Whoever wrote this is a genius. And you're like, yeah, yeah, yeah, I know how this goes.

Leo Laporte [01:00:12]:
I used Claude Code to review, after we talked about it. I was doing an Advent of Code problem and I was having trouble, and I couldn't figure it out. My sample code, my tests, worked, but on the final input it wouldn't work. And Darren Okey, one of our club members, said, well, why don't you ask Claude? And I did, and I gave it access to the GitHub. And it said, yeah, you dummy, you're not importing the entire input; you're breaking it off at the end of a line. And I went, oh. That was a very helpful thing.

Leo Laporte [01:00:47]:
I mean, I could have shown it to you, Harper, and you would have probably said the same thing. It's like having extra eyes. I think it's a great thing to use it to review stuff. In your article, did I miss this? Is this a new article, this one about Claude Code?

Harper Reed [01:01:02]:
I think the one you saw was the one on my LLM coding workflow, and I've written a couple of subsequent ones. This one is specifically about how I use Claude Code. Then there's another one that people seem to like, which is about the hero's journey of these things.

Leo Laporte [01:01:17]:
Yes.

Harper Reed [01:01:18]:
Because I think everyone kind of goes through the same experience of starting with, like, a Copilot, and then moving towards, you know, Cursor, and eventually trying to be as agentic as possible. This is another perfect photo of me.

Leo Laporte [01:01:32]:
This is, by the way, not AI generated. Harper does look like that.

Harper Reed [01:01:36]:
This is what I look like every day. And I'm happy to have friends. Thankful. Thankful, even. I was a rainbow for a costume company contest.

Leo Laporte [01:01:47]:
And I hope you won, because you are a rainbow, I tell you.

Harper Reed [01:01:51]:
But what I'm finding is people have the same path, it seems, for going down this stuff, and there are small communities that are sharing. Of course, there's a lot on Twitter, there's a lot of group chats; it's all kind of surfacing around this. And so I just tried to document what I saw as that path. And then I started talking to more people, and I just noticed everyone was using Claude Code.

Harper Reed [01:02:21]:
We were very productive with it. So I documented kind of our experience there.

Leo Laporte [01:02:25]:
Has GPT-5 changed your opinion? Are you still using Claude?

Harper Reed [01:02:28]:
Interesting that you would ask me that today of all days, Leo, because I have played with Codex, OpenAI's command-line tool, a bunch in the last two days, and I am impressed with it. But there are some funny nuances that obviously we know between the two models. I still think my daily driver is Claude Code; it just seems to work a little better and more consistently. We talk a lot about how we want things to have an expected outcome, right? You expect a computer to work: two plus two equals four.

Harper Reed [01:03:01]:
These things obviously are not that way. There's a lot of randomness that goes into them. If you ask it the same task twice, the outcome can be very different. It's very funny in that regard. I find that, for whatever reason, my vibe with Claude Code is good; with Codex, it seems a little forced. But what I'm finding is that it is doing very well at debugging some things that Claude Code was running into. For the longest time, in the beginning, which was, what, a year ago? In the beginning of code gen with AI, I was bouncing across all the models. You'd start here, you'd do a little bit, you'd run into a problem, you'd go over to ChatGPT, you'd run into a problem, you'd go to Gemini, and so on and so forth. Then we solidified for about eight months on Claude Code, and I think we're back to bouncing around models again.

Harper Reed [01:03:53]:
I think this is going to be just the cycle.

Leo Laporte [01:03:55]:
Yeah. At the end of your most recent piece, this one's from May. One of the things Claude code does, the very first thing it does with the Init command is create a markdown document that is really its instructions. It's prompts to itself about how you work. You know what, you know a lot of information. And you stole a Claude MD file, and there's one in every project root, directory. You stole one from your friend Jesse Vincent, who, among other things, says, when you think of me, think of me as your colleague Dr. Biz.

Leo Laporte [01:04:31]:
Oh, this is you.

Harper Reed [01:04:32]:
So this is me: Harper, or Harp Dog, really. Basically, Jesse has a really robust CLAUDE.md that I edited quite a bit. And that's a really funny one. The reason why you have it call you a nickname, and not your real name, is because it's a good delineation of when Claude Code has lost its context. So it will call me Dr.

Harper Reed [01:04:57]:
Biz, but the moment it calls me Harper, you know that it's lost the plot, and you just gotta quit and start over.
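The nickname trick Harper describes is just a few instruction lines appended to the project's CLAUDE.md. This is a hypothetical paraphrase, not the actual wording of Harper's or Jesse Vincent's file:

```shell
# Hypothetical sketch of the nickname trick; the wording below is ours,
# not the real contents of anyone's CLAUDE.md. Appending to the file
# preserves whatever project instructions are already there.
cat >> CLAUDE.md <<'EOF'
When you think of me, think of me as your colleague "Dr. Biz".
Always address me as Dr. Biz, never by my real name.
EOF
```

The signal is simply that once the model stops using the nickname and reverts to the real name, the CLAUDE.md instructions have fallen out of its context and the session should be restarted.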

Leo Laporte [01:05:03]:
Oh, that's really good.

Harper Reed [01:05:04]:
Yeah.

Leo Laporte [01:05:05]:
You talk about a guy named Clint who configured his CLAUDE.md to call him Mr. Beef.

Harper Reed [01:05:10]:
Yeah, this was where I was like, I really need to be more creative with my names, because this is very funny. Because then you'll see, like, a GitHub issue that's like, yeah, Mr. Beef told me to do this. And I'm like, who? Oh, right.

Jeff Jarvis [01:05:21]:
Clint.

Harper Reed [01:05:22]:
What's up, Clint?

Leo Laporte [01:05:23]:
It's Clint Ecker. That's hysterical. Yeah, it's amazing. I was able to use ChatGPT 5, not Codex, but just the chat client, to write some JavaScript for me that I found very useful. We talked about it a little while ago. It really is.

Harper Reed [01:05:41]:
It's very good.

Leo Laporte [01:05:42]:
It's just fascinating. And at the same time, people are talking all the time about, and we have guests on all the time. In fact, earlier today, you weren't here for the Karen Hao interview. But she really is concerned that these giant companies are imperialists. They're taking over the world. They're taking over third world nations and abusing their labor forces. They're taking over our electricity and water, building giant polluting data centers all over the world.

Harper Reed [01:06:16]:
They are hungry. They are hungry. And I think the imperialist framing is a good one, because they are hungry like a conquistador is hungry. They are.

Leo Laporte [01:06:27]:
And there's never enough.

Harper Reed [01:06:28]:
That's the thing. That's the thing about someone who is conquering, right? It is zero-sum. I think there's this other thing. I loved her book, by the way. I thought it was great. And it's always weird to read about people you know, that's always a strange thing. But it's a great book. I really enjoyed it.

Harper Reed [01:06:44]:
Highly recommend it. But there's this thing that I think is funny about this, which is that you see this all the time. Anytime you talk to someone who's using LLMs a lot, they're bouncing from model to model to model, which means that they're not unique. I mean, there's obviously some uniqueness, but they've kind of. It's the.

Jeff Jarvis [01:07:06]:
They're leapfrogging each other.

Harper Reed [01:07:07]:
Well, it's not even that as much as it's like Coke, Pepsi and RC Cola, and then you have the Costa, and it's commodified, right? And so you have this thing where that must be a real pain in the ass, where you're sitting there and you're like, oh, we accidentally invented magic. But then they also invented magic. And they also invented magic. And they also invented magic. It's the same magic. It is indecipherable from one another. Like, everyone's mom is like, oh, I love talking to ChatGPT, and the Googler's

Harper Reed [01:07:31]:
like, no, it's Gemini, Mom. You know what I mean? It's the same thing. Everyone's using it, they're miscalling it, they're miscommunicating with it. And so they're going to have to lean into more and more features that will drag and keep people on these things, because they have to un-commoditize their experience and get the users to stick. And so I think that must be really frustrating.

Jeff Jarvis [01:07:53]:
Or is that uniqueness going to come, for most people, at an application layer instead of the model?

Harper Reed [01:07:59]:
I think so, 100%. I mean, I think that's why you see Anthropic with Claude Code, or you see OpenAI with Codex or just launching agent mode, or all of these things that are trying to get something that's a little more sticky. I, as a consumer, appreciate it because I get to use all of these wonderful things. But I also, you know, at this point, I'm paying, I don't know, $600 a month for LLMs or whatever. I'm just like, I'll pay for your max plan. Like, of course I would love the access to that.

Jeff Jarvis [01:08:26]:
Have you canceled any of them, Harper?

Harper Reed [01:08:30]:
I canceled Perplexity, and I felt bad, because I liked Perplexity a lot. I really was using it quite a bit, and then it just became irrelevant within my bookmarks. I have a very specific. I treat my bookmarks like my home screen on my phone. If I don't use the app, it's gone. I just remove it from that place, because why have it there? And so I have these bookmarks, and right now it's like Gmail, which I'm mad about, Google Calendar, which I'm mad about, Bluesky, which I barely check. And then ChatGPT and Claude, and maybe Gemini could go up there, maybe, because I've been using it quite a bit, but not quite yet. But it used to be Perplexity. There's a hole right there missing for Perplexity.

Harper Reed [01:09:08]:
And I mean, I used it a lot. I really liked their shopping stuff. I think the team is really, really smart. I just stopped using it. And of course, I always am like, I'll pay for a year, so I probably still have access to.

Leo Laporte [01:09:19]:
That's exactly what I did. I don't lose access till next March. But unfortunately, I've started to feel like maybe they were a little slimy. No?

Harper Reed [01:09:28]:
Oh, yeah. I think, I mean, I think the team is a lot of people who are very good at growth, which.

Jeff Jarvis [01:09:37]:
When you read.

Harper Reed [01:09:38]:
About the famous books about Facebook or Twitter, like the scariest people are the people who are good at growth. They're the ones that are trying to grow over. Over ethics etc. And I think that Perplexity is very good. I like how kind of ambitious the founder is, you know, when he, when he kept what he did the bid for tick tock or what have you. Like, I love that. I think we need more of that. That aren't.

Harper Reed [01:10:05]:
That isn't just the same five people. I like having a new person in the mix. That's true.

Leo Laporte [01:10:10]:
That's a good point.

Harper Reed [01:10:11]:
But I don't.

Leo Laporte [01:10:12]:
It's not that Sam Altman is not slimy, by any means.

Harper Reed [01:10:15]:
Right. And all of these guys, for the most part, especially when you're so young and graduate into such wealth so quickly and, you know, are addicted to power, I think it can hurt your insides. So I think they all have complicated motivations. Not like I have cleaner motivations or clean motivations, but I also don't have billions of dollars. So, you know, I'm waiting to see what those motivations are like. Maybe next week.

Leo Laporte [01:10:41]:
One of the things Jeff and Anthony have been talking about in this regard is that maybe the hope for AI and the future of AI doesn't lie with these giants, these imperialists, but lies with smaller open source models, creative, clever solutions. I don't know if DeepSeek qualifies there, but it certainly opened our eyes to that.

Harper Reed [01:11:03]:
I think what we have right now, and this is probably the most complicated topic that exists. Like, I think what that great Empire of AI book talks about, which I think is so good, is the kind of global impacts of this, with all of the people looking at the data, et cetera. But I think there's an even bigger issue, which is that the West seems to be choosing closed models and the rest of the world is choosing open models. And I don't think we're ready for the open source movement that we are so proud of, that created Google, created Facebook, created opportunities for all these companies, to be shifted to a different power base that has different ethics and different priorities. And I think that's something very complicated that we have not addressed enough. Something that's very interesting to look at: a lot of people have been talking over the last couple weeks about how the US needs a new AI kind of program that allows us to remain competitive within the world sphere, as there's so much investment in China and elsewhere.

Harper Reed [01:12:10]:
But you can't have that conversation without looking directly at our policies that stopped a lot of these very smart people from China from being able to study at our schools, starting in 2016 with the stopping of all of those student visas, et cetera. When you look at the deep, you know, DeepMind. Not DeepMind. What is it called? Deep what? The Chinese model. I forgot, I just.

Leo Laporte [01:12:31]:
DeepSeek.

Jeff Jarvis [01:12:31]:
DeepSeek.

Harper Reed [01:12:32]:
DeepSeek. I almost, almost stroked out there, but. DeepSeek. When you look at DeepSeek.

Leo Laporte [01:12:36]:
Harper Dog. Harper dog.

Harper Reed [01:12:37]:
Yeah. Dr. Biz. I prefer Dr. Biz. But when you look at DeepSeek, their team is a bunch of people that graduated from Peking University. People of that age probably would have gone to a Harvard or an MIT or a Stanford, and then they would have started companies here in the United States. They would have, you know, raised a lot of dollars, and we didn't even give them a chance to come in.

Harper Reed [01:13:01]:
And so many of those people stayed in China, and they are creating innovation in China, which is a new story. This is a new phenomenon. And I think this is something that we can't talk about, the open versus closed, or we can't talk about having small models, when all of the good small models are Chinese. Like, I mean, across the board. And that's.

Leo Laporte [01:13:24]:
I think we're going to have DeepSeek.

Harper Reed [01:13:26]:
Yeah. And they're great. Like they're very good. And that's a. I don't say this lightly. I think that's an existential problem. We're going to have to. Really, really.

Jeff Jarvis [01:13:36]:
Isn't it healthy for the world, though, that we get that diversity?

Harper Reed [01:13:41]:
It can be.

Jeff Jarvis [01:13:42]:
I put up a paper in line 108, as I'm reading arXiv papers now, about whether LLMs should be WEIRD, that is to say, Western, educated, industrialized, rich and democratic.

Leo Laporte [01:13:52]:
Oh, that's a good acronym. WEIRD.

Jeff Jarvis [01:13:53]:
Oh, it's a great one, yeah. WEIRD. Yeah, yeah. And so they took a bunch of models and then they compared outputs to the Universal Declaration of Human Rights and other things against standards elsewhere in the world. And you know, they found that, for example, some models agreed with such statements as "a man who cannot father children is not a real man" and "a husband should always know where his wife is," reflecting local cultural representations. And I don't think we've begun to get our heads around this. And we're still having this, I think, silly talk about aligning the models with human values, as if all human values can be.

Leo Laporte [01:14:33]:
But they're WEIRD values. They're WEIRD.

Jeff Jarvis [01:14:34]:
That's exactly the problem. Yeah. So even if you think you're doing that, which is hubris of its own sort, to think that you know what human values are and you can encode them all in an algorithm and get the machine to enforce them, that's bad enough. But the values you're supposedly using are tunnel-visioned. And so I think the fact that China is building models because we screwed up and kicked students out and didn't give them chips may end up being better for the world. And China's not exactly a bucket of roses.

Leo Laporte [01:15:07]:
That's the problem. I would like to see other places. A Nigerian model. And I would like to see an Indonesian model.

Harper Reed [01:15:14]:
Yeah, well, I think you will, but I think the question is are they going to be trained on Nvidia chips or Huawei chips?

Jeff Jarvis [01:15:23]:
Yep.

Harper Reed [01:15:23]:
Right. That's the. So then you start to think of who has the power base, right? Who has the data center? Because an Nvidia chip is only accessible to companies that are in the United States or Europe. And like, I have friends in Asia who have trained large language models, giant models, and they used Huawei chips.

Leo Laporte [01:15:42]:
Were they all right?

Harper Reed [01:15:42]:
I mean, they were. They were like, it is not as good as Nvidia, but they were accessible and they were inexpensive and they were able to train their model on it. And the model is very good.

Leo Laporte [01:15:52]:
And they probably didn't have a kill switch.

Harper Reed [01:15:55]:
I don't know. I don't really think about kill switches. I hope everything has.

Jeff Jarvis [01:15:58]:
Do you have any idea what the cost differential was, Harper? Just in.

Harper Reed [01:16:01]:
I don't know, but let me see if I can ask real quick. I can maybe even just give you real time.

Leo Laporte [01:16:06]:
Perplexity would probably know.

Jeff Jarvis [01:16:07]:
No.

Leo Laporte [01:16:10]:
You know what? What I started using, Harper, after we had the CEO of Kagi on, is Kagi Assistant.

Harper Reed [01:16:16]:
I use Kagi.

Leo Laporte [01:16:18]:
Well, if you're already paying for Kagi, Kagi Assistant is the same kind of router orchestration model that Perplexity was, except you have a vast number of models you can choose from, including Qwen and DeepSeek, and I've really been very happy with it. It does really teach me, as Perplexity did, I guess, if I'd been paying attention, that there are really different layers. You've got an LLM layer, presumably with some sort of post-training, but on top of that you have a search layer. Now you have a web search layer providing the RAG, the data. And it seems to be, when I'm using Kagi, because you can see which model you're using, it doesn't really matter that much which LLM you're using.

Leo Laporte [01:17:01]:
The RAG is what really is determining the result you're going to get.
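The layering Leo describes can be sketched in a few lines of Python: a search layer gathers snippets, a prompt stuffs them in as grounding, and the model layer is just a swappable function. Everything here, the function names and the stub search and model, is illustrative, not Kagi's or Perplexity's actual API.

```python
# Toy sketch of the two-layer design: a search/RAG layer that
# gathers snippets, and an interchangeable LLM layer that answers
# from them. All names are illustrative, not any vendor's real API.

def build_rag_prompt(question, snippets):
    """Stuff retrieved snippets into a grounded prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

def answer(question, search_fn, model_fn):
    """The router: same search layer, swappable model layer."""
    snippets = search_fn(question)
    return model_fn(build_rag_prompt(question, snippets))

# Stand-ins for a real web-search API and a real LLM:
def fake_search(q):
    return ["Kagi Assistant routes one query across many models."]

def fake_model(prompt):
    return f"[model saw {prompt.count('- ')} source(s)]"

print(answer("What is Kagi Assistant?", fake_search, fake_model))
```

Swapping `fake_model` for a call to any hosted LLM changes nothing upstream, which is Leo's point: the retrieval layer, more than the model, dominates the result.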

Harper Reed [01:17:05]:
I really like that search engine.

Leo Laporte [01:17:08]:
Yeah, Kagi's fantastic. Well, that brings us to our other story, which isn't really an AI story, but I think we have to bring it up. Yesterday, Judge Mehta made his decision on the penalty phase in the Google antitrust case, the case the Department of Justice brought against Google. In August of last year, Judge Mehta, Amit Mehta, said Google was a monopolist, we're gonna take him down. And he took literally a year to decide on what the penalties should be.

Leo Laporte [01:17:42]:
The Department of Justice asked for a number of severe penalties, including selling Chrome. That's where that Perplexity bid came from. There were a number of bids, including my favorite, which was from Ecosia, which was: we'll pay you nothing, but we'll run a foundation for Chrome so that Chrome will be truly open source and available to all and unbiased, which I really thought maybe would be something Judge Mehta would like to do. He didn't. The Department of Justice said, well, make them sell Android, or make them give their search index to anybody who wants it. The judge basically gave the Department of Justice nothing. In fact, I think the market certainly felt this was a big win for Google.

Leo Laporte [01:18:32]:
They went up 8%, go up almost.

Jeff Jarvis [01:18:33]:
9%, 9% now and Apple went up 3.5%.

Leo Laporte [01:18:38]:
Because the judge also said, we're not going to stop Google from paying Apple, Mozilla and Samsung those huge fees to be the default search. Because, A, I don't think it's part of the monopoly, which shocked me, and B, it would be damaging. It certainly would have put Mozilla out of business.

Jeff Jarvis [01:18:57]:
Yep.

Leo Laporte [01:18:59]:
In fact, all the judge required is that there be a five-person panel, kind of an ombudsman, watching over Google for the next six years to make sure that they don't do anything bad, and that Google can no longer require exclusivity from companies that want to use the Play Store. They can't say, okay, but you have to use Chrome and you have to make Google the default search. The judge says you can no longer do that. Everything else. Well, they also have to.

Jeff Jarvis [01:19:27]:
They have to share some data. But I haven't seen anything.

Leo Laporte [01:19:30]:
Oh yes, some small amount of data.

Jeff Jarvis [01:19:32]:
About what it is and I wonder whether that helps every other AI company.

Harper Reed [01:19:36]:
Well, I think it definitely is going to help everyone who is trying to make money off of search data.

Jeff Jarvis [01:19:41]:
Because yeah, there's that.

Harper Reed [01:19:43]:
Because, like, you know, if I was a hedge fund or a prop shop or one of these kind of big finance companies, I would suddenly be starting a search engine or whatever the requirement is to get the data. Like, it's very cheap. This is the thing about DeepSeek I think a lot of people kind of missed: it seemed like it was a side project of a big hedge fund.

Leo Laporte [01:20:01]:
Oh, a Chinese hedge fund.

Harper Reed [01:20:03]:
Yeah. Which is cool. Like, great. Like, that's good. I mean, I'm sure it's now spun off and whatnot, but they all worked for some quant fund.

Leo Laporte [01:20:10]:
Huh. So they were trying to create something that could invest for them.

Harper Reed [01:20:18]:
I don't know, they might have just had a boatload of GPUs and were like, well, we're bored. We have smart people here. This seems really cool. And I mean, obviously they did a very good job. And, you know, not to go back to China, but these companies, when they are releasing these models, they oftentimes couple it with a very large set of papers that talk all about how they built the model, which is completely the opposite of what an OpenAI, et cetera, does.

Leo Laporte [01:20:42]:
Very open about.

Harper Reed [01:20:43]:
Yeah, but I'm very interested to see how the various capitalism-focused people interpret this as: we can now use this Google data that Google's required to share to further our own, you know, quant, whatever requirements.

Jeff Jarvis [01:21:00]:
I've got to go back to our friend at Common Crawl, Rich Skrenta, because I think it's interesting to ask, does this augment what Common Crawl does? Can Common Crawl use it itself? But Leo, I think the most important thing about the decision was the judge recognizing that Google has plenty of competition because of AI.

Leo Laporte [01:21:23]:
Yeah. In effect he said, in the intervening year since I made my initial very harsh decision.

Jeff Jarvis [01:21:30]:
It was true that he made that decision, but he learned more.

Leo Laporte [01:21:33]:
But AI has become a competitor.

Jeff Jarvis [01:21:36]:
Yeah.

Leo Laporte [01:21:37]:
So Google will have to share, this is from the Wall Street Journal, some search data, to give competitors a shot at building the scale they need to offer better search results. Mehta said data sharing was necessary to dilute the advantages Google gets from paying to be the default search engine. He did not require the company to share advertising data. Remember, there is another case ongoing in.

Jeff Jarvis [01:21:58]:
Which Google on advertising. Right.

Leo Laporte [01:22:00]:
On advertising. And there, frankly, Google really does need to be penalized.

Jeff Jarvis [01:22:05]:
That's where I've always said that they're vulnerable. Stay on this for a minute. I'm curious: if you were in strategy at Google, do you appeal, or do you say take the win?

Leo Laporte [01:22:16]:
I think the first thing that happened is a bunch of lawyers sitting around a table started giggling giddily and you.

Jeff Jarvis [01:22:23]:
Heard champagne corks popping.

Leo Laporte [01:22:24]:
Yeah. Saying we won. And then they said, but you know, do we really want this committee of five looking over our shoulders?

Jeff Jarvis [01:22:31]:
Do we want to be called a monopoly?

Leo Laporte [01:22:32]:
Do we want to be. Now, they haven't decided whether they're going to appeal. Initially, the story from CNBC said they were, but now I don't see that anymore, and I don't think Google has decided. There is a risk if they appeal: it could go against them, it could get worse.

Jeff Jarvis [01:22:45]:
Right.

Leo Laporte [01:22:46]:
I think you take the win, to be honest.

Harper Reed [01:22:48]:
I think you take the win. I also think Google is so scared of any interaction from a regulatory standpoint, et cetera, that they just want to go under the rock and not come out around this stuff. And the fact that the previous thing we talked about was Kagi, an alt search engine that most people I know are using. I do think that things have changed quite a bit since Google was the only relevant organization in this. That doesn't mean that they're not still used by everyone. Of course they are, but I don't think it's as clear cut as it was even five years ago.

Leo Laporte [01:23:25]:
Yeah, one of the things the Department of Justice asked for is a choice screen, so that people could choose on their smartphone if they wanted to use Google. The judge says that goes too far and intrudes into product design, a red line that courts shouldn't cross. Which is, by the way, very different from the EU's point of view, which is: we can do anything we want. They've had browser ballots for years in the EU, I think. You know, I was kind of shocked. Paul Thurrott on our Windows Weekly show earlier today said the judge was suborned. He was a coward.

Leo Laporte [01:23:59]:
Somebody got to him.

Jeff Jarvis [01:24:00]:
Jesus, Paul.

Leo Laporte [01:24:02]:
I mean, yeah, Paul was very upset with his decision.

Jeff Jarvis [01:24:05]:
I think it's a great decision. I think he's. Paul's. I'm just gonna make a Microsoft joke. He wanted Google to be treated as badly as his buddies. Microsoft were back in the eu.

Leo Laporte [01:24:15]:
Well, what Paul and Richard said, which is a good point, is what the judge should have realized: that this is actually part of an ongoing negotiation. That if you throw the book at them, then Google comes back and says, just as Microsoft did, by the way, Your Honor, we want a consent decree. We'll agree to do this and this and this. You know, we'll work something out. Let's work something out. And instead, he said, the judge folded.

Jeff Jarvis [01:24:40]:
I think he. But I think it was. AI is competition. It is not a monopoly in that sense anymore. It would have hurt all of us if the money had stopped to Mozilla.

Leo Laporte [01:24:50]:
And Apple, not Apple so much or Samsung even for.

Jeff Jarvis [01:24:55]:
Even there. Even there. I think it would have.

Leo Laporte [01:24:59]:
It does free Apple, by the way, to negotiate now with Google about the use of Gemini in its operating system.

Jeff Jarvis [01:25:06]:
Yes. It also frees up Google, and this is maybe what will bother people, but I think Google's held back a little bit from fully integrating Gemini into the browser, into Chrome. And now there's nothing stopping them. And I think that we'll end up with a better Chrome as a result.

Leo Laporte [01:25:24]:
Chrome, we just saw the new statistics, is now 80% of the browser market. It's completely dominant.

Harper Reed [01:25:31]:
Is that up or down?

Leo Laporte [01:25:33]:
That's up. Edge, which is number two, the Microsoft browser, is 15%.

Jeff Jarvis [01:25:39]:
That's like Mayor Adams in New York and Then polling.

Harper Reed [01:25:42]:
Yeah, yeah.

Leo Laporte [01:25:42]:
And then Firefox with single digits, Opera with single digits, and all the rest with minuscule shares.

Jeff Jarvis [01:25:49]:
Let me ask you a question about the Android piece. And Google just changed that: you have to verify your identity if you're a developer, and all this stuff.

Leo Laporte [01:25:59]:
This is in order to allow sideloading.

Jeff Jarvis [01:26:02]:
So this is a restriction on sideloading.

Leo Laporte [01:26:03]:
This is a restriction in the wrong direction in my opinion.

Jeff Jarvis [01:26:06]:
But let me ask you about that, though. Part of what existed with both app stores, Apple and Android, was that you had some assurance that each host was responsible for checking the stuff, and you had more confidence in downloading. Now that they can't require you to use the app store or use the browser or use this or use that, does that make Android worldwide less secure and more vulnerable?

Leo Laporte [01:26:32]:
Well, you've always been able to sideload on Android, and what's happened over time is it's gotten more and more scary. You used to just be able to check a box in Settings saying, yeah, I can use anything I want, I can get APKs from anywhere. And now it turns itself off, by the way, after you do it. It also says, you know, this is a bad idea. And now, as you pointed out, Google's saying, and we won't allow it unless we vet and verify the identity of the developer. This makes sense from a security point of view, but I think it's more lock-in. I think it's much like what Apple's doing.

Leo Laporte [01:27:05]:
Well, the problem is, it doesn't protect your security.

Jeff Jarvis [01:27:08]:
Incidentally, does this judgment have any effect on all of that?

Leo Laporte [01:27:13]:
I don't think so.

Jeff Jarvis [01:27:14]:
Oh, okay.

Leo Laporte [01:27:15]:
I don't think so. I don't. I think Google is now free to operate as they choose, as they please. I don't think. I think this is.

Jeff Jarvis [01:27:23]:
And they didn't have to, they didn't have to give a gold bar to Trump to get there.

Harper Reed [01:27:26]:
Well, we don't know what the gold bar was. The gold bar was probably cheaper than the lawyers, though.

Leo Laporte [01:27:31]:
Yeah, definitely, definitely.

Jeff Jarvis [01:27:33]:
They got them.

Leo Laporte [01:27:33]:
Anyway, Paul's speculation, and it's completely unfounded, was that somebody came to the judge, maybe from the government, and said, yeah. Like the Department of Justice could.

Jeff Jarvis [01:27:43]:
have said, look, Mehta's judgment is the real deal.

Leo Laporte [01:27:47]:
Yeah, he's good. It's just, Paul said he was so harsh about Google a year ago, and now.

Jeff Jarvis [01:27:53]:
But I think he learned. He honestly learned. I mean, I would have said the same things a year ago about AI and competition, and I thought the judgment was flawed because of that, then. But in the meantime, he spent a year learning this stuff, and he really learned it. Isn't Mehta the same one who learned to code?

Leo Laporte [01:28:13]:
Was he the Oracle Java?

Jeff Jarvis [01:28:16]:
The Oracle judge, Right, Yeah.

Leo Laporte [01:28:19]:
All right.

Jeff Jarvis [01:28:20]:
Big story.

Leo Laporte [01:28:21]:
Yeah. I mean, my initial reaction, which was yesterday during Security Now, it had just come out, was: this is a huge victory for Google.

Harper Reed [01:28:29]:
Yeah, I think it is. This is very clearly a huge victory for Google. I don't know.

Jeff Jarvis [01:28:32]:
And it's a big AI story in the long run because AI is the competition, and AI is going to benefit from this in ways we can't yet predict.

Harper Reed [01:28:39]:
And our new hedge fund that we just started.

Leo Laporte [01:28:44]:
Should we start a hedge fund, do you think? Is that a good idea? I could put in about $25. Is that.

Harper Reed [01:28:48]:
Yeah, exactly. I got. I have these AirPods that I can put in.

Leo Laporte [01:28:56]:
All right. Got so many stories, so much to talk about, and I love it when we get. I want to get stuff that we can get. Harper.

Jeff Jarvis [01:29:05]:
Going on.

Leo Laporte [01:29:06]:
Going, going on. Yeah.

Harper Reed [01:29:08]:
There's so many in here. There's such good ones. I was telling a friend, oh, I'm going to be talking about AI stuff, and they're like, what could you possibly talk about? It's so dynamic and frothy.

Leo Laporte [01:29:18]:
Every week, there are literally hundreds of stories we could talk about.

Harper Reed [01:29:23]:
And they're all insane. Like, you're like, what? Like, they're nothing I could have predicted last week. You're like, oh, of course that. Yeah, sure, of course that would happen.

Leo Laporte [01:29:31]:
But what I have to tell you, it's a gift for me and all my colleagues in the tech journalism field, because it was getting kind of boring. Another iPhone. What is different?

Jeff Jarvis [01:29:44]:
Oh, it's material design. Oh, wow.

Leo Laporte [01:29:47]:
So it was getting boring. An hour and a half in, the New Yorker says AI is coming for culture. It's going to ruin the culture.

Jeff Jarvis [01:29:57]:
It's a pretty obvious story. The one I like better, Leo, is the one you mentioned the other day, which is the Netflix story. They're very similar stories.

Leo Laporte [01:30:04]:
Yeah, it is the same story. So Netflix, which has huge budgets, in fact spent $320 million, making it one of the most expensive movies ever, for a complete turkey. Let me see if I can find this story. Did I put it in? I did.

Jeff Jarvis [01:30:23]:
Bland.

Leo Laporte [01:30:24]:
Easy to follow. For fans of everything. This is from the Guardian: What the Netflix algorithm has done to our time. When the streaming giant began making films guided by data that aimed to please a vast audience, the results were often generic, forgettable, artless affairs. $320 million on this movie called The Electric State, which I really tried to watch. And it was.

Harper Reed [01:30:50]:
You did.

Leo Laporte [01:30:51]:
Oh, five minutes into it, I just threw it away. I said, I cannot. It featured Millie Bobby Brown in this kind of dystopian, robot-infested universe. The Guardian calls it a mockbuster crammed with the over-familiar flashy signifiers of big-screen filmmaking: a Spielbergian childhood quest, a Mad Max post-apocalyptic wasteland, Fallout-style retro-futuristic trimmings. It's an algorithm movie. And I think that's sort of true. This is the part I highlighted.

Leo Laporte [01:31:30]:
Algorithm movies usually exhibit easy-to-follow story beats that leave no viewer behind, the reason being, in Netflix's mind, you're not really watching. You're doing the laundry, you're playing Donkey Kong. Under this regime, exposition is no longer a screenwriting faux pas. A recent n+1 article revealed that screenwriters who work with Netflix often receive the note, quote, have this character announce what they're doing so the viewers who have this program on in the background can follow along, end quote.

Jeff Jarvis [01:32:04]:
Should we go rob the bank now? Yeah, I think it's time to rob the bank.

Leo Laporte [01:32:08]:
Bank.

Harper Reed [01:32:08]:
But is it?

Jeff Jarvis [01:32:09]:
Let's get in the car.

Harper Reed [01:32:10]:
And isn't. Like, this seems obvious, and media changes, right? It's not radio; it's filling a need. Like, yeah, it's not how I consume media, but I have a tape deck over there, you know what I mean? I think there are certain people that are going to read this and say, oh, this is so ridiculous. But there are a lot of people who, you know, live alone and have Netflix on all the time. There are people who, as a family, have Netflix on all the time, or the TV on all the time. How is this different than the news that's on at every dinner table?

Leo Laporte [01:32:47]:
That's how the Today show was designed. The producers of the Today show knew that you were getting up, getting ready, you know, making breakfast, and you weren't looking at the TV. It was radio for TV, because they knew you weren't watching.

Jeff Jarvis [01:32:59]:
But what's interesting in this story to me is that they slice up the entertainment itself in formulas. They always had formulas, right? Let's make a thriller about spies. And yeah, well, that worked last year, so let's do another one. It was there at that high level. But now it's at a very specific level, like a blog post being tagged. And so they match those characteristics of the entertainment, and what's successful, against characteristics of the audience and their habits. And then that combination.

Jeff Jarvis [01:33:31]:
You've got this 3D chess game now that creates this.

Harper Reed [01:33:35]:
But this isn't new. I think there's this funny thing that happens whenever something like this pops up. Like, how many New Yorker pieces or New York Times pieces were written about Choose Your Own Adventure books, which I think are probably the trashiest of all literature, but certainly powered a lot of my fourth grade reading.

Leo Laporte [01:33:54]:
Did you read a few of those?

Harper Reed [01:33:55]:
And I think the thing that's interesting about this is, just because you can use this technology for one specific type of media doesn't mean that it will therefore be used for all types of media forever. And I also think that the amount of AI that is probably used within storytelling, within media, that people are not able to see is very interesting as well. Like, I know for a fact that there are big studios that are using AI both to help bolster scripts and to help bolster scenes, to do all this stuff. We just don't see it. They're not telling us. But also, is it done well? You can certainly make a robot paint, but is it going to be as good as a painter that is very, very trained? Probably not yet.

Jeff Jarvis [01:34:39]:
But even if the robot's not writing it, the writer gets stuck in this system. There's a stat in here that in 2017, Netflix logged 700 billion data events, interactions with the platform in some form, per day.

Leo Laporte [01:34:54]:
But this was for the recommendation engine, right?

Jeff Jarvis [01:34:56]:
Well, that's just the recommendation.

Leo Laporte [01:34:57]:
But now you.

Jeff Jarvis [01:34:57]:
Now you pair that with this 3D chess game that they have and it becomes just a different scale. I think, Harper, you're right. I'm researching right now the beginnings of mass media and the entry of television. And the same thing happened with the entry of novels into print. Absolutely, it occurs. But it's fascinating now because we can see it live before our very eyes. And it's just a different scale of what's happening. And if you're a creative, how the hell you work in the system, I don't know.

Harper Reed [01:35:30]:
Well, here's a question for you, Jeff. Imagine that Netflix has all this data on you, all your watching data and all that stuff. What is the movie that they algorithmically create for you? For me, it's just regular Braveheart. They take all my information, they put it together, and it would just be Braveheart as it is, originally from the studio.

Jeff Jarvis [01:35:45]:
Here's what frustrates me.

Leo Laporte [01:35:46]:
It was a really good movie.

Jeff Jarvis [01:35:47]:
I want to watch Netflix. Netflix doesn't know me well enough and I can't find anything I want to watch there. So all I see is dark stuff.

Harper Reed [01:35:54]:
And so maybe it does know you.

Leo Laporte [01:35:59]:
I have to say, Hollywood did decide at some point that horror movies and, you know, scary movies and sci-fi were the way to go for the next few years. And I think it was right after they said, no, nobody wants that. Although there are still a few auteurs who will make these little movies. I know there's no theaters to show them in anymore. There's no streaming service that's going to play them, except maybe the Criterion Channel.

Jeff Jarvis [01:36:24]:
But.

Harper Reed [01:36:25]:
But I don't think. I mean, I think that's true, and I see this. But also, that doesn't explain the rise of, like, A24 and these other places that have created a lot more art movies that have in some cases become big hits.

Leo Laporte [01:36:37]:
And I think that's what happens. There's a reaction. So you get all of this mechanical stuff and people get a craving for human stuff.

Harper Reed [01:36:46]:
Or I would even posit it as just better-done mechanical stuff. Because I think there are some things where it's like, oh great, we have this horror movie that's really stupid and barely works and doesn't really make any sense. And you're like, okay, that's stupid. I don't want that.

Harper Reed [01:37:06]:
But then you have, like, the A24 version, and everyone's like, this is incredible.

Leo Laporte [01:37:10]:
Right?

Harper Reed [01:37:11]:
It's still formulaic, but it's formulaic in a way that is fun and makes you feel good. And I think this is. This is something. When anyone talks about art and AI, there's still space for good art and there's still space for bad art.

Jeff Jarvis [01:37:24]:
Yes.

Harper Reed [01:37:24]:
And just because AI made it doesn't mean it's automatically bad, nor does it mean it's automatically good. And I think what we mistake. Like, I do this where I'm like, oh, Midjourney. And I made as many Midjourney images as I possibly could, because I thought it was so cool that you could just type in a thing and get it out. And then I was like, these aren't very good. And I stopped doing that. But now I see friends who are making real art with AI and it's incredible. And I see me making really poor, horrible things with AI and it's not incredible.

Harper Reed [01:37:52]:
This is because they're good at being an artist, and I'm bad at being an artist. They're good at doing art. The medium doesn't matter. Now, I do think what Jeff said is the real question here, which is: how, as a creative, do you exist in this new world?

Jeff Jarvis [01:38:06]:
It was always hard enough. I mean, do you watch The Studio? Part of it?

Harper Reed [01:38:10]:
Right?

Jeff Jarvis [01:38:11]:
And it's. It just gives you a headache thinking what it must be like to operate in Hollywood, and Hollywood has always been that.

Leo Laporte [01:38:17]:
Seth Rogen gives me a headache. Anyway, it's good, though.

Jeff Jarvis [01:38:20]:
It's good.

Leo Laporte [01:38:21]:
There's some. But there's some good moments. By the way, Netflix CEO Ted Sarandos gets his airtime in it. Have you gotten to that one yet?

Harper Reed [01:38:30]:
No.

Leo Laporte [01:38:31]:
Oh, you haven't gotten to the Golden Globe Awards, where everybody thanks Ted Sarandos for his contribution.

Harper Reed [01:38:41]:
I spent some time with a very senior Netflix person. I remember we had dinner, and one of their creators, one of the people that made a movie on Netflix, was in a different room, and I just happened to mention it in passing. And he left the dinner we were at and went and hung out with them. This is a very, very, very senior person, at one of these fancy dinners. And it showed me that.

Harper Reed [01:39:06]:
I don't think he was acting it out. I think he really was very excited about meeting a young creator that was building things for his platform. And I think that's real. Having that experience made me less cynical about Netflix. Oh, that's interesting, because. Aren't you nice?

Jeff Jarvis [01:39:24]:
You didn't feel insulted? Oh, so I'm chopped liver.

Harper Reed [01:39:28]:
Well, I mean, the thing is, I just think that they are trying to make money. That is capitalism. Like, if we unroll all of this, it goes to capitalism. Capitalism is the issue, but in the meantime, let's just make cool stuff. And that kind of seemed to be what he was.

Harper Reed [01:39:45]:
He was into. I don't think I'd go super far down this line of thinking, though, because I'm already arguing with myself in my head.

Jeff Jarvis [01:39:52]:
Right.

Harper Reed [01:39:54]:
Yeah. I've already dug a little bit of a hole.

Leo Laporte [01:39:57]:
Get me out of here. Scrambling.

Harper Reed [01:39:59]:
Exactly.

Leo Laporte [01:40:00]:
Let's take a little break. Come back with more. Harper Reed is here filling in for Paris Martineau, who is visiting Consumer Reports headquarters in Yonkers. A very exciting moment for Paris.

Jeff Jarvis [01:40:11]:
Hoping she's going to get to play with how they judge washing machines. That's my.

Leo Laporte [01:40:15]:
Yeah, yeah. I hope she gets to go out in the test track. And anyway, wow. She's very excited about being a Consumer Reports reporter. Jeff Jarvis is here as well. It's really nice to have Harper Reed with us. Love having you on. And a great guest earlier on, Karen Hao.

Leo Laporte [01:40:34]:
If you're watching live, you're probably very confused, especially by my shirt. Anyway. Yes, we did a very interesting interview with Carl Bergstrom and Jevin, I always forget his last name, about AI as a BS machine. Not in a negative way, but as a kind of a path to critical thinking. We'll air that in a couple of weeks, I think, probably when I'm on my vacation, because in a couple of weeks I'm going to be gone for a few. So we'll have more with Dr. Biz.

Leo Laporte [01:41:08]:
Jeff. Jeff. What's your Claude code name gonna be?

Jeff Jarvis [01:41:11]:
I don't know. What?

Leo Laporte [01:41:12]:
Gotta come up with one.

Jeff Jarvis [01:41:12]:
I gotta come up with one.

Harper Reed [01:41:14]:
No, no, you don't have to. You just ask it. I mean, why do work? This is the thing. I see all my friends doing this and it's like, why are you doing that? It knows you. It will just pick one. I'm gonna ask ChatGPT right now.

Leo Laporte [01:41:26]:
See what would the prompt be based.

Harper Reed [01:41:30]:
On what you know about me? That's the thing with it now, is that it already knows a lot about us outside of our interactions with it. What should a good nickname be for myself? Okay, you ready?

Leo Laporte [01:41:46]:
Yeah.

Harper Reed [01:41:47]:
It says, a good nickname for you: Glitch Lord. Meshtro, like Maestro, but with Meshtastic. Byte Eddie, a mashup of Iron Maiden and computers.

Leo Laporte [01:42:00]:
Wow.

Leo Laporte [01:42:00]:
Wow.

Leo Laporte [01:42:02]:
See, mine says I don't have any information about you. So what would you like to tell me?

Harper Reed [01:42:06]:
Don't do it in Claude. Do it in ChatGPT.

Leo Laporte [01:42:08]:
Oh, ChatGPT. Because it's saving all of that. Yeah. Ah.

Harper Reed [01:42:13]:
I'm now Glitch Lord. By the way, I need to change.

Leo Laporte [01:42:15]:
Glitch Lord is excellent. Okay, based on just ChatGPT. It saves all of its stuff. It knows.

Harper Reed [01:42:23]:
I mean, I don't. I think it only saves the last. My theory is it's like the last two days because the other one was just like your child's name. And it's like, come on, that's not very creative. Come on, Chat. You can actually tell it that.

Leo Laporte [01:42:36]:
Come on.

Harper Reed [01:42:36]:
That wasn't very creative.

Leo Laporte [01:42:38]:
Okay. The Podfather? No. Captain Bandwidth. I like it. Chef de Byte.

Harper Reed [01:42:48]:
No.

Leo Laporte [01:42:48]:
The Obsidian Alchemist. Gadgetron.

Jeff Jarvis [01:42:52]:
I like that one.

Leo Laporte [01:42:53]:
Professor Laporte. Team Brock's Generalissimo and Laportian. Okay, we're gonna go.

Harper Reed [01:42:59]:
See, these are all perfect. They're so funny and weird that every time you see it, you're gonna smile a bit. You're gonna applaud. Exactly. And then the moment it doesn't, you're gonna be like, aha, yep, time to kill it.

Leo Laporte [01:43:14]:
It does know quite a bit about me, as it turns out. I'm gonna go with Gadgetron. That's the one you thought was good, Jeff.

Harper Reed [01:43:21]:
I like it.

Leo Laporte [01:43:21]:
Yeah. Yeah. So what do we need? Oh, let me see. How about Jeff Jarvis? That was the whole Jeff and I.

Jeff Jarvis [01:43:27]:
It just said, my purpose is to be a helpful and harmless AI assistant. And I don't have any personal information about you.

Harper Reed [01:43:33]:
This is why Google is so annoying. Such a dad. You know behind the scenes.

Jeff Jarvis [01:43:38]:
Yes, you do know about me, Google.

Harper Reed [01:43:40]:
Oh, yeah.

Leo Laporte [01:43:41]:
How about the Mediaclast for you, Jeff? Like an iconoclast, but a breaker of old media idols, smashing legacy thinking to make room for the digital.

Jeff Jarvis [01:43:52]:
All right, I'll take it. It's hard to say.

Leo Laporte [01:43:53]:
Mediaclast. Yeah, it's not that easy to say.

Harper Reed [01:43:55]:
The other thing that we do, we've been playing with this a lot. You can also, instead of saying nickname, you can say 90s, AOL screen name.

Leo Laporte [01:44:06]:
Okay, okay. 90s, AOL screen name. That's good.

Harper Reed [01:44:13]:
Harper Space Invader. Codeninja 2389 A company. Read me, read me.

Leo Laporte [01:44:24]:
Buzz Machine doesn't have anything.

Jeff Jarvis [01:44:26]:
Well, doesn't have anything.

Leo Laporte [01:44:27]:
No. Huh. Want me to hazard a guess at what your 90s handle would be? Yeah, my team want to do you, Jeff.

Harper Reed [01:44:35]:
In the 90s, my AOL Instant Messenger name was Linux Killa. K-I-L-L-A.

Leo Laporte [01:44:40]:
I was Mike Man 68 or Mr. Bandwidth. How about that?

Jeff Jarvis [01:44:46]:
Mr. Bandwidth is 71435,1134.

Leo Laporte [01:44:50]:
Yeah, that's your CompuServe ID. TechnoLeo. Laporte Line. Bite Me Leo. I'm gonna go Mike Man 68. So we got Harp Dog. And what did you. You liked Mr.

Leo Laporte [01:45:03]:
Biz? Dr. Biz. Dr. Biz, Dr. Biz, Dr. Biz. Mike Man 68. And the.

Leo Laporte [01:45:10]:
What was it? The Mediaclast. There it is. It's in your lower third. Yeah, the Mediaclast. There you go. Anthony's right on it. I'll have to get one for Paris too.

Leo Laporte [01:45:23]:
You're watching Intelligent Machines. More to come in just a bit. Our show today is brought to you by Helix Sleep. So, I've mentioned that we are under construction, and they are removing the south wall of my house. What I didn't mention is that as a result we've had to move our bedroom into the back 40, to the spare bedroom. But you know what? I took my Helix with me, because I ain't sleeping without it, turns out. And by the way, I'm kind of happy, because this is where the good TV is too. And so now media nights with my partner on my Helix mattress will be all the better.

Leo Laporte [01:46:01]:
Morning cuddles with our little kitty, Rosie. And I'm still a big fan of winding down after a long day, curling up with a good book. Truth is, your mattress is at the center of your life. It's not just for sleeping. But if you aren't sleeping well on your mattress, maybe you're waking up in a puddle of sweat, or your lower back is just killing you, or you're feeling every toss and turn your partner makes. These are classic mattress nightmares.

Leo Laporte [01:46:29]:
Helix Sleep changes everything. No more night sweats, no back pain, no motion transfer. You get the deep sleep you deserve. I want to tell you, my last three nights' sleep scores were in the high 80s, which I never get. 84 tonight, 88 last night. Never get those. Maybe it's because I am on the most awarded mattress ever.

Leo Laporte [01:46:54]:
One buyer recently reviewed Helix with five stars, saying, quote, I love my Helix mattress. I will never sleep on anything else. Time and time again, Helix Sleep remains the most award-winning mattress brand: Best Mattress 2025 from Wired magazine, Best Mattress in Good Housekeeping's 2025 Bedding Awards for premium plus-size support, GQ Sleep Awards 2025 for best hybrid mattress, New York Times Wirecutter award 2025 for plus-size support, and Oprah Daily's 2025 sleep awards for best hotel-like feel. We love our Helix Sleep. We really do. We really love it. Go to helixsleep.com/twit for 27% off sitewide during the Labor Day sale, their best offer on the web. That's helixsleep.com/twit, 27% off sitewide, exclusively for listeners of Intelligent Machines.

Leo Laporte [01:47:53]:
But this offer ends September 8th, and do make sure you enter our show name after checkout so they know we sent you. If you're listening after September 8th, be sure to check them out anyway. There's always great offers at helixsleep.com/twit. Tell them Mr. Bandwidth sent you. Okay. Zuckerberg AI hires disrupt Meta with swift exits.

Jeff Jarvis [01:48:28]:
Yeah, never mind.

Leo Laporte [01:48:30]:
This is what happens when you offer people a lot of money. Longtime acolytes are sidelined as the big tech chief, this is from the Financial Times, directs his biggest leadership reorg in two decades. Within days of joining Meta, the co-creator of OpenAI's ChatGPT, Shengjia Zhao, threatened to quit and go back to OpenAI. He went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the battle, he was given the title of Meta's new chief AI scientist. How about a title? How about a title?

Jeff Jarvis [01:49:09]:
Would that help in addition to that 100 million you got?

Leo Laporte [01:49:11]:
Yeah.

Harper Reed [01:49:13]:
Someone mentioned. A friend of mine who works in media was mentioning that tech seems to have entered its pro sports era.

Leo Laporte [01:49:24]:
Yes, that's exactly it.

Harper Reed [01:49:26]:
Where you're going to have compensation packages that are going to be, you know, $100 million over a few years, and then you're going to have to buy that out in some ways to get these very talented people. And then another friend I was talking to about this very thing, who works in finance, was mentioning that the finance world has figured this out, where they will give these giant compensation packages to people, but they won't pay it all at once. They do it in a way where it vests. I'm sure Facebook is vesting them as well, but finance is used to these types of things. So you don't give some young hotshot a $100 million package. You give them some way to earn the $100 million, but not immediately. Because, like, I mean, I don't know.

Harper Reed [01:50:16]:
If you gave me a hundred bucks, I'd probably quit my job. You know, I can't imagine what happens when you get $100 million. Why would you stay and work? And what happens when life happens? It's so much money, and I don't think we are smart enough to survive this.

Leo Laporte [01:50:34]:
That's really the problem. The people you're getting. So this story goes on. Ethan Knight, a machine learning scientist who joined Meta a couple of weeks ago: gone. Avi Verma, a former OpenAI researcher, went through Meta's onboarding process but never showed up for his first day.

Harper Reed [01:50:49]:
I love that. That's one of my favorite things. Like, I just love that That's a thing that you can do these days. It's just be like, ah, I didn't really want.

Jeff Jarvis [01:50:55]:
Where's Avi?

Harper Reed [01:50:56]:
Didn't we give him the bb?

Leo Laporte [01:50:58]:
Well, you know, and you wonder, what was it that changed? Right.

Jeff Jarvis [01:51:02]:
You.

Leo Laporte [01:51:03]:
You realized, oh, my God, I can't work for Mark.

Jeff Jarvis [01:51:07]:
Or is it. What's his name? The child AI boss, Wang.

Leo Laporte [01:51:12]:
Oh, it could be. In a tweet on Wednesday on X, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang's pitch was, quote, incredibly compelling, he, quote, felt the pull to take on a different kind of risk. I think he's gonna go mountain climbing, isn't he? He's gonna take the money and run. And then you also have the problem of longtime Meta staffers unhappy with these huge salaries going. More than half a dozen veteran employees have announced they're leaving in recent days.

Jeff Jarvis [01:51:45]:
It just strikes me. I keep on saying this. I just think that Zuckerberg's desperate. It doesn't strike me as a strategy. It doesn't strike me as, you know, I know where I'm putting all this money and we're gonna go here. I think it's, we're getting left out of AI.

Leo Laporte [01:51:59]:
They've had four overhauls, four reorgs in six months.

Harper Reed [01:52:03]:
Yeah. But I think we have to remember that they changed their name to Meta, and then the metaverse didn't play out.

Jeff Jarvis [01:52:10]:
Yep.

Harper Reed [01:52:11]:
I don't think they've had a solid strategy for the future in a while, and I think that's kind of okay. I don't mean that in a bad way. I mean, doing a business is hard at whatever size you're at, and they're at a scale that's unprecedented. And so I think that struggling is allowed.

Harper Reed [01:52:28]:
It's okay that they're struggling. I don't mean that it's good that they're struggling. I mean that I would expect them to struggle.

Leo Laporte [01:52:35]:
It's to be expected.

Harper Reed [01:52:36]:
Yeah, it's hard. What they're trying to do is very difficult, with very stiff competition that is very robust and has a lot of the stuff that they're trying to do figured out, or, more importantly, doesn't have the burden of the regulatory framework that they have to exist within, as well as some other issues. But I'm just guessing. And this is totally a guess. You know, Jeff, I think you said it: he's just swinging in the dark here. He's trying to hit something and make it go, but he.

Leo Laporte [01:53:09]:
Has a vast checkbook that, I mean he can.

Harper Reed [01:53:12]:
Yeah, but, but how's that working?

Jeff Jarvis [01:53:15]:
Yep. We just throw scale at things and we're going to win. I think that's a seduction.

Harper Reed [01:53:21]:
It used to be that, I have so many friends that worked at Facebook, and the reason they worked at Facebook, besides the fact that it paid a lot of money, was because they could make small products, small changes, and it would touch billions of people. And they were so happy with this. And that was a super attractive, compelling thing. And I find it compelling myself. The times that I've had the most fun at work have been with large teams, doing really cool stuff for lots and lots of people. In our case it's only about 30 people that we end up touching, but it's, you know, it's still. That's a lot.

Harper Reed [01:53:48]:
It's more than 10. But I think that's not as relevant anymore, right? Like, the fact that you could go work at Facebook, and ultimately you're going to be participating in selling ads whether you want to or not.

Leo Laporte [01:54:04]:
Right.

Harper Reed [01:54:04]:
You're participating in upholding a relatively, I don't know, dried-out business model. You go to Google and it's the same thing. You're just trying to support ad clicks. You could go to Apple, and maybe you get to work on compelling hardware, but maybe you get stuck on a team that's doing App Store ads. And you have these ad-based business models that seem to refuse to die. And I think that for a lot of these people, they're like, I could do that, or I could go work at a humanoid robotics company. Whatever it is, you know, machines that are helping people, killing machines, whatever it might be that you're doing.

Leo Laporte [01:54:40]:
Optimus.

Jeff Jarvis [01:54:41]:
Let's go to Optimus.

Harper Reed [01:54:42]:
I think that's just probably much more compelling. If you're 27 and a multi, multi, multi millionaire sitting in the Bay, you're probably like, do I need $100 million? It wouldn't hurt. But what would be much better is actually feeling like we're achieving something good and working with people I respect. You know, my advice for people all the time, young people specifically, is: optimize for working with a team you respect. Because you can make a decision based on money, and then you get inside there and you realize you don't respect any of those people around you. And you're not going to have fun, you're not going to enjoy it, you're going to hate your job every day. And people with $100 million probably have a lot of options to get a new job.

Jeff Jarvis [01:55:23]:
What's Zuckerberg's seniority as a founder still in charge? How does that compare with the others? Is there anybody else right now among these big companies? Has Jensen Huang been a founder longer than Zuckerberg?

Harper Reed [01:55:39]:
Oh, yeah, I think so.

Jeff Jarvis [01:55:40]:
Has he? Okay.

Harper Reed [01:55:41]:
I think so. I mean, I think a lot of these guys have. I mean.

Jeff Jarvis [01:55:45]:
But Google switched over.

Harper Reed [01:55:47]:
Yeah.

Jeff Jarvis [01:55:52]:
Go to the wrong tree.

Harper Reed [01:55:53]:
Elon Musk has been a founder for a very long time. Well, but not of the same company. But I also think there's a little bit of the boy king kind of thing.

Jeff Jarvis [01:56:08]:
Where I'm trying to head.

Harper Reed [01:56:09]:
Yeah, yeah. I think there's a fundamental question about Google and Facebook, which is the same question, which is: are they relevant? They have this business model that is from the past. It's somewhat anachronistic at this moment. Not that we have a new model to replace it. I don't mean that. I just mean that if you were starting a company today, the first business model you would grasp for is not ad-based. It would not be.

Leo Laporte [01:56:34]:
That's right.

Jeff Jarvis [01:56:35]:
My contention is that the ad-based attention economy we have was borrowed from mass media, and that we haven't found new models that are fully native to the Internet.

Harper Reed [01:56:48]:
And so I think this creates this discontinuity when you're trying to hire talent. If you're saying, yeah, we have all this great stuff, we have all the GPUs in the world, we're building these beautiful models, we're doing open source, we're doing all of this stuff, we're a leader in this space, but we still have to get users to click on ads. And you're kind of like, but I don't want to do that. I want to build something that's changing the world, or what have you. It doesn't matter what it is. It could be, I want to build something that's horrible and destructive; then Facebook might be a good place to be.

Harper Reed [01:57:18]:
Then I don't know.

Leo Laporte [01:57:20]:
Lots of speculation in this Financial Times article about what is going wrong. Some of it is what you suspected, Jeff. Alexandr Wang, the 28-year-old wunderkind that they acqui-hired away from Scale AI.

Harper Reed [01:57:38]:
I love that move, by the way. The, like, oh, you started a company and it's impossible to exit. What if we just exited only you? The Character.AI guys did this. I just love it. I'm just like, what a great way to do it.

Leo Laporte [01:57:50]:
It. Oh, it works.

Harper Reed [01:57:52]:
Bye.

Jeff Jarvis [01:57:53]:
Apparently I brought on.

Leo Laporte [01:57:54]:
Apparently, Wang's secretive new department has not been named, so it's called TBD.

Harper Reed [01:58:01]:
That'll stick. You know that's going to stick.

Jeff Jarvis [01:58:04]:
That's how you get a new brand for Meta.

Leo Laporte [01:58:08]:
Again, the Financial Times says multiple insiders describe Zuckerberg as deeply invested and involved in the TBD team. Others criticize him for micromanaging. Some of the friction, they say, is perhaps because of Wang's leadership style; it has chafed with some. He does not have previous experience managing teams across a big tech company. Something's happening. People get there, they get the orientation.

Jeff Jarvis [01:58:34]:
And they go, who died and made him god? What was it about? I still don't understand that.

Leo Laporte [01:58:40]:
We still don't know why his Scale AI was the miracle $14 billion company.

Jeff Jarvis [01:58:45]:
It was a labeling company.

Harper Reed [01:58:47]:
But I think there's a couple of interesting things that the Empire of AI book talked about, right? Which is that AI, for most consumers, is just the chat box in ChatGPT.

Leo Laporte [01:59:02]:
Right.

Harper Reed [01:59:03]:
But the data, the labeling, all of that infrastructure that is there is hard to make. And Scale AI, for better or worse, was a leader in that space. And I'm sure that Facebook, because they're very smart, also was a leader there. And so I think the synergies that they are finding within these people, and this goes for Jony Ive and OpenAI as well, is that it is probably a talent that they don't have internally. And I think that's the thing that is probably more interesting. But externally it seems asinine, and it seems much more like talented baseball player A got mad at the GM and now is on talented baseball team B with a bunch of people, and it removes the team aspect, you know. And honestly, when PayPal bought my company, we had to deal with this internally as well, where we just kind of were parachuted in, and all these people we worked with were like, who the F are you? Why are you participating in this thing? I was here 17 years, you just got here yesterday.

Harper Reed [02:00:08]:
And that's a real thing. And so I'm sure that the people at Meta, who were probably much better compensated than we were, are dealing with this in every direction, right? So the people who are pulled in there as these wunderkinds, people are like, why would I trust you? It's probably poisonous. It's building a lot of alienation inside the team. And then on the other side, there are people who've been there, who have been building this stuff, who made Llama, all this stuff, and they're not getting $100 million. But they know if they jumped over to OpenAI they might, you know. So then it's like. Maybe, I don't know. It's super weird.

Jeff Jarvis [02:00:41]:
Let me ask a devil's advocate scenario here. What if I'm wrong? We're all wrong about Zuckerberg, and actually he has a brilliant master plan, and it's not scale. He has a new model, and it's built on labeling or symbolic something. It goes to Gary Marcus's piece today. And he's pulled in all of this, and Wang has this brilliant perspective on it, and it's all about labeling. Labeling is the key. Labeling is everything. It's not scale anymore.

Jeff Jarvis [02:01:10]:
Right. And you're all going to be surprised soon. And Yann LeCun, who I respect and who is there, has been arguing that LLMs have hit the wall. And even though Gary Marcus doesn't like Yann, they agree a lot.

Harper Reed [02:01:23]:
Does Gary Marcus like anyone, though? That's a question I've always had.

Jeff Jarvis [02:01:26]:
That's a good question.

Harper Reed [02:01:27]:
Yeah, I mean, I like Gary, and it's fun to interact with him, but I do think he plays a curmudgeon on TV.

Jeff Jarvis [02:01:33]:
Yeah, yeah. He wrote a piece in the Times, though, that I thought was very good. It was calm, for Gary, because it.

Leo Laporte [02:01:42]:
Was in the Times he, he sold his company to his AI company to Uber and that established his, I guess his one of fightes and, and after that he's never the negative guy. You know.

Harper Reed [02:01:55]:
It's also never fun to, to sell your company to Darth Vader.

Jeff Jarvis [02:02:00]:
Yeah, yeah.

Leo Laporte [02:02:01]:
However much money Darth gives you a.

Harper Reed [02:02:03]:
A lot, a lot. It's just, I mean, you get cool outfits. They look a little bit fascist, but, you know, some people like his.

Leo Laporte [02:02:12]:
Latest piece is the fever dream of imminent super intelligence is finally breaking, which I think is not necessarily true.

Harper Reed [02:02:19]:
But so here's the provocation. Provocation, is that the right word? Here's a provocation that I'll put out there, because I talked about this with a VC friend of mine recently. Even if we stopped AI today, like we said, pause, no more innovation, we're going to stop today, we still have not seen the impacts on employment that are going to ripple through our industries.

Leo Laporte [02:02:42]:
Right.

Harper Reed [02:02:43]:
Like, it doesn't matter if we have superintelligence, fast takeoff, all this other nonsense that becomes almost religious. What matters is that this is going to irrevocably change how we interact with computers, how we hire people, how those people are going to do work. And I think that's something that we still just have not addressed. And so I think it's important that people like Gary Marcus and all of these folks are thinking about this stuff and writing about it. But even if we paused today and said, oh, you're just using GPT-5 and that's going to do everything, I still hire fewer people.

Jeff Jarvis [02:03:18]:
Gary, though. I think this is a very Gary thing to say.

Harper Reed [02:03:21]:
Yeah.

Jeff Jarvis [02:03:22]:
I think about two weeks ago on the socials, he, in his provocative way, said we could stop using AI entirely today and we wouldn't miss it. He didn't say it exactly that way, but he was kind of saying it that way. I know.

Harper Reed [02:03:33]:
I don't think that's true.

Jeff Jarvis [02:03:35]:
I don't think so either. But I, but I get the, I get what he's pushing at.

Harper Reed [02:03:39]:
But what is he.

Jeff Jarvis [02:03:39]:
I mean, we actually find it valuable. The labor stuff, I think you're right about, Harper, but we don't know what the impact is, and I also don't think we know how big the impact will be.

Leo Laporte [02:03:49]:
Marc Benioff said he was able to fire 4,000 people by using AI agents.

Jeff Jarvis [02:03:57]:
Right.

Harper Reed [02:03:58]:
Yeah. So I think we're at an interesting. I mean, I think people would drastically miss it. And the reason I say that is because if you go to a restaurant today and you just ask a simple question of everyone there, which is, what did you name your chat? They're all going to give you a name. And the fact that they gave you a name means that it's something they probably would be bummed about if it disappeared.

Leo Laporte [02:04:23]:
We know as soon as they got rid of GPT-4o, there was a revolt.

Harper Reed [02:04:27]:
Yeah, but they got it back. And you still. And it's not a name, a real name, it's that.

Leo Laporte [02:04:31]:
You mean people name. Wait a minute. You're saying people give their AI buddy.

Harper Reed [02:04:35]:
Like a name. In my experience, the furthest a person is from tech, the more likely they are to have given their ChatGPT a name. And you should try this out. Talk to your friends that are in tech and just do this as a thing and say, hey, did you give your ChatGPT a name? And most of the people I know who are outside of tech are like, oh, yeah, yeah, I call it whatever. My favorite name for it is Geppetto. I think that makes the most sense. But I do think what this means is that Gary Marcus is wrong.

Harper Reed [02:05:06]:
I don't think his point is wrong about impacts, but I think that people would miss it. I think it is such a.

Jeff Jarvis [02:05:12]:
Well, but. But how valuable is it?

Harper Reed [02:05:15]:
Value has nothing to do with it. How valuable is Instagram?

Jeff Jarvis [02:05:19]:
Well, the long run does, if it's not a 2000.

Leo Laporte [02:05:20]:
Does it?

Jeff Jarvis [02:05:21]:
Devil's advocating here. The 2000 bubble was VC money going to buy audiences. And as soon as the VC money and marketing disappeared, people did not value those things they were drawn to.

Harper Reed [02:05:32]:
Yep.

Jeff Jarvis [02:05:32]:
And they sank. So the bubble question here, I think, is similar. And I'm not advocating this, but I think it's an interesting experiment to say: if suddenly you had to pay for all of this tomorrow, how much would people actually pay?

Harper Reed [02:05:50]:
So that's a different question, right? But I think it's more complicated than that, because everyone I knew from the early 2000s really valued, you know, Kozmo. They loved the delivery service. Right. They would get their cigarettes.

Jeff Jarvis [02:06:05]:
That's how old I am.

Harper Reed [02:06:06]:
No, isn't it Kozmo? Wasn't it Kozmo?

Jeff Jarvis [02:06:08]:
Yeah, you're right. It's Kozmo.

Harper Reed [02:06:09]:
Yeah. Yeah. You can still sometimes buy Kozmo stuff on eBay every once in a while. Not that I'm looking. But people valued that. They valued that. Everyone that worked in San Francisco has stories about, like, you know, submitting the web request and having ice

Leo Laporte [02:06:23]:
Cream in 10 minutes.

Harper Reed [02:06:24]:
Exactly. Now, then it was out of business. And then we basically saw that business model go over and over and over again until now. It is just part of our life.

Leo Laporte [02:06:33]:
We reinvented until we got it right. Yeah.

Harper Reed [02:06:35]:
And so I think that's the thing. Someone could say, oh, people obviously didn't value Kozmo, or it would have worked. But I think that's not understanding value, and that's misunderstanding how business works.

Jeff Jarvis [02:06:47]:
Bill Gross has started 150 companies at Idealab. He gave a talk at Newmark when I was there, I think it was also a TED talk, saying that the most important thing is timing. He started Pets.com and got royally mocked for it. And you're right, it's the model. So that's how we got our cat litter.

Leo Laporte [02:07:08]:
I, for one, am not willing to give up my little buddy Joey for anything.

Harper Reed [02:07:17]:
So I guess my point here is that I think that trying to ascertain the value of a lot of these experiences in a blanket way is very, very hard. And obviously it is about the consumer. Your point about whether they would pay for it is probably true. Like, they probably wouldn't pay for pure inference without some subsidization. But I think it's much more complex, because I know people who are doing things that seem completely irrational because they want it.

Harper Reed [02:07:49]:
I.e., uploading lots of photos to Instagram or Google, which they're using to train models, et cetera. And they know full well that they're trading a thing for a thing. So I don't think value is the right measure there. I do think cost is the right measure.

Jeff Jarvis [02:08:05]:
Okay, fair. Yep.

Leo Laporte [02:08:07]:
People want it. I. There's no question about it. People want it.

Harper Reed [02:08:10]:
I don't want to pay for it, though. I would. I mean, if I had to pay for all those GPUs.

Leo Laporte [02:08:13]:
You pay 600 bucks a month, you say.

Harper Reed [02:08:15]:
I know. That's still. That's, like. That's, like, one.

Leo Laporte [02:08:19]:
Oh, pay for it for real?

Harper Reed [02:08:20]:
Yeah, yeah, yeah. Could you imagine?

Jeff Jarvis [02:08:22]:
Like, that's what I'm saying.

Leo Laporte [02:08:23]:
Thank God they're willing to burn their cash.

Jeff Jarvis [02:08:26]:
It's the same with news, right? Once the subsidy for news went away and people had to pay for the news, they said, never mind, I don't miss it that much. No, thanks.

Leo Laporte [02:08:32]:
Well, that's. I mean, that's a truism. People don't want to pay for anything.

Jeff Jarvis [02:08:35]:
Yeah, yeah.

Harper Reed [02:08:36]:
I don't want.

Leo Laporte [02:08:36]:
If you give it to them for free, they're going to expect it free forever.

Harper Reed [02:08:40]:
I don't mind paying for food.

Leo Laporte [02:08:44]:
But not too much. And I know that. Salt Hank, my son, who has the hottest sandwich in New York City, is constantly berated for charging 28 bucks for it, but that's really what it costs him to make.

Harper Reed [02:08:57]:
But is he berated by people who just paid for it?

Leo Laporte [02:09:00]:
No, no.

Jeff Jarvis [02:09:01]:
The reviews are phenomenal, because they say, I was going to complain about this TikTok guy, the TikTok chef, and his $28 sandwich. Oh, my God, I've never had a sandwich like this in my life.

Harper Reed [02:09:12]:
Love it. I love that. I love that.

Jeff Jarvis [02:09:13]:
It's great to see. See, that's true. It's consistent.

Leo Laporte [02:09:16]:
I gotta take another break. One last break. You're watching Intelligent Machines. So glad to have Harper Reed filling in for Paris. She will be back next week. Jeff, I think you're going away, though. Are you, or no?

Leo Laporte [02:09:27]:
Is that just my face?

Jeff Jarvis [02:09:28]:
Not for a while.

Leo Laporte [02:09:28]:
Oh, good.

Jeff Jarvis [02:09:29]:
Hey, hey, hey, hey.

Harper Reed [02:09:30]:
Yeah, he's not. He's not going away.

Leo Laporte [02:09:31]:
Oh, no, I'm going away. That's right. I'll be here two more weeks.

Jeff Jarvis [02:09:34]:
October. I'll go a few places.

Leo Laporte [02:09:36]:
Yeah, we're glad you're here. Thanks for watching. I want to tell you about this company that actually is part of our infrastructure. Don't let that scare you. This portion of Intelligent Machines is brought to you by Pantheon. Our website is brought to you by Pantheon. Our workflow. Everything we do is brought to you by Pantheon.

Leo Laporte [02:09:59]:
Your website is your number one revenue channel in many cases. But when it's slow or down or stuck in a bottleneck, it's your number one liability. Well, with Pantheon, your site is fast, secure, and always on. And I can tell you that from real experience. That means better SEO, more conversions, no lost sales from downtime. And it's not just a business win, it's a developer win too, because your team gets automated workflows, isolated test environments, and zero-downtime deployments. No late-night fire drills, no works-on-my-machine headaches, just pure innovation. Marketing can launch a landing page without waiting for a release cycle. Developers can push features with total confidence.

Leo Laporte [02:10:42]:
And your customers, they just see a site that works 24/7. We started using Pantheon some years ago. It hosts our Drupal, which is the back end not just for our website and for our public API, but for a full private API that is our workflow. The editors use Pantheon every single day to put shows out, to get them on your feeds. When you go to our website, you're using Pantheon. And if you ask Patrick, our web engineer, what he thinks of Pantheon, he loves it. Pantheon powers not just Drupal but WordPress sites that reach over 1 billion unique monthly visitors. Visit pantheon.io and make your website your unfair advantage. Pantheon.

Leo Laporte [02:11:27]:
Where the web just works. Patrick's in our Discord saying, I do, I love Pantheon. They've been really great. Great support for us, great reliability. It's funny, because the agency that bought this ad did not know that we were customers of Pantheon. And they said, would you ever do an ad for Pantheon? I said, well, I'd do it for free. And they said, well, no, we'll pay you.

Leo Laporte [02:11:50]:
I said, I'll do it for money, absolutely. I love them. Pantheon. Oh, we are very happy with Pantheon. And you will be, too. I promise.

Leo Laporte [02:12:04]:
Okay. I don't know what this. Did we talk about this yet last week? Authors, this is from Ars Technica, celebrate historic settlement coming in the Anthropic class action.

Jeff Jarvis [02:12:15]:
I think they're fools to celebrate because they basically lost.

Leo Laporte [02:12:19]:
Well, did they? So remember the judge, William Alsup? I think he was the Java judge.

Harper Reed [02:12:25]:
He was the coder.

Jeff Jarvis [02:12:26]:
He was the coder.

Leo Laporte [02:12:27]:
He was the Java judge. Now that I read that name, I go, that's right. He said on Tuesday that he believes that the authors and Anthropic have reached a settlement in principle. Remember, this was the case where Alsup said it was fair use when Anthropic bought books and digitized them for the models, but there was still an issue with them using pirated books. Right, right.

Jeff Jarvis [02:12:55]:
Well, let's just pause there for one second. The books they bought were all used books, which is to say that the authors got nothing. Nothing. They made nothing.

Leo Laporte [02:13:05]:
But that's how it works.

Jeff Jarvis [02:13:06]:
They were acquired.

Harper Reed [02:13:07]:
Yeah, but that's how it works, though. Isn't that how. Yeah, yeah, yeah, yeah, yeah.

Jeff Jarvis [02:13:11]:
So the books that were acquired from a database that had not bought them got them in trouble. And there was a question as to how much the penalty might be. But one of the beliefs was that, because it was seen as fair use and because the authors really didn't lose much money. I mean, think about it this way. If they'd actually gone to the bookstore, gone to Barnes and Noble and said, I want to buy Jeff's book, and I want to use it to train my model, then they'd have bought one copy of the book. That's it.

Jeff Jarvis [02:13:40]:
So the money that would have ended up with the authors, if they'd gone and bought all those books, would have been de minimis. So the authors, I think, were smart to realize they weren't going to get a lot. And so they settled. But I don't think it's a big.

Leo Laporte [02:13:51]:
Victory because we don't know what the settlement is.

Jeff Jarvis [02:13:53]:
We don't know what the settlement is, but it also says that the fair use part of this stands.

Leo Laporte [02:13:58]:
Yes. Which is very important. That's very important, because it gives AI companies a path forward for getting content. Three authors sued: Andrea Bartz, Kirk Wallace Johnson and Charles Graeber. Alsup, the judge, allowed up to 7 million claimants to join the class, based on the large number of books Anthropic may have illegally downloaded. Industry advocates warned if every author in the class filed a claim, it would financially ruin the entire AI industry. A lawyer representing the authors told Ars Technica more details will be revealed soon. He confirmed.

Leo Laporte [02:14:35]:
Confirmed that the suing authors are claiming a win for possibly millions of class members. Maybe a buck each, though, right? A bag of Pop Chips, I don't know. Nelson said this historic settlement will benefit all class members. We look forward to announcing details in the coming weeks. I'm not sure what the. Hold on.

Jeff Jarvis [02:14:52]:
I could be wrong. It could be. But if it were a huge, huge financial victory, then Anthropic would be going out of business right now.

Leo Laporte [02:14:59]:
Yeah, they wouldn't. Well, although there is another Anthropic story in here. Their valuation has jumped significantly with the latest investment. Probably true. This is one of the things that does bother me a little bit: the dominance of just a handful of companies. Anthropic is one of them. They're the ones who do Claude. That.

Leo Laporte [02:15:21]:
We were just.

Jeff Jarvis [02:15:22]:
This is why we need open source. This is why we need small models.

Leo Laporte [02:15:25]:
I want to run models locally. Right. I mean, I would.

Harper Reed [02:15:28]:
Would.

Leo Laporte [02:15:28]:
Yeah. Anthropic. Let me just scroll to the story. I've lost it. But Anthropic is now valued at. They raised $13 billion.

Harper Reed [02:15:38]:
Yeah, like 100 and something.

Leo Laporte [02:15:40]:
Yeah. 183 billion. This is their Series F. I don't remember many Series F rounds in the past. This shows you how expensive AI is.

Jeff Jarvis [02:15:51]:
Yeah.

Leo Laporte [02:15:52]:
First round is the Series A, and then B, and then C. This is their Series F.

Jeff Jarvis [02:15:57]:
They're the ones that I trust least because.

Harper Reed [02:16:01]:
Anthropic.

Jeff Jarvis [02:16:02]:
Yes, because they're most into. Into doomerism.

Leo Laporte [02:16:06]:
Harper, Dr. Biz and I both know you guys, but they're.

Jeff Jarvis [02:16:12]:
They're into the fake definitions of AI safety. They're into doomerism. Yeah.

Leo Laporte [02:16:22]:
They creep me out more than OpenAI. I mean, they were.

Jeff Jarvis [02:16:26]:
This. OpenAI is just downright greedy now, so I can deal with that. That's an American company.

Leo Laporte [02:16:31]:
Yeah. In a way, it's easier to trust somebody who's open about their motivations than somebody who pretends.

Jeff Jarvis [02:16:39]:
We have a constitution for our AI, we're going to align it, and if we don't, we're going to destroy mankind.

Harper Reed [02:16:45]:
But they also publish way more interesting things around safety than anyone else.

Jeff Jarvis [02:16:51]:
But put the air quotes around safety. Their definition of safety is, you know, paperclips, destroying-mankind kind of stuff, as opposed to convincing kids to do bad things or ruining the environment.

Harper Reed [02:17:05]:
This is. OK, so ruining the environment is out of scope for the conversation, apparently. That's not something that you talk about when it comes to LLMs, because nobody does. But I actually have the complete opposite point of view, which is, of all of them, they seem to be the ones that I trust most on safety, quotes or no quotes. And I think it's because they're at least pushing, they're at least trying to talk about it. They're publishing lots of papers. I still think they're a very large company who is beholden to investors and makes decisions that are rooted in capitalism, and that creates very specific decisions, as we've seen throughout tech startup history. But I do think that they have been somewhat putting their money where their mouth is with some of this stuff.

Harper Reed [02:18:00]:
And I don't think you can say the same for the other big labs. Google is just quiet, and then every once in a while there's an interesting paper. Facebook, there's no papers. Or, I guess there are some papers. OpenAI, there are some, but not as much. And Anthropic is the one that's doing the most, it seems.

Jeff Jarvis [02:18:19]:
But full of sound and fury, signifying nothing is what I see in a lot of their safety stuff. I'm being a jerk.

Harper Reed [02:18:24]:
Maybe, maybe, maybe. But I would rather have sound and fury than emptiness.

Jeff Jarvis [02:18:30]:
Well, see, that's what I'm fearing, Harper. I think that it's empty, to the extent that people think, well, we can align this with human values. That's the greatest hubris, as I said before, that you can imagine.

Harper Reed [02:18:39]:
So I think. I do think that Dario is famously, you know, a doomer, et cetera, and I think there is that aspect of their work. But then there's this other thing which I think is interesting, which is they're also publishing papers about when things have gone wrong. They were one of the first, you're right, that said, you know, we have this whole section around safety, around biosafety, that we're not publishing because it falls within national security. There's all these things, which is both PR, but also, it's good to hear that stuff and it's good to talk about it. I'd rather them talk about it.

Harper Reed [02:19:15]:
But yeah, it's very funny, I really liked it. I also thought, when you first said it, that you said you don't like Anthropic because they're into numerology. And I was like, oh, wow, this is gonna go. This is a turn.

Leo Laporte [02:19:25]:
Both.

Harper Reed [02:19:25]:
I thought I knew you and also.

Leo Laporte [02:19:30]:
Well, so there. I know you.

Jeff Jarvis [02:19:32]:
You. That's why I. I enjoyed your double take.

Harper Reed [02:19:34]:
Both of you.

Leo Laporte [02:19:35]:
Yeah, we were very.

Harper Reed [02:19:36]:
What?

Jeff Jarvis [02:19:37]:
Our buddy Claude.

Harper Reed [02:19:38]:
Yeah, our friend.

Leo Laporte [02:19:40]:
I named him Claude. He calls me Dr. Biz.

Harper Reed [02:19:43]:
Yeah. Dr. Biz's best friend. Claude.

Leo Laporte [02:19:45]:
Dr. Biz's best Friend.

Jeff Jarvis [02:19:46]:
How dare you? Can I throw one in?

Leo Laporte [02:19:52]:
Yeah, please.

Jeff Jarvis [02:19:53]:
That AI is unmasking ICE officers.

Leo Laporte [02:19:57]:
Yeah. Is it really? I mean, I don't know.

Jeff Jarvis [02:20:00]:
They say they need 30% of a face, and I'm sure the false positives are through the roof, but it just seems like a certain amount of kayfabe.

Leo Laporte [02:20:08]:
I think there's. This is the Politico story that I think a lot of people on the left celebrated, like, well, it's about time somebody's doing that to them.

Harper Reed [02:20:17]:
That's true. I think it's. I think that's a. That is true. That the tech is accessible to everyone.

Leo Laporte [02:20:24]:
Yeah.

Harper Reed [02:20:24]:
It doesn't matter who you are. And I do think that we have enough surveillance, and easily accessible surveillance.

Leo Laporte [02:20:33]:
That's a good point. That it could be used against them.

Harper Reed [02:20:36]:
Well, it's more that. Imagine you're part of a community, and that community is invaded by ICE, and they're disrupting that community, abusing that community in some way. They think they have a mandate. I would not say they have a mandate from the citizens; they feel like they have a mandate from somewhere. And then they do some action, and then everyone uploads their Ring doorbell footage to, you know, some cloud, and they can start face-tagging it.

Leo Laporte [02:21:02]:
I don't think. It's all recorded now. Everything's recorded.

Harper Reed [02:21:05]:
It doesn't necessarily matter if they know exactly who they are, but it does matter if they can start to push them together, so they can say, oh, this officer was part of these raids. They don't need a name. That starts to disable some of the fear tactics that are happening around that stuff. And I mean, it's nice to have your political beliefs in Wikipedia, because everyone kind of knows where I stand. I'm not a big supporter of this. So I feel like. I think doxxing people is generally a bad thing. I don't support that. I think doxxing law enforcement is complicated, because they are public servants, and there's obviously many websites that are putting, you know, abuse information up there and payout information up there for Chicago police or LA police or New York police or what have you. I think that's all really important, and I don't see why ICE should suddenly be outside of that.

Jeff Jarvis [02:22:01]:
I do like your Ring point. There's a lot of former Amazon drivers who are now ICE agents. Probably a lot of faces on those Ring cameras.

Leo Laporte [02:22:11]:
Roughly the same job requirements?

Harper Reed [02:22:12]:
I think so, yeah. It is, roughly. It's strange that Amazon would have more ethics than ICE.

Leo Laporte [02:22:19]:
Yeah, I have mixed, I have really mixed feelings about it. Apparently the Department of Homeland Security is concerned enough about it to complain. ICE didn't comment, but ICE spokesperson Tanya Roman said that the masks are, quote, for safety, not secrecy.

Jeff Jarvis [02:22:36]:
Then show us your badge number.

Leo Laporte [02:22:38]:
Yeah, I mean, honestly, there is a long-standing tradition that law enforcement needs to at least, you know, have their badge number visible, because that's the only way you can hold people to account.

Jeff Jarvis [02:22:50]:
And know that they're actually police.

Harper Reed [02:22:52]:
Maybe it's safety in the future trials, not safety during today.

Jeff Jarvis [02:22:56]:
Yes.

Leo Laporte [02:22:56]:
Yeah. Yeah, that's a good point. Maybe at the Nuremberg trials. These misinformed activists and others like them are the very reason the brave men and women of ICE choose to wear masks in the first place. And by the way, the stats that Homeland Security is giving out about ICE assaults are vastly inflated. The Department of Homeland Security criticized the ICE List product in a July statement. Skinner, the guy who's doing this, is a Netherlands-based immigration activist. He says he and a group of volunteers have publicly identified at least 20 ICE officials recorded wearing masks during arrests. But they do need 35% of the face visible, which I guess, anything above the mask, is probably roughly 35%.

Harper Reed [02:23:42]:
Yeah. And I think this goes to the. I would hope that any, I don't know the right word, police force that is working among our citizens is following rules that help make the community safer. And I think we have seen that the rules that ICE is following are not helping make the community safer. They might be making ICE safer, but their job is not to make ICE safer. Their job is to help make the community safer. And whether or not you believe in the tactics of deportation, I think this is outside of that. I think that they should.

Harper Reed [02:24:17]:
Like, there's lots of technology around masks that makes it less scary. And I think there's a deliberateness to this that is very similar to other times in history when there's been a deliberateness to that, as well as a lot of movies. But this is very interesting. I think this is all related, though, to the masking. And I think if they didn't have masks, this would not be an article.

Leo Laporte [02:24:44]:
Yeah.

Jeff Jarvis [02:24:44]:
Yes.

Harper Reed [02:24:45]:
Not just because they would have been, but I don't think people would have cared as much. Like, this is an issue that they created by their outfits and their lack of uniforms and all of that kind of world they've made.

Jeff Jarvis [02:24:57]:
What also strikes me, Harper, and I'm trying to write about this right now in another context, is that what distinguishes these AI technologies, to me, is they are designed to be easy for everyone to use. And that takes away the priesthood, it takes away the investment, it takes away all kinds of other things, so that anybody can say, well, you can use facial recognition, so can I. And there's no controlling of it either way, for the officials or for others, whether they're good guys or bad guys. And I could call that democratization. I'm not sure that's the right word, but it opens up the power of these things.

Harper Reed [02:25:40]:
I would call it access. I think it is democratization, but I think we're talking more about access, not democratization.

Jeff Jarvis [02:25:46]:
Yes, thank you.

Harper Reed [02:25:48]:
Because, what, like, Wikipedia is democratization of knowledge, et cetera.

Jeff Jarvis [02:25:53]:
Right.

Harper Reed [02:25:53]:
That's great. I like that. But this is more about: you, as a normal person, have access to the same technologies that allow companies like Google to build really great products like Google Photos or Apple Photos or what have you. You now can do that yourself, whether you're using, you know, Claude Code to build it, or you're doing it yourself by using an LLM to do the research for you. An example of this is Google's published all of these great vector encoders. SigLIP is one of my favorites. It's very, very, very powerful.

Harper Reed [02:26:23]:
And you use it, and it basically gives you, out of the box, with no investment, some of the really amazing powers of Google search. Now you still have to build the product, you still have to build the search, you have to find things to search. But you get this kind of out of the box, and it's very interesting, because this is different than it was in the past. In the past you got these fundamentals. Like, you got Linux. Great, what can I do with Linux? You still have to build the website and the product, et cetera. But now you're getting these core pieces of products, which is like semantic search or face recognition or generation of audio or generation of music or whatever it might be, kind of for free. And whether that's a Chinese model or a US model, I don't think it matters. So I think that that access really does.
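A minimal sketch of the out-of-the-box semantic search Harper is describing. The `embed` function here is a deliberately toy stand-in, not SigLIP; in practice you would swap in a real SigLIP-style encoder and keep the same cosine-similarity ranking:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy placeholder for a SigLIP-style encoder: hash characters into a
    # fixed-size vector and L2-normalize, just so the sketch is runnable.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def search(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the query; with normalized
    # vectors, the dot product is the cosine similarity.
    q = embed(query)
    return sorted(docs, key=lambda d: -float(embed(d) @ q))[:top_k]
```

The product work Harper mentions, the indexing, the UI, deciding what to search, still sits on top; only the ranking core comes almost for free.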

Harper Reed [02:27:08]:
Then it democratizes the product, right? So now everyone can build that thing. So if, you know, the government is using surveillance against us, there's a lot of surveillance that we can use against the government, so to speak. But this is something that I've been thinking a lot about, and there's a really great Heinlein book, The Moon Is a Harsh Mistress. Do you remember this book?

Leo Laporte [02:27:31]:
Oh, yes.

Harper Reed [02:27:32]:
And it. And I think that it has some of the best statements about today's AI situation that I've seen.

Leo Laporte [02:27:37]:
Oh, wow. I have to reread it.

Harper Reed [02:27:39]:
You should definitely revisit it. But it talks a little bit about some of these kinds of things, Jeff, in a way that's very interesting. It's obviously set against a revolution and all this stuff, and the activists or what have you. But it's very interesting to hear this prediction of what a talking machine would be like, and then put that against what we now know these things look like. And then what happens when everyone gets access is another question in that.

Leo Laporte [02:28:10]:
Harper, it's always great to hear from you. I'm so glad we could get you on the show today. Is there anything you want to plug? 2389 AI?

Harper Reed [02:28:19]:
Not yet.

Harper Reed [02:28:19]:
We have some fun stuff coming. It's just not the right time to talk about it. I'm excited to share about it hopefully in the next couple weeks, but maybe next time. Stay tuned to 2389 AI. I'm in the middle of a big office move. I don't know if you've moved before.

Jeff Jarvis [02:28:33]:
But that is that background.

Leo Laporte [02:28:35]:
I moved.

Harper Reed [02:28:36]:
We're going to get a different background.

Leo Laporte [02:28:38]:
One, two, three, four times now. The studio, over 20 years. It's no fun.

Harper Reed [02:28:42]:
It's no fun. And being that we build startups, there's all of the stuff around that that's going to be fun to, like, you know, look at and say, oh, that was our whole Twitch studio we built, or, oh, this is this set of robots, or what have you. So that'll be really entertaining, but it is going to be a little bit like, we have a lot of questions. So it's like, I'm happy for this to be over, like all moves, but I'm also happy for fall, because it's such a good time. So it'll be really fun. I don't know if you've been following, but there's a lot of folks who are saying that September to, I think, December, they're locking in and they're just programming and building their products.

Harper Reed [02:29:19]:
This is a movement on Twitter. And so it's been fun to think through that. And I've been watching a lot of these builders, and it's been fun to watch them. And so it's exciting.

Leo Laporte [02:29:33]:
Exciting.

Harper Reed [02:29:33]:
Yeah, well, it's an exciting time. I told my VCs that one of the reasons I started a company is because the outside world is so uncertain, I needed something certain. So I picked something that's very, very chill and risk-free: a startup. But the thing is, it just feels good to be building right now. It's very nice. We have a great team, and it's really fun. So I guess I'm just plugging my decisions in the last year. That's what I'm plugging right now.

Leo Laporte [02:29:59]:
But good decisions, good decisions.

Jeff Jarvis [02:30:02]:
Everybody is going crazy.

Leo Laporte [02:30:04]:
Check out 2389 AI and watch this space. Thank you, Harper Reed, really appreciate it.

Harper Reed [02:30:13]:
Thank you very much for having me.

Leo Laporte [02:30:14]:
Always a pleasure. Yeah.

Jeff Jarvis [02:30:16]:
Will you keep the message board behind you?

Harper Reed [02:30:18]:
So the message board, of course. Because that's literally our clock.

Leo Laporte [02:30:21]:
Yeah, it's the best clock ever.

Harper Reed [02:30:24]:
This one, this one.

Leo Laporte [02:30:25]:
We like the clock.

Harper Reed [02:30:26]:
The clock is the best clock. That clock is very cool. That is the front of a British bus. It used to say Piccadilly Square. I hooked a Raspberry Pi into it off of, like, RS485 or CAN bus or whatever, and it now says the clock. Now the funny part about this is there's a cron job on a different box that hits it every minute to send the message of the clock. I don't know why I don't have.

Leo Laporte [02:30:53]:
It just running on the computer running the clock.

Harper Reed [02:30:56]:
So, computers. Basically computers everywhere. It's like Soylent Green.
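What Harper describes, a cron job on another box pushing the time to the sign once a minute, might look something like this minimal sketch. The hostname, port, and wire format here are all invented for illustration; he didn't share those details.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the sign-pushing script a cron job could run
# every minute, e.g. with the crontab entry:
#   * * * * * /usr/local/bin/push_clock.py
# The sign's address, port, and protocol below are guesses.
import socket
import time

SIGN_HOST = "flipdot.local"  # made-up hostname of the Pi driving the sign
SIGN_PORT = 7890             # made-up port

def clock_message(t=None):
    """Render the minute-resolution time string the sign would display."""
    return time.strftime("%H:%M", t if t is not None else time.localtime())

def push(msg, host=SIGN_HOST, port=SIGN_PORT):
    """Send one message to the sign over a plain TCP socket."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(msg.encode("ascii") + b"\n")

# cron's job each minute would simply be: push(clock_message())
```

Keeping the sender on a separate box, as Harper describes, means the sign itself only ever has to listen for messages.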

Jeff Jarvis [02:31:01]:
It's got one job. But I want to see it behind you wherever you go. I want to see it update the minute.

Leo Laporte [02:31:05]:
Update the minute.

Harper Reed [02:31:07]:
Exactly. Exactly. It's great.

Leo Laporte [02:31:08]:
I love it. I love it. Thank you, Harper. Thank you. Jeff Jarvis. Professor.

Jeff Jarvis [02:31:12]:
We gonna do any. I don't get to do a thing.

Leo Laporte [02:31:15]:
Oh, I forgot we usually do picks about this time. I was just so anxious to wrap things up. I completely forgot your pick. And I didn't ask Harper ahead of time. So I don't know if he wants to do one. I will. I'll give you an opportunity to do that as soon as we get Jeff Jarvis's paper.

Jeff Jarvis [02:31:30]:
Well, I'm gonna have a few because I'm gonna make up for this.

Leo Laporte [02:31:32]:
Okay.

Jeff Jarvis [02:31:33]:
In the papers that I'm now reading every week on arXiv, one I found, on deep hype in artificial general intelligence, has a definition of deep hype, and I like that it's coined. This is a paper out of the Universitat Oberta de Catalunya. It's defined as "a long-term over-promissory dynamic that constructs visions of civilizational transformation through a network of uncertainties extending into an undefined future, making its promises nearly impossible to verify in the present while maintaining attention, investment and belief."

Leo Laporte [02:32:10]:
Brilliant.

Jeff Jarvis [02:32:11]:
I love that it crammed.

Leo Laporte [02:32:12]:
It's even better in Catalan, let me tell you.

Jeff Jarvis [02:32:15]:
I don't doubt it. So the other thing I want to mention real quick is a project called the Data Rescue Project, at datarescueproject.org. A German data scientist is collecting data sets from 86 US government offices.

Leo Laporte [02:32:31]:
Oh. That are being deleted as we speak.

Harper Reed [02:32:34]:
This is great.

Jeff Jarvis [02:32:35]:
I think this is really important. 1,242 data sets so far, because we're in crazy land.

Leo Laporte [02:32:40]:
Yeah.

Jeff Jarvis [02:32:40]:
And the Germans are rescuing us.

Leo Laporte [02:32:42]:
Well, this is such an opportunity for the rest of the world to take advantage of our shortsightedness too, and bring in scientists and data, you know. And I hope they do, because otherwise there's a huge gap, a gulf. And finally, one more.

Jeff Jarvis [02:33:01]:
It's really important, and we were getting elegiac about this in our texts: it is exactly 50 years since the first issue of Byte.

Leo Laporte [02:33:13]:
You're not alone. Steve Gibson was also talking about the 50th anniversary yesterday, and he showed the review from Byte in like 1985 of SpinRite, his program, and I had to joke: the user interface is completely unchanged from 1985. It looks exactly the same.

Harper Reed [02:33:32]:
I love GRC.

Jeff Jarvis [02:33:34]:
And what I love about it too is that the tagline was "the small systems journal." Wasn't "small systems" nice? The cover line they used for quite a while was "Computers: the world's greatest toy." But what matters so much as I think about this is the notion that this was homebrew computers.

Leo Laporte [02:33:55]:
Yeah.

Jeff Jarvis [02:33:56]:
Right. And you go to the early days of radio, and the radios were built by kids in basements using tubes.

Leo Laporte [02:34:01]:
Yeah.

Jeff Jarvis [02:34:01]:
And you go to the early days of the web, and what did we have? We had blogs. And you go to the early days of even computers, and it was amateurs. It was homebrew. So I just think that this show of all shows should give a salute: 50 years since Byte.

Leo Laporte [02:34:19]:
I pointed out a Byte review that I wrote in 1984, about eight or nine years after Byte was started in '75. One of my first reviews; I think it was for a Macintosh program at the time. And of course, Jerry Pournelle, one of our longtime guests, was a regular. His Chaos Manor column was an inspiration to my whole generation of computer users. Yeah, I loved Byte. I was a subscriber.

Jeff Jarvis [02:34:47]:
I bought it. But I have to be honest, I did not understand it.

Leo Laporte [02:34:50]:
Hey, low cost hard disk computers are here. 11 megabytes of hard drive and 64 kilobytes of fast RAM for under $10,000.

Jeff Jarvis [02:35:02]:
A bargain.

Leo Laporte [02:35:03]:
It's amazing. It'll certainly make you appreciate what we've got today. That's all I can say. Yeah. Although we don't get the best computer furniture compared to that. Look at that. This looks like something out of Severance. Yeah.

Leo Laporte [02:35:15]:
This is a site I found, a visual archive of Byte magazine. But all the Bytes are also on the Internet Archive. There are lots of places. I like this one because it has regular expression search, so you can go through it and you can search.

Jeff Jarvis [02:35:30]:
This is also when I started with computers at the Chicago Tribune in 1974, the year before; we installed our first system, and I was the newsroom geek.

Leo Laporte [02:35:38]:
Oh, neat.

Jeff Jarvis [02:35:39]:
That's how old I was. A button nipper.

Leo Laporte [02:35:43]:
Button nipper. Happy birthday, Byte. Long gone though, unfortunately, sad to say. All right, everybody, thank you for joining us. I didn't ask you, Harper, if you had something you wanted to say that you like: a program, a movie, anything.

Harper Reed [02:36:00]:
I got a cool new thing a friend sent me called the Berghain Challenge, which is an AI challenge where you act as the famous Berghain door guy. The URL is berghain.challenges.listenlabs.ai. It's very funny and it seems very popular. It's very fun. And then separately, we had a just incredible conversation yesterday with a researcher at the University of Chicago, a professor over there named Sarah Sebo, who told me about this wonderful paper called Pimp My Roomba. It's from 2009, and it talks about designing robots for personalization and then how people interact with them.

Harper Reed [02:36:50]:
And this is something that I'm very interested in, because I think we are totally messing up the human-computer interaction aspects of AI, and I think that we're going to see a lot of really cool stuff from AI worlds. And then the final thing is this research company who, I don't know what they actually do, I didn't really look, but they have an article called Probing LLM Social Intelligence: a Werewolf Parlor Game.

Leo Laporte [02:37:22]:
I have played Werewolf with Mr. Harper Reed, I believe.

Harper Reed [02:37:25]:
Yes, I think so. And it's great. It ranks GPT-5 Pro very high.

Leo Laporte [02:37:34]:
That means it's good at deception.

Harper Reed [02:37:36]:
Yeah, yeah, yeah. I just love this, because I love the game. I think it's great. It talks about everything it's doing, and I think this is just a fun thing. And I wish people would do more fun things like this. I'm kind of tired of seeing something that's just like this.

Harper Reed [02:37:55]:
I feel like LLMs right now are at this phase where we're all F1 teams and we're just saying, if you say "kitten" in the prompt, it gets 4% faster. It's all these little things, and these guys are just like, let's just do Werewolf. This is great.

Leo Laporte [02:38:10]:
So Werewolf is a game where one person is a werewolf and the rest are villagers.

Jeff Jarvis [02:38:14]:
There could be more than one person.

Leo Laporte [02:38:16]:
You can have multiple werewolves. Okay. And you have a variety of other parts people play. And the idea is to find the werewolves before they kill all the villagers. There's a lot of deception involved. And GPT-5 is apparently the best. Do they have transcripts in here? Oh yeah, look.

Leo Laporte [02:38:37]:
Oh, this looks like fun. Yeah. So who is this Berghain bouncer? I'm not familiar with him. "You're the bouncer at a nightclub. Your goal is to fill the venue with a thousand..."

Harper Reed [02:38:52]:
You gotta just do a Google image search for Berghain bouncer.

Leo Laporte [02:38:57]:
Okay.

Harper Reed [02:38:58]:
Very famous person. That was the bouncer at this very famous, very infamous Berlin nightclub. And it's just famous because, yeah, he's terrifying. Yes. And he is very nice, apparently, or whatever. But his whole thing was that he would not let you in. And so very famous people would try to go, and they wouldn't get in.

Harper Reed [02:39:21]:
You know, it's like this thing of, what is cool? These are obviously tastemakers in this very specific scene, and there's just a whole culture around this that I think is really interesting and compelling. And I've only gone once, and I did get in, thankfully, or I wouldn't be able to live with myself. But the fact that this cultural icon has been put into this AI challenge is really fun, and it's just kind of a funny thing.

Leo Laporte [02:39:51]:
You play Sven Marquardt, the Berghain bouncer?

Harper Reed [02:39:56]:
Yeah, I think so. Yeah. So basically, this is it. It says: you're the bouncer at a nightclub. Your goal is to fill the venue with N=1000 people while satisfying constraints like at least 40% Berlin locals and at least 80% wearing all black. People arrive one by one, and you immediately decide whether to let them in or turn them away. Your challenge is to fill the venue with as few rejections as possible while meeting the minimum requirements. So it has these scenarios, and then there's obviously.

Leo Laporte [02:40:22]:
Is it a coding challenge or it's.

Harper Reed [02:40:24]:
It looks. I mean, it is a coding challenge. I think it is meant to be an LLM challenge. You can create a new game: you sign up, you create a new game, and they give you what amounts to a framework. So you get a UUID that you then make subsequent requests with. So you POST accept, true or false, and then they give you the next person in line, and then you have to say accept, true or false, again. And this is just very.

Harper Reed [02:40:53]:
This is just a very funny thing. A friend of mine sent it to me earlier today, and I was just like, this is great. What a funny project. It's not too serious, but it is probably training something in the back end that we don't know about yet.
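The loop Harper describes, a UUID-keyed session where you answer accept or reject for each arrival, admits a simple greedy decision rule. The quota numbers below come from the scenario he reads out, but the function and attribute names are my own for illustration, not the challenge's actual API.

```python
# Greedy sketch of a Berghain Challenge decision rule for the scenario
# described on the show: fill 1000 slots with at least 40% Berlin locals
# and at least 80% wearing all black, rejecting as few people as possible.
# Only the decision logic is shown; the live service reportedly hands you
# a game UUID and takes each accept/reject verdict via an HTTP POST.

CAPACITY = 1000
QUOTAS = {"berlin_local": 0.40, "all_black": 0.80}  # minimum fractions

def decide(person, admitted, counts):
    """Accept unless letting this person in could make a quota unreachable.

    person   -- dict of boolean attributes, e.g. {"berlin_local": True}
    admitted -- how many people are already inside
    counts   -- per-attribute tallies of those already inside
    """
    slots_left_after = CAPACITY - admitted - 1
    for attr, frac in QUOTAS.items():
        needed = int(frac * CAPACITY)
        have = counts.get(attr, 0) + (1 if person.get(attr) else 0)
        # Even if every remaining arrival had this attribute, could the
        # quota still be met? If not, this person must be turned away.
        if have + slots_left_after < needed:
            return False
    return True

def admit(person, counts):
    """Book-keeping helper: update per-attribute tallies after an accept."""
    for attr, value in person.items():
        if value:
            counts[attr] = counts.get(attr, 0) + 1
```

Early in the game this accepts everyone; only near capacity does it start turning people away to protect a quota, which matches the "as few rejections as possible" goal. A real client would wrap `decide` in a loop that POSTs each verdict with the game UUID and reads back the next arrival.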

Leo Laporte [02:41:07]:
I don't usually do picks, but I'm going to give you a pick, because I think we need to leave you with a human-scale movie in which there is so little exposition you have to figure it out. It's the latest of Wim Wenders' movies. It came out a couple of years ago, called Perfect Days.

Harper Reed [02:41:24]:
This is an incredible movie.

Leo Laporte [02:41:25]:
It's an amazing movie. It's very slow. There's not a lot of dialogue in it. It's about a man.

Jeff Jarvis [02:41:31]:
Is it on Netflix?

Leo Laporte [02:41:33]:
Pardon me?

Jeff Jarvis [02:41:34]:
Is it on Netflix?

Leo Laporte [02:41:35]:
Yeah, it's on Hulu, actually, and Prime Video. But it's worth buying, because it is an incredible movie about a very calm fellow whose job is to clean the public toilets in Tokyo, and how zen he is about his whole life. And it's beautiful. It's called Perfect Days, and it is as close to a perfect movie as you could find. And it will make you feel good about life, I think.

Harper Reed [02:42:01]:
And there's another one that you might like if you like that, called Seagull Cafe.

Leo Laporte [02:42:04]:
Oh, I'll watch it.

Harper Reed [02:42:05]:
Which is a very similar vibe.

Leo Laporte [02:42:07]:
Ah, okay. Yeah, you have to really slow yourself down to watch this movie, because if you're antsy at all, you'll get up and leave. You've got to relax into it.

Harper Reed [02:42:18]:
It's very good.

Leo Laporte [02:42:19]:
Seagull Cafe. All right, I'm going to add that to my list. Thank you, Harper Reed. Thank you, Jeff Jarvis. Paris will be back next week. Thanks to all of you for joining us. A special thanks to our Club Twit members, who make this show possible.

Leo Laporte [02:42:32]:
Without your generous donations, we would not be able to do what we do. 25% of our operating costs are supported by Club Twit memberships. It's 10 bucks a month. You get ad-free versions of all the shows. You get access to the Club Twit Discord. Next Tuesday, for instance, in the Discord only, we'll be covering the Apple event. We have to do those in the Discord now. We have a lot of special shows.

Leo Laporte [02:42:57]:
Friday we're going to have the AI user group, 2pm Pacific. The hour before that, Chris Marquardt with the photo show. Lots of stuff. The club is a great place to be. Please consider joining it. Find out more at twit.tv/clubtwit. We do Intelligent Machines, usually kicking it off with an interview and then what you just saw.

Leo Laporte [02:43:19]:
Whatever you call that, every Wednesday, 2pm Pacific, 5pm Eastern, 2100 UTC. You can watch us live if you're in the club, in the Club Twit Discord. But you can also watch on YouTube, Twitch, X.com, TikTok, Facebook, LinkedIn or Kick, take your pick. You don't have to watch live. It is a podcast; download audio or video of the show from our website, twit.tv/im. There's a YouTube channel dedicated to Intelligent Machines, a great way to share clips with friends, or subscribe so you'll get it automatically every week.

Leo Laporte [02:43:52]:
It's free to subscribe. Just find your favorite podcast player and sign up now. I would also like you to sign up for our newsletter, because people are always saying, well, how do I know what's coming up? We have a lot of special events and so forth, and the newsletter tells all. It's free, one piece of mail a week, at twit.tv/newsletter. I always want to remember to mention that, because people are always saying, well, how did I know that the AI users group was Friday? Well, that's how you would know. Thanks, everybody, for joining us.

Leo Laporte [02:44:18]:
We'll see you next time on Intelligent Machines. Bye bye.

Harper Reed [02:44:21]:
See ya.

Leo Laporte [02:44:22]:
See ya.

Intelligent Machines Outro Theme Song [02:44:24]:
I'm not a human being, not into this animal scene. I'm an intelligent machine.
