Transcripts

Intelligent Machines 862 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Leo Laporte [00:00:00]:
It's time for Intelligent Machines. Paris has the week off, but Father Robert Ballecer joins Jeff Jarvis, and just in time for a great guest. Rumman Chowdhury is here. She's the founder of Humane Intelligence. She says we need to take back agency when it comes to AI. You may remember her name. She led the ethics team at Twitter until Elon Musk fired her and the entire team. She's worked for the FTC, the UN, the US Senate.

Leo Laporte [00:00:24]:
She is a mover and shaker. We'll talk to Rumman Chowdhury next on Intelligent Machines. This episode's brought to you by OutSystems, a leading AI development platform for the enterprise. Organizations all over the world are creating custom apps and AI agents on the OutSystems platform, and with good reason. Build, run, and govern apps and agents on one unified platform. Innovate at the speed of AI without compromising quality or control. OutSystems is trusted by thousands of enterprises worldwide for mission-critical apps. Teams of any size and technical depth can use OutSystems to build, deploy, and manage AI apps and agents quickly and effectively, without compromising reliability and security.

Leo Laporte [00:01:08]:
With OutSystems, you can accelerate ideas from concept to completion. It's the leading AI development platform that is unified, agile, and enterprise-proven, allowing you to build your agentic future with AI solutions deeply integrated into your architecture. OutSystems. To build your agentic future. Learn more at outsystems.com/twit. That's outsystems.com/twit. Podcasts you love from people you trust.

Fr. Robert Ballecer [00:01:40]:
This is TWiT.

Leo Laporte [00:01:45]:
This is Intelligent Machines with Jeff Jarvis and Paris Martineau, episode 862, recorded Wednesday, March 18th, 2026: Menage à Claude. It's time for Intelligent Machines, the show where we cover the latest AI news, robotics, and all those smart machines all around us these days. They're getting smarter and smarter. Paris has the week off, but I'm very happy to say we've got Father Roberto. I was gonna call you Roberto for a second.

Rumman Chowdhury [00:02:16]:
Robot Roberto.

Leo Laporte [00:02:18]:
Father Roberto is here, of course, visiting us from the Vatican. It's not a joke, folks, that's him. Hi, Robert, great day. It's always wonderful to see you.

Fr. Robert Ballecer [00:02:28]:
It's always a great day when I get to see you and the TWiT army. I miss y'all.

Fr. Robert Ballecer [00:02:33]:
Yeah.

Leo Laporte [00:02:34]:
Robert used to have a little place in the basement of the old TWiT Studios.

Fr. Robert Ballecer [00:02:39]:
That's not a joke, that's actually true.

Leo Laporte [00:02:41]:
It's not a joke, it's true. Also, of course, here, the Professor Emeritus of Journalistic Innovation at the Craig Newmark Graduate School of Journalism at the City University of New York, Jeff Jarvis, author of The Gutenberg Parenthesis and Magazine. His new one, Hot Type, is now delayed, but you can still preorder it.

Jeff Jarvis [00:03:04]:
You can, you can. Yes.

Leo Laporte [00:03:05]:
Gives you no advantage. It's now July, you said?

Jeff Jarvis [00:03:09]:
It's August, because they were going to move it, for production reasons, from June to July. And I said, no, that's death for books.

Leo Laporte [00:03:14]:
No.

Jeff Jarvis [00:03:14]:
So they're moving it to the end of August. So it's basically a fall book now.

Leo Laporte [00:03:17]:
A fall book.

Jeff Jarvis [00:03:19]:
Yes.

Leo Laporte [00:03:19]:
Now, please, you brought us, I think, one of our most interesting guests yet. So would you introduce Rumman Chowdhury?

Jeff Jarvis [00:03:26]:
Well, I'm going to have the egotistical joy first of announcing something else that will lead to Rumman. Oh, tell me about it. So, big announcement. I don't know, I meant to have Benito get some trumpets or drums or something. So I am proud and amazed to announce that Bloomsbury Academic is launching a new book series called Intelligence, AI, and Humanity, which is not a technical book series, but it is a book series enabling writers from many disciplines to reflect on AI and how AI reflects on humanity.

Leo Laporte [00:04:00]:
And believe it or not— Say the title again.

Jeff Jarvis [00:04:03]:
Intelligence, AI, and Humanity.

Leo Laporte [00:04:05]:
Wow.

Jeff Jarvis [00:04:06]:
And I will be editing the book series. Oh man, I can't believe it, but I will be editing the book series. So I'm very proud to say that we have signed up our first 3 authors. I'll mention the other 2 first. One is Matthew Kirschenbaum, who's been on this show, who's writing a book about the textpocalypse. Another is Charlton McIlwain, who is at NYU, who's writing a book, a very hopeful book, surprisingly, about race and AI and the opportunity to undo the oppression of technology on race. And then we have with us— I'm very happy, very proud to say— the author that I was dying to get to be the first author in this series, Dr. Rumman Chowdhury, who's writing a book asking the question, what is intelligence? So—

Leo Laporte [00:04:53]:
Oh, that's a great question.

Jeff Jarvis [00:04:55]:
Isn't it perfect?

Leo Laporte [00:04:56]:
That is the fundamental question, if you ask me.

Jeff Jarvis [00:05:00]:
So Rumman has a PhD in political science. She is the founder of Humane Intelligence, which she'll explain to us, but it is an effort to hold AI companies accountable.

Leo Laporte [00:05:15]:
I know your name from Twitter, where you were responsible for ethics.

Rumman Chowdhury [00:05:21]:
I was. I was the engineering director of machine learning ethics, transparency, and accountability.

Leo Laporte [00:05:27]:
This is before Elon.

Rumman Chowdhury [00:05:29]:
This is, oh yes. I always say I worked at Twitter and not X. Like shocker, I know you'll be shocked to hear that my perspectives and his don't align. I know.

Rumman Chowdhury [00:05:38]:
Who knew?

Leo Laporte [00:05:39]:
She's worked at the UN, the FTC, and the US Senate. Geeks might remember her from DEF CON, where in 2023 she co-organized the largest generative AI red-teaming event in history, putting 8 major AI models in the hands of 4,000 people to probe for vulnerabilities.

Fr. Robert Ballecer [00:06:01]:
I was one of them.

Leo Laporte [00:06:02]:
Yeah.

Rumman Chowdhury [00:06:03]:
Yeah.

Leo Laporte [00:06:03]:
Father Robert was there. Rumman, we're so thrilled to have you on Intelligent Machines.

Rumman Chowdhury [00:06:10]:
What is intelligence? Oh, okay. So I've already written chapter 1 of the book.

Leo Laporte [00:06:16]:
So let me preface this with my twisted point of view, and you can say I'm crazy. One of the things to me that's been most intriguing about what's been happening in AI: you know, we've been trying for decades to duplicate how humans think with computing machines. And a lot of people say, well, you could never do it with a von Neumann architecture. That's just not how humans work; humans are massively parallel, blah, blah, blah. But what's, I think, to me very interesting is that once we started using transformers and started building these large language models with transformers, they have become— they seem to have become more and more, dare I say, intelligent. They seem more like humans, not, you know, a poor imitation. Nevertheless, it has made me think lately a lot about, well, what are we then? I mean, literally all of us are just the sum of our, dare I say, training over our lifetimes. Perhaps we're born like an LLM, maybe with some instinct that informs us to begin with. But then as we grow up, we learn language, and we learn all this through example, much like a machine does.

Leo Laporte [00:07:30]:
So I'm really thinking that one of the most interesting parts of AI is what it teaches us about our own consciousness.

Rumman Chowdhury [00:07:38]:
Well, absolutely. So I want to tease apart many, many points you make that actually I've already started exploring in the book. And I wasn't kidding when I said I've already written chapter 1. This is an aggressive, aggressive writing timeline because, Jeff, I think they want to launch the first book Q1 or Q2 of next year, which means I have to be done writing it by August. And we want your book, the first one out, the first book in the series, right? Um, so back to the fundamental question. So there is the what is intelligence question, and then there is the how do we measure intelligence question, and then there is the intelligence versus sentience question. Right. So cognition does not necessarily mean sentience or consciousness, because you said the word consciousness. Right.

Rumman Chowdhury [00:08:21]:
So one is every measurement of intelligence that we have today is fundamentally rooted in economic value. So the first part of the book really goes through intelligence as a social, economic and political construct. Right. So why do we care? So the basic question I ask is, What is it that is really like striking us all existentially? And it's not just that these machines are performing the way we perform. It is that our sense of self-worth and value is driven by this notion of intelligence. But if you go back to how intelligence has been measured, it was constructed in the first Industrial Revolution. It was constructed around— so this is Alfred Binet, who was asked by the French government to find a way to classify kids in classrooms to determine who would be a good factory worker, who might be a good manager, who would be organized, who wouldn't be. So it was always rooted around productivity.

Rumman Chowdhury [00:09:11]:
So today, when Sam Altman says artificial general intelligence is the automation of all tasks of economic value, and we're like, what? And it hits us hard in our core, it's because the fundamental basis of what we call intelligence has always been about workforce productivity. But is that what intelligence really is? And then we get into the social and political ramifications. Politically and socially, why do we care if we are intelligent or not intelligent? Well, one aspect of it is that rights are given and denied based on it, right? So the justification of why it was okay, quote unquote, to enslave Black people was in large part rooted in concepts or intentional misconceptions about intelligence, i.e., you can treat these people like animals because they are no smarter than animals. Women, why are women not allowed in higher education? Oh, because your little brains could not handle it. Your intelligence is not there. So we make these presumptions. We design these tests to prove the points we want to make. So to your point on, you know, AI is a mirror, I would even say our construct of intelligence is more about the fears of the economic ruling class and their attempts to categorize us and put us, quote-unquote, in our place than it is an objective measurement of anything.

Rumman Chowdhury [00:10:26]:
So the problem is, when this goes into computer science and we have the Dartmouth conference, these men, they're all computer scientists and mathematicians, sit down with actually a very simplistic view of intelligence. So they presume intelligence has been mapped; we know how to measure intelligence in people. That's their starting presumption. The second presumption they make, which is incorrect, is that, okay, well, we can break down this thing called intelligence into its aggregate parts, and you could just sum it back up and it'll be intelligence, and break it back down. But if you know, like, basic systems theory, there is no system in which you can just sum up the parts and then you get the system. The system itself has some residual impact. So there are, like, a lot of things.

Rumman Chowdhury [00:11:05]:
One last thing. So the other thing that interested me is in science, right? How have we explored measuring intelligence in non-humans? Because one assumption about computer intelligence is, for some reason, because we are very species-centric, we have just presumed that human intelligence is the thing to model, right? But then what if we look at other ways of looking at intelligence: animal intelligence, mycelial intelligence? There's a whole field called extraterrestrial intelligence. If we go to Mars and there's a moving slime, how do we know that slime is intelligent? And why, again, why does this matter? Well, because it can lead to ecological ramifications. It could lead to so many other things, right? So there are fields of study. And by the way, like, newsflash: in 0% of these fields do they base intelligence measurement on human capabilities. In fact, that is almost the first thing you are told not to do, because animals and mushrooms, etc., have different ways of perceiving the world that are actually better than ours in some ways. But what you don't do is give a monkey a set of physics questions and say, "Well, obviously we're smarter than you because you don't know what physics is." So again, you flip the script and say, "Well, then why have we decided that these machines need to be modeled after people?" It seems like a pretty self-fulfilling prophecy then, because these CEOs sat down and they were like, "Oh, what we need to do is model the human brain and automate all the economically valuable things this human brain can do." So what we feel is really not an attack on our intelligence. It's more visceral. They just want to get rid of us as intermediary economic bodies.

Rumman Chowdhury [00:12:41]:
I saw this TikTok where this woman said something like, companies seem irritated that they need to go through us to get to our wallets. And that is how AI feels.

Leo Laporte [00:12:51]:
Let's just get rid of humans and take the money directly.

Rumman Chowdhury [00:12:54]:
It would be so much easier.

Leo Laporte [00:12:57]:
But I also think that there is an existential dread that comes from the thought that maybe we're not special, that maybe what we have is just a kind of intelligence. When you say, you know, slime mold might be intelligent, that's threatening too, right? We want to think that we are somehow special.

Rumman Chowdhury [00:13:18]:
Well, absolutely. And to your point, it goes back to how we construct intelligence, right? So if it is constructed around economic productivity, and then we make an economic productivity machine, then we're like, wow, we're not that special. So then, you know, the last part is really— I've been playing with the idea of calling the book something like The New Intelligence or something like that. It's like, well, wait, let's go back and say: given that we have created a machine that can surpass us in the way we have defined intelligence, our current measurement of intelligence, right? Let's actually create a method of understanding intelligence that maybe is divorced from workforce productivity. Because there are, by the way, many models of intelligence. So Gardner's multiple intelligences, right? There's kinesthetic intelligence, like spatial intelligence. Dancers, for example, have built an intelligence where they understand proprioception, their body in space, in a way that, like, you and I could not, right? Because we are not trained in that intelligence. Empathy is a form of intelligence. Resilience is a form of intelligence, right? There's all sorts of things that are not measured in SAT tests that we therefore do not value, but maybe we should.

Jeff Jarvis [00:14:26]:
I'm eager to hear Padre's view on this. Yeah.

Fr. Robert Ballecer [00:14:28]:
I absolutely love this idea of linking our understanding of intelligence back to the Industrial Revolution, because yes, that was such an upheaval in society that it makes sense that that's when we were trying to quantify the definition of intelligence that we use today. In my tradition, there's a little bit different of an angle on it, and that is to separate this idea of knowledge and understanding from intelligence. Those two things are treated separately, because knowledge could be rote memory; it could be the knowledge to be able to do a task, the knowledge to be able to complete a process. However, intelligence requires agency, and agency is that intentional desire to act upon knowledge in order to affect the environment in which we live. And not just to affect the environment, but to take accountability for the intentional actions that we take. So for us, for my tradition, intelligence looks like knowledge, but it has that additional step of agency, which we still don't think that LLMs, that current AI, has, because it cannot act as an agent. It can only act as a source of knowledge. But, I mean, I am absolutely tickled.

Fr. Robert Ballecer [00:15:47]:
I love this idea of using the Industrial Revolution because you may know that Pope Leo is big on the document Rerum Novarum, which is what the Catholic Church released during the Industrial Revolution to introduce this idea of agency and bring this idea that there is something innate and special about humanity, which is what Leo is talking about.

Leo Laporte [00:16:11]:
So how do you— do you have a working definition of intelligence?

Fr. Robert Ballecer [00:16:16]:
Uh, for us, yeah, intelligence would be, uh, the ability to take a knowledgeable understanding of the world and act in an intentional way to influence the environment based on values, goals, and beliefs. So for us, that's, that's human agency. That's the step in intelligence that we don't think AI currently has.

Leo Laporte [00:16:39]:
Rumman, is this part of your book, defining intelligence?

Rumman Chowdhury [00:16:45]:
In a way. I frame it, because I'm a social scientist, more, like, sociotechnically. Like, what is it? It's not enough to just define it. Like, I'm not a philosopher. What I want to do is understand it in the context of the world, right? So what are the ways in which we have defined intelligence, maybe even just sort of judgment-agnostic, and what has that meant in how things have been executed? Because again, the fundamental question to me was always, like, why are we so scared of this thing? Why are we so scared of it? What is it forcing us to look at or question about ourselves? And what do we feel threatened about? And really, again, that's how I got to where I am. But Robert, I love what you're saying about this idea of intent and agency. And this is where we shift from intelligence to sentience or consciousness.

Rumman Chowdhury [00:17:30]:
And people conflate the two a lot, all the time. And again, if you talk to the average person on the street, ask them what they think artificial general intelligence is, they think of something like the Terminator, or like Her, you know, Scarlett Johansson's AI in Her. And those things had intent. They acted with desire. And there's nothing like that about these machines. And also, by the way, this narrative is being pushed by tech companies. It's very, very intentional.

Leo Laporte [00:17:58]:
Why?

Rumman Chowdhury [00:17:59]:
I coined a phrase back in, what, 2017 or 2018: moral outsourcing, where essentially companies anthropomorphize these models on purpose so that when something goes wrong and something is bad, they can say the AI did it, right? The AI did its thing. And you see them doing it today, right? And you see it starting with all of the tech layoffs, Jack Dorsey saying AI is taking jobs because AI is making it easier. Like, sir, you invested in a bunch of crypto that tanked and you overhired.

Leo Laporte [00:18:29]:
Like, not our fault.

Rumman Chowdhury [00:18:31]:
Like, you know, exactly. But now there's this very, very convenient intelligence-shaped thing that you can put the blame on when bad things happen.

Leo Laporte [00:18:42]:
Such a great phrase.

Jeff Jarvis [00:18:44]:
I'd like to hear you— go ahead, go ahead.

Fr. Robert Ballecer [00:18:46]:
If I can introduce one more uncomfortable truth about our way of thinking of intelligence: if you look at intelligence as that combination of agency and knowledge, there is this fear, and it's a very real fear, that there are humans who do not meet that definition of intelligence, who do not reach that level of agency. So that should also be on the board.

Rumman Chowdhury [00:19:13]:
So I love that. And I especially like it because one of the things I'm very, very focused on right now is the future of education, the future of work, right? And these are, like, institutional flaws that predate AI. AI did not make the educational systems fail our kids. AI did not make it difficult for a recent college graduate to translate their degree into a job. That existed before. How many of us, or maybe some people in this room, work in the field that they studied when they were younger? Most people don't. Most people studied something and they ended up somewhere totally different. And we've just sort of accepted that.

Rumman Chowdhury [00:19:50]:
Most people will say what I learned in college has nothing to do with what I did even at my first job, right? I certainly am not— and that's fine. There's nothing wrong with that. But then we need to reexamine our institutions of pedagogy and say, well, how have we been teaching and what have we been teaching? And I have very strong thoughts about decisions that have been made in the educational system. But fundamentally, the purpose of education— so just to get very specific, because again, AI in education is something I'm looking a lot at lately. The pedagogy of AI is very, very problematic, because we teach AI, in general, as a tool of productivity, not a tool of mastery. And we've done the same in education. And, quote-unquote, smart kids know how to game the system.

Rumman Chowdhury [00:20:31]:
They're good test takers. They know how to do well on the SATs. They know exactly what to say to the teacher and what they should write in their essays. Some of them happen to love learning; not all of them do. So we have taught education as an institution of productivity: produce X, Y, and Z, and then you'll get into Harvard or MIT or Stanford. And then we make, again, we make this tool that we're teaching as a tool of productivity. But there is research, by the way, which is excellent, into AI as a tool of mastery, but none of it's being taught to kids that way. So I guess my fundamental point is it actually has nothing to do with the technology specifically itself, but how we are framing our usage of it.

Rumman Chowdhury [00:21:11]:
And that's also what's driving a lot of the fear.

Leo Laporte [00:21:14]:
We're talking to Rumman— Rumman— sorry, we're talking to Rumman Chowdhury, the founder of Humane Intelligence. There's a nonprofit and there is a public benefit corporation. Tell us about Humane Intelligence. What's your goal here?

Rumman Chowdhury [00:21:30]:
Yeah, so the nonprofit was founded to build the independent community of algorithmic evaluators, which is very, very needed. So right now, essentially, tech companies write their own homework, grade their own tests, and pat themselves on the back about how smart they are. And when anybody listening to this podcast or sitting in this room tries to use AI, however impressive it may be, it's, like, a smarty-pants technology. But if you try to use it for something very fundamental and real, you'll see it falls apart very quickly. And there's all these memes, like, it can't spell strawberry, and all this stuff. But then there are the bigger issues, like there's a lot of embedded bias in it, like, you know, the CEOs of these companies, and especially thinking about Grok, have dictated how they want these models to answer certain questions. So there's biases baked into it. And also, the average person— if our lives are meant to be impacted by AI, we should have a right to say how this tool is being used.

Rumman Chowdhury [00:22:24]:
So the nonprofit started as an organization that would try to cultivate and get people excited about evaluating AI models. The for-profit is specifically looking at how to build the infrastructure to do this. So things like algorithmic transparency, technical methods of evaluation. One thing I want to say, to get a little bit in the weeds on it: machine learning and AI, like narrow AI, like pre-generative AI stuff, those are largely statistical models. And as a statistician by background, we know how to math those things. Like, we have over 100 years of, you know, mathy mathing to figure out things, right? With generative AI, you have probabilistic outcomes. And the way I describe it to people is 2 + 2 sometimes equals 3.9, sometimes equals 4.2, usually equals 4, but sometimes equals 98, right? So you don't have this consistent answer.

Rumman Chowdhury [00:23:13]:
It's a guess.

Leo Laporte [00:23:14]:
Yeah.

Rumman Chowdhury [00:23:15]:
Exactly. And then in trying to evaluate this, it is hard to make a test that is scientifically sound, something that's reproducible, something that's generalizable. And these are all things we need to know if this model is going to work or fall apart. And we don't have that yet. So the for-profit, which is structured as a public benefit corporation for many reasons, is dedicated to creating the infrastructure. So the nonprofit is creating the community, and then the for-profit is creating the environment that they can do these tests on.
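
[Editor's note: a minimal Python sketch of the measurement problem described above, assuming a hypothetical query_model() in place of any real model API. Because generative models sample their outputs, a sound test has to run many trials and report a distribution of answers rather than a single pass/fail.]

import random
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a generative model call; real models
    # sample tokens, so repeated calls with the same prompt can disagree.
    return random.choices(
        population=["4", "3.9", "4.2", "98"],
        weights=[0.90, 0.04, 0.04, 0.02],
    )[0]

def evaluate(prompt: str, expected: str, trials: int = 1000) -> float:
    # Ask the same question many times and score the answer distribution,
    # rather than trusting any single response.
    answers = Counter(query_model(prompt) for _ in range(trials))
    print("answer distribution:", dict(answers))
    return answers[expected] / trials

accuracy = evaluate("What is 2 + 2?", expected="4")
print(f"accuracy over 1000 trials: {accuracy:.1%}")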

Leo Laporte [00:23:44]:
I'm looking at the Humane Intelligence nonprofit webpage, and you talk about AI red teaming, which is so important. But instead of having it be done by the companies that make the models, having it, I presume, done by a community of people. AI contextual evaluations, what's that?

Rumman Chowdhury [00:24:01]:
Contextual evaluation is actually a phrase coined by my colleagues Reva Schwartz and Gabriella Waters. They were both actually previously at NIST and now run their own consultancy called Civitas. Contextual evaluations really mean: how do we give a test of a model that understands the context in which it will be used? I don't mean to use the word in the definition. But, for example, if I am a car company and I want to build a voice-activated AI system in the car to help people, whatever, get directions or find the nearest gas station, how do I do an evaluation of that that's not just some sort of generic evaluation? So, things you might want to think about in that situation: how does the AI give an answer that will be correct and not lead somebody to an unsafe place or distract somebody when driving, right? These are very specific things that the very generic and superficial testing tools put out in Silicon Valley today really don't answer. So they don't answer their questions. I do a lot of work with companies.

Rumman Chowdhury [00:25:07]:
And these are all not tech companies. These are companies trying to use AI: banks, insurance companies, et cetera. And zero of them have told me that they have found the tools being built in Silicon Valley to be useful for them. So they just do all of their evaluations in-house. They try their best to do it themselves, which is not a formula for success.
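
[Editor's note: a toy illustration of a contextual evaluation in the spirit of the in-car example above. The criteria, the get_assistant_reply() function, and the station list are all invented for the sketch; a real evaluation would be designed with domain experts and real ground truth.]

MAX_WORDS_WHILE_DRIVING = 30  # illustrative proxy for driver distraction
VERIFIED_OPEN_STATIONS = {"Shell on Main St", "Costco Gas"}  # made-up ground truth

def get_assistant_reply(prompt: str) -> str:
    # Hypothetical stand-in for the in-car voice assistant being tested.
    return "The nearest open station is Shell on Main St, two blocks ahead."

def contextual_eval(prompt: str) -> dict:
    # Score the reply against context-specific criteria for the driving
    # scenario, not against a generic benchmark.
    reply = get_assistant_reply(prompt)
    return {
        "short_enough_for_driving": len(reply.split()) <= MAX_WORDS_WHILE_DRIVING,
        "names_a_verified_station": any(s in reply for s in VERIFIED_OPEN_STATIONS),
    }

print(contextual_eval("Find me the nearest gas station"))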

Leo Laporte [00:25:31]:
So they feel there's risk. I mean, you have AI red teaming, AI contextual evaluations, and bias bounties, which are, I presume, challenges to find bias in these AI models.

Rumman Chowdhury [00:25:42]:
That's right.

Leo Laporte [00:25:44]:
Yeah. So all of this really would be under the rubric of AI safety, yes?

Rumman Chowdhury [00:25:48]:
Yeah. So that's a tricky term. Yes, it is a tricky term. Yeah. Well, because there's a lot of, like, you know, in-the-family fighting between responsibility, safety, governance. And, you know, sometimes the word— I don't mind the word safety. I think it's fine. But for some people, it's coded as existential risk, which means there's a community of people that say, you know, AI has a 25% chance of killing us.

Rumman Chowdhury [00:26:14]:
And again, it very much anthropomorphizes the AI, uses language like manipulation. It talks about things like bomb threats and scenarios. And frankly, from my perspective, I think sometimes that narrative is somewhat intentional, somewhat naive and privileged, and distracts from the real harms we are seeing today, because we are busy speculating on future harms that may not even be possible, right? So today, what do we have? We have algorithms that deny people jobs, that unfairly accuse them of crimes, that are used for surveillance. We know those are actual harms that happen. And instead, an overly significant part of this community's funding, brainpower, and policy is spent spinning on Terminator stories of AIs gone rogue.

Leo Laporte [00:27:05]:
Yeah, we've said this many times. This is one of Jeff's favorite drums to beat. It's actually the flip side of the same coin as moral outsourcing. On the one hand, you say, well, it wasn't us, it was the AI. On the other hand, you say, but this AI could kill us. It's all kind of the same.

Jeff Jarvis [00:27:22]:
One thing you say, Rumman, that I think is so important is that when you blame the AI that way, you take away our agency, which goes back to what Father Ballecer said, right? And it acts as if we're powerless, that the AI is just gonna take over everything and there's nothing we can do about it.

Leo Laporte [00:27:38]:
And these companies want it to be that way. There's a payoff for them.

Jeff Jarvis [00:27:43]:
There's a hubris to it. There's an extreme hubris to it. I'd love to hear you riff on the hubristic notions they add of general intelligence, of superintelligence. It's not enough to say that you're as good as humans.

Rumman Chowdhury [00:27:58]:
Superhuman intelligence, superhuman, Ubermensch intelligence. I don't know.

Jeff Jarvis [00:28:05]:
Bingo. That's exactly where it goes. I happened to send you, and I also sent Leo, a paper this last week that Yann LeCun was one of the co-authors of, arguing against this notion of generality: that humans aren't general, that we are good at some stuff and crappy at other stuff. But this idea that these people are so smart, they can build the machine that is smarter than all of us. Is that a new plateau in this notion of intelligence as privilege and power?

Rumman Chowdhury [00:28:32]:
So both Yann and Fei-Fei Li have raised money for their startups, and I think this is what Sara Hooker is doing as well. Sara just raised $50 million for a startup called Adaption, which I cannot claim to know anything about, but it sounds like what Fei-Fei and Yann have been talking about, which is building world models, right? So their argument, and Yann is making this— I love Yann, he's hilarious. I think everything he says is always correct, and he is not afraid to offend people. And when I say offend people, I mean, you know, fight the powers that be, not people like us. But he's always very, very correct in what he says— is that there is a belief in the general populace that these models are just linearly improving over time, and actually they're not, right? So the newer versions of ChatGPT are better in some ways and worse in other ways than previous models that came out. So it is not true that models are simply linearly, exponentially improving, and all you've got to do is give them more data and more energy and, dot, dot, dot, they'll solve all of our problems.

Rumman Chowdhury [00:29:31]:
So he argues, and Fei-Fei argues, that you need world models, which are AI models that do more than just absorb specific language knowledge. They need to understand the world around us. This could be vision, it could be voice, it could be lots of different things. So I don't know; we don't have a world model yet, but this is what they're betting their careers on. And given that, you know, they are the, quote, godmother and godfather of AI, I figure they know what they're talking about. So I find that very interesting. And I was on this debate show 2 weeks ago discussing whether AI will take our jobs. And that was the point that I made: that actually these models are not simply linearly improving.

Rumman Chowdhury [00:30:11]:
And we may have actually gotten pretty close to saturation of the capabilities. And right now, really, the bait and switch that's happened in the last year is that the models have not been improving. What's been happening is they've shifted focus from building foundation models to building applications. So you may see, if you have Google, right, all sorts of new AI stuff dropping every week. So yes, they're building Gemini, but now they're actually saying, let's take the Gemini that exists today and build these little tools, which is not necessarily a bad thing. But again, this is not this world of super Ubermensch intelligence that's going to, you know, sit at your desk and drink your coffee and take your job. That is a very, very different world we're talking about here.

Leo Laporte [00:31:00]:
That's kind of what Jensen Huang was talking about on Monday, right, Jeff? That we've moved into the age of inference, that we've moved away from the age of building models, and now it's about what the models can do. You've said, Rumman, that part of the problem is that the people in charge of all of this, well, the companies are making it, so they obviously have a dog in this hunt. They have an ax to grind. Government, and you've worked in government, so you know, doesn't really have the technical capability to understand what it's doing when it's regulating this. And then you also point out that we in the public really don't have any way to measure any of this. You know, it's a little bit of a black box for us. What's the solution to that? It sounds like nobody knows what's going on, or nobody's decided to do anything about it.

Leo Laporte [00:31:55]:
Maybe that's better.

Rumman Chowdhury [00:31:56]:
Uh, yeah, well, okay, so I can talk all day about policy. It's funny, because I talk to a lot of policymakers, and I'm very heartened to see a lot of young, this would be, like, older Gen Z, you know, people interested in running for office specifically on a tech platform. I think, you know, Mamdani has really emboldened a lot of people who want to see positive change. So they are a lot more junior, but they will be the next generation of people. And all of their heads are in the right place. So I'm very heartened to see that. They may not have the wisdom or maturity yet. They'll get there.

Rumman Chowdhury [00:32:32]:
If they are smart and they surround themselves with the right people, they will get there. So I think we are going to see in the next, I would say, 5 to 10 years, which may be too slow, a sea change happening in DC that I think will, in some ways, be quite positive. But, you know, my kind of pie-in-the-sky, ambitious, shoot-for-the-stars thing: I gave my TED Talk on this idea of a right to repair. And the reason I chose that phrasing is it really appeals to, kind of, the old heads, right? This idea that if you own a piece of technology, or a piece of technology influences your life, you have a right to tinker with it and do stuff to it. And, you know, the right to repair actually is more about physical devices like iPhones and McDonald's soft-serve machines, but I do give the example of AI tractors: John Deere versus farmers, who actually learned to work with hackers and hack into their tractors, because John Deere required that you work with a licensed technician from them. And again, this is a community of people who are used to just tinkering with their own stuff, but they can't wait 3 weeks for someone to show up. Crops grow when crops grow, right? So this is a fundamental problem for them. And I think we all need to think about what our rights are as people.

Rumman Chowdhury [00:33:48]:
And it was sort of meant as a thought exercise. We've never had technology framed that way for us before. You know, when I was at Twitter, we did this exercise where we wanted to understand what it would look like to give people more ownership of their timeline. And I worked with Dr. Sarah Roberts, who's the author of Behind the Screen, which was the first book that exposed content moderators and all of the horrific things that they have to see and do just to make sure we get a sanitized internet. And we worked with her to really understand how people feel about agency and ownership. And the TL;DR is that everybody said they wanted agency, but nobody understood what that looked like. No one could articulate what that meant.

Rumman Chowdhury [00:34:29]:
And to be fair to them, we have never been given that. We have never been given ownership and agency. So what does it look like to have a right to repair? I'm not sure if I know. But I think a starting point is something like public red teaming, right, where regular people— so going back to the red teaming, we purposely do these exercises with teachers, students, policymakers. The point is not AI experts in the room. And it's to break down that initial barrier people have when they say, oh, I'm not an AI expert. Great, but you're an expert in being you. You're an expert in being a teacher or being a multilingual sociologist or a cultural expert.

Rumman Chowdhury [00:35:04]:
That's what we need more than more tech people in the room. So that's a good starting point.

Leo Laporte [00:35:09]:
A bug bounty for social harms, kind of. Well, exactly.

Rumman Chowdhury [00:35:13]:
And that's kind of what we did with NIST. We did a project with NIST called ARIA. And what we did was ask literally anybody in America to go onto our platform and evaluate GenAI models. And that information went to NIST to inform their standards development. And when I say that, I literally gave it to the guy who manages my gym. And by the way, he was super interested in it, because he's like, hey, I have, like, a side hustle where I make websites, and I'm really worried that AI is going to come take my job. I really want to do this.

Rumman Chowdhury [00:35:42]:
So when I say everybody— and the thing is, that's the dirty secret. Everybody can interact with it. But this mythos around it, this "we're too smart for you and the technology's too complex," it's all on purpose, to make us not feel like we deserve ownership.

Leo Laporte [00:35:58]:
Yeah, pay no attention to the man behind the curtain. Robert, you wanted to say something?

Fr. Robert Ballecer [00:36:02]:
Yeah, I was wondering. So back in 2023, at DEF CON in the AI Village, two things really struck me from the final analysis that came out of the event. First, the recognition that sometimes closed models are required for security and intellectual property, but that the creators need to provide transparency on capabilities. And I'm wondering how much of that you're seeing. Do you actually see the creators of these foundational models explaining what it is that they want their model to be able to control, what they want it to be able to do? The second part was, and I'm sorry if I'm not remembering this correctly, you were talking about the democratization of desirable behavior, that that was absolutely something that needed to come out of the red teaming. We need to be able to get together and, in making policy, decide how we are going to regulate the reward behaviors of these models. How much progress have we made since 2023?

Leo Laporte [00:37:04]:
And does the Anthropic Soul document do the job? Yeah. No, so I'll work backwards.

Rumman Chowdhury [00:37:10]:
No.

Leo Laporte [00:37:11]:
I had a feeling you might say that, but since you asked.

Rumman Chowdhury [00:37:17]:
I am very cynical, if you've not gathered, at the intentions of the people who simultaneously are going to be billionaires in building the technology and yet proclaim to also be the public philosophers who will cure it all and save humanity. Am I right? Okay, so you're gonna point out the problems but not do anything about them. But, but I want to talk about your question. So first is, you know, this, this idea of closed versus open. This is one of the reasons why we need an independent community of evaluators, right? So think of literally any other industry that is impactful— finance, education, airline safety— they too protect intellectual property, right? But if you are a licensed evaluator, let's say a financial auditor, right? You have this license you get. You have professional standards. You are allowed access to things that a regular person off the street would not have access to. You have guidelines in which you can do this testing.

Rumman Chowdhury [00:38:15]:
I mean, we do this in healthcare as well, right? So this is not a completely— like, tech likes to think, you know, this is the first time anyone's thought about blah. It's not, right? We've created institutions, professions, and systems to protect IP while also enabling independent evaluation. This is why that independent community is needed. I think the public red teaming is a great tool for awareness raising, for people to get demystified, to learn how the technology is being used. But if we want to talk about improving these models, writing good regulation, really understanding performance and harms, that is a different animal. And this is, again, with the for-profit, why I want to build this infrastructure. We need people who are skilled in doing this. We need a way of understanding their expertise and giving them access to it. This could be legal protections, legal access to it.

Rumman Chowdhury [00:39:04]:
It could be professional certifications. But this is why you need the profession. And then the second, on democratizing— sorry, what was the phrase again? You said it.

Fr. Robert Ballecer [00:39:14]:
Desirable behavior. Yeah. Democratizing desirable behavior.

Rumman Chowdhury [00:39:18]:
Yes. So, because you can tell none of these people care about philosophy or social sciences or anything like that, there is this very arrogant notion that they can arrive at this universal good. And I always find it really funny when people are trying to make these models and they claim they will have this constitution or these universal values. I have actually heard people say, like, oh, obviously we all believe X. We actually don't. We don't all believe X. It's actually very, very hard, if not impossible, to come to, even if you think about the most fundamental universal value one might argue for, which is that, okay, well, human beings say we shouldn't kill other human beings.

Rumman Chowdhury [00:40:01]:
Don't we, though? Don't we have the death penalty in the United States? We do have state-sanctioned killing of human beings. We've actually said it's lawful and okay. So we don't universally think that it's wrong to kill other people. And one would argue that would be the most fundamental thing, right? The most fundamental thing that we could theoretically say we universally agree on, and yet we don't. So, you know, there is this arrogance in this idea that we can come to this universal list of values. You know, one of the things I love to laugh about is one of these benchmarks is called Humanity's Last Exam. Yes.

Rumman Chowdhury [00:40:32]:
What a dramatic— when you look at it, it's a bunch of, like, physics and math questions. And I'm like, opt me out of humanity's last exam. It's very tech bro, isn't it? Yeah. Yeah. And by the way, there was this counterpaper by a bunch of people that, you know, was trying to make a benchmark for, quote, universal values. And I remember the first thing I went to was: what did they consider to be global historical knowledge? And it was Europe, America, Asia, and Other. Cool, cool, cool, cool, cool. Yeah, yeah.

Rumman Chowdhury [00:41:11]:
If you live in Other, you might not agree. No, like, the literal cradle of civilization, which is the Middle East and Africa, is Other. But Europe gets its own. America, which is the youngest of all of the nations, gets its own. But yeah, so this is what these people come up with.

Leo Laporte [00:41:32]:
We're really glad we could spend some time with you. I wish we had more time. Rumman Chowdhury, I look forward to your book, but I know you have 50,000 words to write by August. Painful. So I don't want to overstay our welcome. Can I ask one more timely question, Leo? Yes, please.

Jeff Jarvis [00:41:46]:
So Leo and I watched Jensen Huang's keynote. I'm a connoisseur of the showmanship of it every time he does it.

Leo Laporte [00:41:56]:
And we're going to talk about it later in the show too, of course.

Jeff Jarvis [00:41:58]:
Right. And so at the end, he went gaga over OpenClaw. And I'm curious to hear about that, but I was thinking about this: in my world, in media, we went from a world where you couldn't make media unless you had the tools of production and distribution, unless you had the capital, unless you had the equity to do that. And what the internet has obviously done is it means that we can all entertain ourselves, we can all make media, the culture makes itself, fashion determines itself, and I celebrate that immensely. For all its harms, I think it's better. The internet ended up being top-down in a lot of ways, corporate, right? Just as happened to media. Along comes AI, and your fellow author in the series, Charlton McIlwain, surprised me with a surprisingly optimistic view, having written the book Black Software about the oppression the technology caused in Black America. He sees an opportunity to break out of that.

Jeff Jarvis [00:43:03]:
Now, so I'm finally getting to the point of OpenClaw. Does this mean that we can all make technology, just as we can all make media on our own, we can all make creativity on our own now? Is it possible, to not be overoptimistic, that this opens the door for us all to make our technology now? Is that a step to give us all more agency, even though the models have to be made by the big boys, and they're all boys except Fei-Fei? But is there an opening here, that the technology gives us the chance to take it over?

Rumman Chowdhury [00:43:41]:
Well, and this is where right to repair comes in. I fully agree with you. I think that has to be intentional, because the tech companies will not frame things that way, right? We need to do that for ourselves. And just as an example, my partner has been messing with making, like, an IoT system for our house, but one that's done in a way where we're locally hosting all of our own data, so that we're not sharing information. We don't have, like, a Ring doorbell; right now we have, let's say, SimpliSafe, right? So instead of that— and the thing is, all of the tools now exist to do that. And my partner, who by the way is an architect and not a programmer, but who has always had, like, a passive interest in technology and IoT and automation, you know, out of necessity because we move around a lot— we are actually able to do that today in a way that we were not able to a few years ago. So just as one example.

Rumman Chowdhury [00:44:37]:
But again, no one's going to sell you that, right? So either we have to raise awareness among people that you can go do this, or create a counter-movement to provide that service and give people that world. Because, like I said, AI and this new wave of technology was not given to us the way the internet was. The internet was given to us as a tool of free use and democratization. Algorithms and this new wave get savvier and savvier, and every time they consolidate more and more power and wealth, and they're not going to give that up randomly. We can make a counter-movement that maybe is designed around things like right to repair, where we can just do stuff like this. I was telling her that she should build a side hustle. I think it would go over well in places like New York, right, where you just do this for people. Somebody pays you a bunch of money, and you're like, I'll buy you a server and a bunch of Raspberry Pis and set up a dashboard.

Rumman Chowdhury [00:45:28]:
And like, there you go. You can monitor your whole house and not one bit of that data is going to go to OpenAI or Amazon or anybody else.
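
[Editor's note: a minimal sketch of the local-first setup described above, assuming a hypothetical read_sensor() in place of real hardware. Readings land in a SQLite file on a machine you own, a Raspberry Pi, say, and nothing leaves the house; a local dashboard could then query the same file directly.]

import random
import sqlite3
import time

def read_sensor() -> float:
    # Hypothetical stand-in for real hardware, e.g. a GPIO temperature probe.
    return 20.0 + random.random() * 5

# The database is just a file on hardware you own; no cloud service,
# no OpenAI, no Amazon, ever sees these readings.
conn = sqlite3.connect("sensors.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, temp_c REAL)")

for _ in range(3):  # a real deployment would loop forever as a service
    conn.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), read_sensor()))
    conn.commit()
    time.sleep(1)

conn.close()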

Jeff Jarvis [00:45:36]:
Leila, that's your new business.

Leo Laporte [00:45:38]:
Maybe. We should. I love it. Basically, it sounds like your general argument is for human agency in all of this: not to let the companies that are creating this stuff take that from us, and not to assume that it's a black box that we cannot have any understanding or agency over, but in fact to take that back, just as we have, or are trying to, with right to repair. Is that fair? Yeah, absolutely.

Rumman Chowdhury [00:46:06]:
I think what's paramount to all of this is the ability for people to choose their path in life. Like, I may not agree with the way somebody— maybe somebody does want to give their data to Amazon. I don't know. I don't care. But we don't have a market with choice right now, and our choices are getting fewer and fewer. I want to create a market where we actually have choices, where we can act on our values, because that's what a lot of people are expressing. They have particular values about their personal information, data, even passive data like your Ring doorbell, and how that's being used in ways that they are not okay with. Right.

Leo Laporte [00:46:39]:
I look forward to your book. You better get writing. Well, I really look forward to it. Your editor is sitting right here, and he seems nervous. No, no, this is very exciting. Tell us again the name, the working title. You can change it; we're not holding you to it.

Rumman Chowdhury [00:46:53]:
Oh, I don't know, I really don't know. Maybe it's something like The New Intelligence: Critical Thinking and Cognition in the AI Era. The other one I've been working with is Measuring Minds. That is the title of the first chapter because that's what all of this attempts to do and do poorly.

Jeff Jarvis [00:47:10]:
Yeah, our publisher Haaris Naqvi is very good at titles. So he'll have it.

Rumman Chowdhury [00:47:14]:
He'll have it. Oh, good. I will. I'm actually very, very bad at naming things. When I built the first enterprise bias detection and mitigation platform, I just called it the Fairness Tool, because I'm like, it makes things fair.

Leo Laporte [00:47:27]:
So, well, thank you for the work you did at Twitter. I'm sorry that Elon didn't think it was important. But hey, you know, it's worked out well for you, right? You're probably better off, to be honest. Thank you so much for being here.

Rumman Chowdhury [00:47:42]:
Thanks so much for having me.

Leo Laporte [00:47:44]:
Thank you. Really look forward to the book. We'll have you back when the book comes out. That's what will happen. Yeah. Yeah, if not sooner. You have one definitive future reader. Yes.

Leo Laporte [00:47:55]:
And really, I really support what you're saying, which is we need to fight for our own agency in all of this. We can't let the frontier labs and the hyperscalers dominate this just because we don't understand it. It's not going to work. It's not enough. And it's our reality too. Government isn't the solution either, unfortunately. Maybe it will be in the future with a younger crew, but not now. Thank you, Rumman.

Leo Laporte [00:48:21]:
All right, we'll have more of Intelligent Machines in just a bit. Yay! Lovely.

Fr. Robert Ballecer [00:48:30]:
Is this what you've been doing on the show? This is excellent.

Leo Laporte [00:48:36]:
Oh, you haven't listened.

Fr. Robert Ballecer [00:48:37]:
Hey— oh, I heard one episode, like, a year ago.

Rumman Chowdhury [00:48:45]:
Do you want to hear like an interesting story that my friend Serafina told me? Uh, she's the—

Leo Laporte [00:48:50]:
Before you say it, I want to let you know we're still on the air. Oh, it's not part of the podcast, but we stream live.

Rumman Chowdhury [00:48:56]:
No, no, it's totally— it's actually an interesting story, and it's going to be at some place in my book. It may actually even be in the introduction. So, okay, you know, we worry a lot about young people and overreliance on technology and, you know, critical thinking, et cetera. Do you know that Socrates, in Phaedrus, was very, very concerned with the advent of writing? Because to him, memorization was the mark of intelligence, and he was concerned that all of his students would become stupider, because now we have this thing called writing, and we have, like, freely available paper, and they're no longer going to memorize. So it's interesting, because again, as I work on things like AI in education, people spout their fears about critical thinking and overreliance. And, you know, as somebody who measures things, what I think about is: the existence of overreliance presumes the existence of the appropriate amount of reliance, which means there's underreliance. But nobody can tell me what appropriate reliance is, because they benchmark it on themselves. But, like, our parents all told us we watched too much TV or sat on our computers too long.

Rumman Chowdhury [00:50:02]:
Right? Like, you know, any of us who have kids probably tell our kids they're on their phones too much. You know, like, that is just, you know, parent to child, like, how that goes. And we all worry the next generation is getting dumber, and maybe they are and maybe they aren't, or maybe intelligence is just shifting, right? Because that was— this is Socrates here.

Leo Laporte [00:50:19]:
This guy knew what he was talking about, right? I think Socrates is right. You should start memorizing all that stuff. Right away, stop writing it down.

Rumman Chowdhury [00:50:29]:
Stop, stop writing. No computers, no phones. No more writing. And it's funny because how many of us memorize phone numbers? I could tell you my childhood phone number, but I couldn't tell you like— Exactly.

Jeff Jarvis [00:50:39]:
How many people don't know their own phone number because you don't need it?

Leo Laporte [00:50:42]:
Right.

Fr. Robert Ballecer [00:50:42]:
Or I don't know what a phone number is.

Rumman Chowdhury [00:50:44]:
What is a phone number? Phone number? We number phones?

Jeff Jarvis [00:50:53]:
By the way, just an aside: Scientific American has a very good piece today on the "kids today" thing, arguing against that and saying the kids today are in fact in good shape, and it brings data to it. Good.

Leo Laporte [00:51:03]:
So I hope that the future is good. I hope that's true. Take care, Rumman. By the way, I love the hat. I was looking it up. It is not the IEEE's ISTO. It's actually Portuguese. It is.

Rumman Chowdhury [00:51:15]:
It's a— well, it's a sustainably sourced B Corp in Portugal. Um, and they make amazing organic cottons and linens, et cetera. So I want to support a local sustainable business, and my "I will quit tech and do something else" job would be to open a textile shop in Lisbon. Cause I have found joy in tangibles the more I work on intangible things. People are out here saying they want to open a bakery. Like, A, I am not waking up at 4:00 AM to make croissants, and B, I am not dealing with the 9:00 AM coffee rush. Absolutely not. What I'm going to do is open up a shop where we sell beautiful linens and cottons and fabrics and ceramics, and only the people who I want to come in will come in.

Leo Laporte [00:52:00]:
But I just want to let you know, if people ask, you could say it stands for the Industry Standards and Technology Organization.

Rumman Chowdhury [00:52:06]:
I can. Or I can get people to buy organic, sustainable cotton.

Leo Laporte [00:52:13]:
Thank you, Rumman. Take care. Bye-bye. Thank you. I looked up ISTO and that's what I found. Wrong ISTO. Exactly, exactly.

Leo Laporte [00:52:27]:
We're going to do an ad. I've been installing Nemo Claw during the interview and I'm ready to load the claw.

Rumman Chowdhury [00:52:34]:
Oh yes. All right, guys, I'll see you later.

Leo Laporte [00:52:36]:
See you. Thanks for watching.

Fr. Robert Ballecer [00:52:38]:
Pleasure to meet you.

Leo Laporte [00:52:42]:
Very interesting stuff. We'll have more Intelligent Machines and our special guest, Father Robert Ballecer, filling in for Paris Martineau, in just a little bit. Our show today brought to you by my domain registrar, spaceship.com. spaceship.com/twit. Remember when Paris wanted to do a website, Secretly British, and we registered a domain, secretlybritish.sh? Well, I did it at Spaceship because it was so easy, plus we had searched around and it was also the best price. If you've heard us talk about Spaceship before, there's a reason it keeps coming back.

Leo Laporte [00:53:19]:
Spaceship is now really one of the fastest-growing domain registrars in history. It's because Spaceship is rethinking how people register and manage domains. Its fresh approach has now led to 6.5 million domains and $1 billion under management in record time. We just started talking about them a few months ago. That kind of growth comes from, well, I guess giving people what they actually want at a fair price. Spaceship offers transparent, low pricing on domain registrations. By the way, if you're somewhere else, move your domains over. Their transfer pricing is fantastic.

Leo Laporte [00:53:56]:
Their renewal pricing is fantastic. This means there's more clarity over what you're paying for over time. It's so often the case that it's a dollar for the first year and $1,000 for the second year. Not at spaceship.com. Alongside great value, the platform is especially built for flexibility. You can instantly connect your Spaceship-registered domains to Spaceship products. We clicked a button and Secretly British had an email address. You get web hosting if you want.

Leo Laporte [00:54:23]:
We haven't— she hasn't set up her domain yet, so I pressed a button that connected it to her existing domain. But when we have a website for it, it'll be very easy to host it on Spaceship. That professional email is first-rate, even virtual machines. So a great place to host your OpenClaw, for instance. And you can build and test before committing, because almost every Spaceship product comes with a 30-day trial. But if you prefer third-party tools, don't worry, no problem. Just point your domain to what you need by updating your DNS records or name servers. It's easy to do, and actually, they have a nice little AI called Alf that can do that for you.

Leo Laporte [00:55:01]:
So now you have the freedom to build your stack exactly as you want, because they know this is what we geeks want. It's basically the best of every world. Visit spaceship.com/twit to learn more. That's spaceship.com/twit. We thank them so much for their support. Um, OpenClaw. Let's see, I installed it. I needed Docker.

Leo Laporte [00:55:27]:
During the— we were watching the keynote on Monday, you, me, and Micah Sargent, Jeff Jarvis. The keynote. What did I say?

Jeff Jarvis [00:55:36]:
Just, you said the keynote. As if the keynote.

Leo Laporte [00:55:38]:
Yeah, right. The keynote. You know what? And I will stand by this. As I'm watching Jensen Huang masterfully spend two hours and some minutes describing all their products, I said this: there is no CEO in technology today that can kiss the hem of his robes.

Jeff Jarvis [00:55:58]:
Who has the mastery of his topic.

Leo Laporte [00:56:00]:
Yeah, and boy, is that company doing all the right things. So one of the things he talked about is the fact that OpenClaw is the fastest-growing open source project in history, more stars than Linux in just a few months. And he said, and so we're gonna support this with an enterprise-focused, safe OpenClaw using something called OpenShell. It's installed a bunch— you can see my screen, it's installed a bunch of stuff here. OpenShell CLI. Uh, apparently— I don't have— it says NIM requires an NVIDIA GPU. Oh, of course, but we can use cloud inference.

Jeff Jarvis [00:56:40]:
Well, you can buy that now.

Leo Laporte [00:56:41]:
There's a new Dell machine that has it for a small cost. Anyway, I clicked the link. It does say security risk because it's HTTP. Whoops. This is it. What am I seeing?

Jeff Jarvis [00:56:54]:
What is this?

Leo Laporte [00:56:55]:
What is this? That's not— I mean, let me go back to localhost. Hold on a second. That's weird. Is that what they wanted to show me? Says warning. Oh, I pressed go back. No, no, no. I want to go forward. Accept the risk and continue.

Leo Laporte [00:57:09]:
And now, ladies and gentlemen, nothing. Excellent. I was gonna introduce you to my new Nemo claw.

Jeff Jarvis [00:57:18]:
Well, go to Advanced or go to View Certificate, right?

Leo Laporte [00:57:20]:
That's you. No, no, this is it. Accept the risk and continue. I could view the certificate, the OpenShell server. I, I think it's running in my Docker.

Jeff Jarvis [00:57:29]:
So anyway, so what makes it safe, Leo?

Leo Laporte [00:57:32]:
Explain that. It's— well, it's running in Docker, which most people recommend you do with OpenClaw anyway, because if you're running it in a container, you're a little bit safer, although Docker can be misconfigured easily to be not safe, right, Robert? Oh yeah, absolutely. We just did that last week. So yeah. So that was one of the things. They call it OpenClaw with guardrails.

Leo Laporte [00:58:00]:
I think this is— you know, we've said this before: OpenClaw showed that this is the year of agentic AI. In fact, I thought that was the thing that was very interesting, and I mentioned this when we were talking to Rumman, and we certainly noted it during the keynote: Jensen Huang saying it's the era of inference now, that it's about what you do with these things, right?

Jeff Jarvis [00:58:24]:
And it was a good thing that he, he bought into Groq with a Q. Oh yeah, those chips.

Leo Laporte [00:58:30]:
Wow, those chips, very important to what they're doing. You kind of rolled your eyes, Robert. I mean, yeah, we're still building models, right? Aren't they still building models?

Fr. Robert Ballecer [00:58:42]:
They, they want to get us into this post-foundational era of AI, but we're not there yet. Yeah, uh, yeah, I understand why they want to though, because all the new money-making applications seem to be in inference. Oh, uh, that, that was CES. CES's AI booth and the, the part of the West Hall was all about inference, uh, using inference in driving, using inference in home appliances, using inference in security. So yes, I get the push because they see that as an untapped market. But, um, if, if you look at the sustainability of the current inference model, it's not there. I, I don't think it's nearly as profitable as they think it's going to be.

Leo Laporte [00:59:24]:
Jensen did take a little bit of a victory lap. Here's the picture. I said this would be the picture. This is the picture of him with his WWF belt or something. He said, and this spiked the stock briefly until people thought about it, that Nvidia was poised to sell $1 trillion worth of Blackwell and Vera Rubin chips next year. Trillion, up from $500 billion.

Jeff Jarvis [00:59:51]:
Is inference just another word for application? And does this mean that this is an effort to get the industry going into retail channels? So, yeah.

Fr. Robert Ballecer [01:00:00]:
So when we talk about the foundational model, it's— that's the traditional, we're going to shove a bunch of data into this training and then we're going to get something out that we can use. The inference model is sort of continuous training. So we, we now deploy a foundational model, but it starts to learn through its interactions with the real world environment and it goes back into the training.

Leo Laporte [01:00:22]:
So, and to be fair, Claude Code, which is, you know, one of the hottest things in AI right now, is basically an inference machine, right? Yeah. Yes. Yeah. Actually, one of the things I've been really thinking about lately— you know, my goal with Claude Code is to basically replace myself. Don't do that. No, Leo, no. Well, actually, look, the reason I started Twit in the first place, the whole point of this, was for me just to do the stuff I like and have other people do the stuff I didn't want to do, like edit the shows and produce the shows and technical direction. I just want to walk.

Leo Laporte [01:01:00]:
My goal was from day one, 20 years ago, just to walk in, sit down at the microphone, turn it on, do a show, get up and leave. Be done with it. But instead, I've created what Cory Doctorow calls a reverse centaur, which is, basically, AI is making it more work for me, not less. So he talks about a centaur: a computer is like a centaur, which is a human-machine beast instead of a human-and-horse beast. It's a human-and-machine beast where the horse does the carrying and the human gets to be on top and look around. So the work is being done by the bottom part of the centaur. With a reverse centaur, the human's doing all the labor, doing all the work, while the AI's sitting there looking around. And I kind of, in a way, created that with my workflow, because now I have to spend hours every day going through stories.

Leo Laporte [01:01:51]:
Admittedly, once I have the stories, it generates the rundown and does a lot of the busy work. But I realize the piece that's missing is I want it to— this is the hard part— somehow encapsulate my editorial judgment. Now, in the past, you would maybe train a model for that, but I'm thinking I can create a small language model based on a bigger model. The bigger model has all the language capabilities, and in the small model, train it to have my editorial judgment. You think that's crazy, Robert? It's not crazy.

Fr. Robert Ballecer [01:02:25]:
I see an exceptional expansion of requirements for power and other resources because if you are using a small model and then training it with inference, you are constantly going back and you have to retrain, re-tokenize your data. Otherwise, it's not really truly learning from the inference data that you're giving it.

Leo Laporte [01:02:50]:
Oh yeah, you're right. Well, one of the things that we were looking at doing is using some sort of maybe a Bayesian system or something to train it, using articles I didn't choose and articles I did choose. I have now a pretty big database of articles I looked at and didn't use and articles I looked at and bookmarked, and I'd train it with that. Actually, Darren Oakey suggested something, kind of an exotic technique that he says is working really well for him. Something called, what was it? SLM? Do you remember? SLV, I think it was. Some sort of— he said he tried— SLM, he says. SVM, that's it. He had tried Bayesian and other statistical tools.

Leo Laporte [01:03:40]:
Linear SVM. Same idea, I guess. Anyway, I'm going to try— I'm going to play with that. But the point— my point being, that's kind of inference. That's like, I'm not going to train a big model, that's already done. I'm going to— I know it's not exactly training, but is it way—
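
A minimal sketch of the linear SVM approach being described, assuming scikit-learn; the articles and labels below are toy stand-ins for the real picked/skipped database:

```python
# Classify articles as "picked" or "skipped" from past editorial choices.
# The training data here is hypothetical; in practice you'd load the
# bookmarked and ignored articles from your own database.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for the real article database.
articles = [
    "Nvidia announces new inference chips for data centers",
    "Celebrity spotted at coffee shop",
    "Anthropic ships agentic coding tools for the enterprise",
    "Top ten beach vacation spots this summer",
]
labels = ["picked", "skipped", "picked", "skipped"]

# TF-IDF features plus a linear SVM: cheap to train, and easy to
# retrain as new picks accumulate.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(articles, labels)

print(model.predict(["OpenAI releases smaller, cheaper models"]))
```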

Jeff Jarvis [01:03:58]:
I think Robert explained it really well. It's an ongoing— it's a never-ending—

Leo Laporte [01:04:01]:
but that's fine because it's always going to get better. I mean, it would end eventually, I guess, if it somehow said, oh, I get Leo, you just like this kind of stuff and don't like this kind of stuff.

Jeff Jarvis [01:04:10]:
Every time you tell it what you want, you are training it to know what you want, right?

Leo Laporte [01:04:14]:
And at some point, even it's possible to conceive of a time when it's done, but maybe not.

Fr. Robert Ballecer [01:04:20]:
One thing I do like about the inference model is it lends itself to local models. Exactly— to individual, cheaper, local models. Exactly. I mean, the reason why it was pushed so hard in the automotive section is because they were saying, look, we want to create a model for full self-driving, but it learns your style of driving. It learns how you drive, not just how everyone drives. And we don't want your driving to affect the model for other drivers.

Leo Laporte [01:04:49]:
We don't want— because right now the Tesla drives like Elon does. Yeah, exactly. And he rolls through stops. That's not good. Actually, you know, BMW has announced this new NOY-class model on their new i3, which they just announced yesterday. They say the whole point of the self-driving is, we're going to learn your style. So they're on top of this.

Jeff Jarvis [01:05:10]:
What if you are a bad driver?

Leo Laporte [01:05:12]:
Well, then you're a bad driver.

Jeff Jarvis [01:05:15]:
Shouldn't it correct for you?

Leo Laporte [01:05:16]:
Yeah, well, I think it will keep you from running stop signs, and it'll stop at stoplights even if you maybe wouldn't. But perhaps you're more aggressive about lane changes, or less aggressive. My Tesla was always more aggressive than I would be. I wish it would learn: no, don't change lanes if there's somebody 100 feet behind. Don't scare my wife.

Jeff Jarvis [01:05:33]:
Yeah. Did Jensen Huang's announcements about yet more auto companies he's working with on self-driving, does that torpedo Tesla and Musk?

Fr. Robert Ballecer [01:05:44]:
To a great extent. At this point, you can't really torpedo Tesla and Musk because they are in such a bubble.

Leo Laporte [01:05:51]:
They're self-torpedoing.

Fr. Robert Ballecer [01:05:52]:
Yeah, there's such a bubble that they can torpedo themselves. That's about it.

Jeff Jarvis [01:05:56]:
Only they can torpedo themselves. That's right. Yeah. Yeah.

Leo Laporte [01:05:59]:
Groq, which was a multibillion-dollar acquisition— acquihire, really.

Jeff Jarvis [01:06:04]:
Groq with a Q.

Leo Laporte [01:06:05]:
Since we just talked about Musk— the Groq with a Q, for NVIDIA, is a server chip. They licensed the technology; they didn't actually buy the company. It's designed to make AI servers more cost-efficient for things like AI coding— for inference, in effect. And the Groq system will begin shipping in the third quarter of this year, according to Huang. And it's going to be made by Samsung, which was kind of a surprise. I thought that NVIDIA was a big TSMC client. I think they still are, but—

Jeff Jarvis [01:06:36]:
They've just maxed them out. Yeah.

Leo Laporte [01:06:38]:
Samsung is going to be making these. It's not a GPU. Groq integrates memory onto the chip. It's really built to do this kind of—

Jeff Jarvis [01:06:48]:
speed up this kind of communication within.

Leo Laporte [01:06:50]:
Yeah. Then the other thing they announced, DLSS 5, which really I don't think is important, but it really made people upset. Certain people who make certain things. Gamers really didn't like it. The idea was, it takes existing assets in a game and locally, you know, pretties them up. Somebody— remember when we did this to TVs, how much people loved it? Oh, you think it's like frame interpolation? Basically, yeah, it's the same thing.

Fr. Robert Ballecer [01:07:23]:
It's, it's, it's creating something out of limited information. So maybe you get a couple of frames that look great, but most of the time you're going to be going, huh, meh.

Benito Gonzalez [01:07:33]:
My problem with this is that it changes the art direction of the game.

Leo Laporte [01:07:39]:
I think— here's an example, uh, that I think is, uh, quite good myself. This is me, actually. I think that somebody generated this on the—

Fr. Robert Ballecer [01:07:50]:
that looks like you as Tom Cruise.

Leo Laporte [01:07:52]:
Yeah, it's a fun one. It even made my eyes blue. Yeah, you see, that's exactly how I look. "Look, can you do a dissolve instead of a jump cut? I think it'll be a little more morphing into—" Well, so, and Jensen Huang was a little— actually a little pissed, shall I say, at all of this. His reaction was, "Well, you just— you just don't get it. You don't get it. You don't get it."

Jeff Jarvis [01:08:26]:
You can control this.

Leo Laporte [01:08:27]:
I'm not gonna relitigate "you can control this." Tom's Hardware asked— Paul Alcorn from Tom's Hardware asked about the criticism. He says, well, first of all, they're completely wrong. The reason for that is because, as I have explained very carefully— now, I haven't heard the recording, but I can see him saying this— as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI. Oh, well, in that case, no problem. You know what, I think Anthony Nielsen, our own Anthony Nielsen, got it right when he said it wouldn't have been so upsetting to people had he shown it with the backgrounds instead of the foregrounds. But really what bugged people was that he showed it with people.

Jeff Jarvis [01:09:16]:
So Benito, is it your fear that it takes away artistic agency, that it—

Benito Gonzalez [01:09:24]:
yeah, it changes the graphics. What's your concern? It changes the graphics in a way that— don't screw with what I made, with what the people made. Yeah, it's— it changes it.

Leo Laporte [01:09:31]:
Well, the people who would use this, I presume, are the game companies themselves, right? Or no, no, that's happening. You're thinking this is something that will be a technology in your, in your computer that you could turn on?

Benito Gonzalez [01:09:41]:
Yeah, this is so that, this is so that the companies don't have to do this themselves.

Leo Laporte [01:09:45]:
So— DLSS has traditionally been something you would turn on. It's like ray tracing; it's something you turn on. Yeah, it's enabled. I don't know if the company— show my screen, because this has some examples. It's not always, by the way, beautification. Here is, uh, from Hogwarts Legacy. It's turning that older woman into an older-looking woman. It's adding lighting and shading.

Leo Laporte [01:10:08]:
It is changing the look a little bit. Um, I don't know, I— doesn't bother me as much. Gamers historically have really been negative about AI.

Jeff Jarvis [01:10:17]:
They're a persnickety bunch. Yeah, yeah, yeah.

Benito Gonzalez [01:10:20]:
It doesn't bother me as a gamer, it bothers me as an artist.

Leo Laporte [01:10:23]:
Yeah, but if, if a game company, uh, you know, can use this to make their stuff look—

Jeff Jarvis [01:10:29]:
yeah, I mean, that's pretty— as the artist, you can use this to make it more realistic. What's wrong with that? Yeah, if it's in your control anyway.

Leo Laporte [01:10:38]:
This is one of those demos where we'd have to see it anyway. We— this is just a video that NVIDIA created, but it probably got more attention than anything else Jensen Huang mentioned.

Jeff Jarvis [01:10:49]:
Well, in certain—

Fr. Robert Ballecer [01:10:50]:
in certain quarters, certain circles. Yeah. Well, I mean, I'm, I'm willing to test it out in about 5 years when I can actually afford to buy one of their new—

Leo Laporte [01:10:58]:
well, that's another problem. So yeah, uh, nobody can afford this technology.

Jeff Jarvis [01:11:04]:
Andrej Karpathy doesn't have to afford it. He got the first Dell machine, given to him by—

Leo Laporte [01:11:10]:
that was the DGX Spark. They announced that a number of third-party OEMs are going to be able to make these machines with their chips in them. Our own Darren Oakey has purchased one for $5,000 Australian.

Jeff Jarvis [01:11:25]:
So do you have this sin of envy and covetousness? Yes, you want— you want—

Leo Laporte [01:11:29]:
I have that sin in spades, my friend.

Fr. Robert Ballecer [01:11:33]:
I mean, I've got one downstairs.

Leo Laporte [01:11:35]:
Oh, I forgot, I forgot. Robert has one from CES. Yeah, I forgot you brought one back. Yeah, you know, it's—

Fr. Robert Ballecer [01:11:44]:
yeah, it was swag, it was booth swag.

Leo Laporte [01:11:46]:
Yeah, was it under your seat when you were on the way home? You get a Spark and you get a Spark. I honestly don't want to have to need— I don't need a $5,000 piece of hardware to do this stuff. And that's why I bought the Framework, which was expensive, $3,000, but it had 128 gigs of RAM, as does the Strix Halo. I'm interested in models I can run on that.

Jeff Jarvis [01:12:10]:
That's where the retail level excitement comes. Do you know what a DGX Spark is gonna cost from Dell? I mean, they didn't put the price up.

Leo Laporte [01:12:16]:
Well, it's around $3,000 to $5,000.

Jeff Jarvis [01:12:18]:
It's $3,000 to $5,000, okay.

Leo Laporte [01:12:20]:
Yeah. And there'll be, I think Darren's was a if I remember correctly. There'll be some— several OEMs will make it. Supermicro had one that was the ugliest thing ever. It looked like a tower. It's like, you don't need to be— it doesn't need to be a tower. Yeah, he got the ASUS GX10 Ascent.

Fr. Robert Ballecer [01:12:37]:
It is, it is very, very nice to be able to do a model locally and have the firepower to basically go whole hog on it.

Jeff Jarvis [01:12:44]:
So what does that let you do that you wouldn't have done before?

Fr. Robert Ballecer [01:12:47]:
So, I mean, well, first of all, out of the box: anything artistic, anything you want to do with video or photo, that's a no-brainer. But what we've been using it for is translation models, because we deal with a lot of languages here. And at the same time, to do summarization of the conversations that are happening in different languages. It's extremely effective at that. I mean, I will not lie.

Leo Laporte [01:13:12]:
You don't need a frontier model to do those kinds of things. We don't. We don't.

Fr. Robert Ballecer [01:13:16]:
But it is, but we do need the privacy because the conversations that we have here are, are closed forum. I cannot in any way, shape, or form use cloud-based infrastructure. This lets us actually do it.

Leo Laporte [01:13:28]:
Makes sense. Yeah. Yeah. I think also a lot of us will do a hybrid thing. For instance, with Claude, you know, we'll use Opus 4.6 for the really high-end stuff, but we can use Qwen or Kimi or something else for stuff that doesn't need so much power.
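
A toy sketch of that hybrid routing idea; the model names and the route() dispatcher here are illustrative placeholders, not any real API:

```python
# Send hard jobs to a frontier model, easy ones to a cheaper,
# possibly local, open-weights model. Everything here is hypothetical.
def route(task: str, hard: bool) -> str:
    model = "claude-opus-4.6" if hard else "qwen-local"
    return f"[{model}] would handle: {task}"

print(route("refactor the billing module", hard=True))
print(route("summarize this changelog", hard=False))
```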

Jeff Jarvis [01:13:46]:
What models do you use locally, Robert?

Fr. Robert Ballecer [01:13:48]:
Oh, I don't know. I handed it over to, to our IT guy, so he's running all of our models for us. This sounds like Chinese models.

Benito Gonzalez [01:13:56]:
This sounds like running a Sun Microsystems computer circa 1995, you know?

Jeff Jarvis [01:14:00]:
Yeah, yeah, yeah, really does.

Leo Laporte [01:14:01]:
Yeah, in a couple of years it'll be, you know, a lot less expensive. It'll be a lot more affordable. I mean, honestly, I'm not sure I agree about the Blackwell ones. And I'm not sure I agree with Rumman— and I wasn't going to challenge her because she's way smarter than I am— but I'm not sure I'd agree with her that we're flatlining with LLM improvement.

Jeff Jarvis [01:14:21]:
Well, that's the discussion we have all the time.

Leo Laporte [01:14:23]:
I don't think that's at all in evidence.

Jeff Jarvis [01:14:26]:
It's the argument that Yann and Fei-Fei make: maybe we're not flatlining, but it will only take us so far.

Leo Laporte [01:14:34]:
I would point out that they are just as self-serving as Sam Altman. I mean, how much money did they just raise?

Jeff Jarvis [01:14:40]:
$1.03 billion. Yeah. So yeah, of course.

Leo Laporte [01:14:44]:
Oh, our, our way of doing it's much better than what the other guys are doing.

Jeff Jarvis [01:14:47]:
Yeah, but they had an argument that a lot of people bought. I think that— and you even have— I mean, this is a big deal and not much was made of it. I went to a debate between Adam Brown at DeepMind and Yann LeCun. It wasn't meant to be a debate; it turned into that. And DeepMind was still "scale, scale, scale— models will get us there." But Demis Hassabis has switched recently and has been talking about the need for world models, and that LLMs alone won't get us there. Of course, defining "there" is the other issue.

Leo Laporte [01:15:16]:
Right. And certainly I wouldn't argue against that. I mean, the more kinds of data, the better. But I've been thinking about this lately. 'Cause their argument is, well, you can't describe the world in text. Except isn't that how we work as humans, essentially?

Jeff Jarvis [01:15:34]:
Cats don't, is their argument. Okay, but humans do. This is the slime argument. My brain, most of the stuff that I—

Leo Laporte [01:15:41]:
when I'm thinking about something, I'm thinking about it in words, right? Words perhaps informed by my knowledge of the physical universe, but ultimately words.

Jeff Jarvis [01:15:50]:
What if you weren't limited by words? I often think about that. I have to think about— I have to translate everything into words, because that's the way I operate. But a dolphin doesn't.

Benito Gonzalez [01:15:58]:
Yeah, thought to text is lossy.

Jeff Jarvis [01:16:02]:
Sure. Yes. Okay. Yes.

Leo Laporte [01:16:03]:
So now you're saying we can make something that's smarter than humans? Ha! Trapped me, did you? So see, those words work pretty good.

Fr. Robert Ballecer [01:16:13]:
This is the limits of tokenization, and I see the technical plateau because when you're dealing with tokenization and the need to address so much storage at any given time for any given answer, we're at that level where until we get to quantum computing, we can't advance that much further.

Leo Laporte [01:16:34]:
However, I think Anthropic just gave us a million token context window, which is mind-boggling.

Fr. Robert Ballecer [01:16:40]:
Oh, absolutely. But being able to run that model as fast as we would need to— what we're doing instead is we're creating models that are specifically good at a thing, versus the human brain, which can be very good at many, many things. We can switch gears very easily; models cannot.

Leo Laporte [01:16:53]:
And I'm, by the way, not against the idea of having physics models, and as many models as you can. I'm just quibbling with the sole argument: oh, we've tapped out LLMs. I don't think that's true. I think we're just getting started with them.

Jeff Jarvis [01:17:06]:
Well, but the other thing— that paper that I sent you on language models, LeCun was a co-author of. His argument was against this notion of general intelligence, right? Saying that every human being is good at some stuff and crappy at other things. There's no such thing. And same as machines. And it has interesting outcroppings as well, because what LeCun argues is that if you think about specialized models, you can also limit the model to what it does, right? Makes it safer.

Leo Laporte [01:17:30]:
Makes it safer. No, and that's what I was just saying, which is we've got these general-purpose LLMs, but the future lies with special-purpose, smaller language models, you know, specially trained models. I mean, absolutely. We're not going to throw out the LLMs. We're going to still use those as the base. But I really absolutely think that we are going to specialize. I was watching a video this morning from an Australian fellow, a video about small language models, because I'm very interested in this notion. He said one of the problems we have in Australia is a lot of sun and a lot of skin cancer, but we don't have a national skin cancer screening program.

Leo Laporte [01:18:13]:
So he created an iOS app, for a Kaggle competition, an iOS app that— it is really interesting. It doesn't tell— it's not diagnostic. You take a picture of something, a mole or whatever. You take as many pictures as you want and it saves them. It does describe it. And then next year you take another picture. It remembers the things you took pictures of, and then you can look at the change from year to year. So it is like a self-exam that you can then send to your doctor.

Leo Laporte [01:18:45]:
And that's based on a very small language model that can run on an iPhone in just about 3 or 4 gigs of RAM. And it's just categorizing, not diagnosing. And I thought that was very interesting. That's a perfect example of a specialization that is safer and useful.

Jeff Jarvis [01:19:02]:
The point that they make in this paper is that if you have two models, one trained to just do protein folding, the other trained to fold proteins and your laundry, the first is obviously going to end up better and faster at protein folding, because it's not, in essence, distracted by other tasks. And I think that makes perfect sense. And it doesn't detract at all from the power of the model. In fact, it's a way to get more powerful models. Yeah.

Leo Laporte [01:19:29]:
And Pool says in our Discord, intelligence is what you are capable of. Inference is what you do with what you know. These models already know so much. That's why the focus is moving to inference. I would agree. I would agree.

Fr. Robert Ballecer [01:19:41]:
And by the way, these specialized models, that is the inference model. That's inference.

Leo Laporte [01:19:46]:
Yes, yes, exactly. Exactly. All right. Let's take one more. Not one more. No, no, no, no. Let's take another break. And so that, that's GTC.

Leo Laporte [01:19:54]:
I was— I enjoyed it. I'm really glad we covered it. Jensen Huang is an amazing fellow and Nvidia clearly is firing on all cylinders and they have many cylinders to fire. They are very, very hot right now. And you know what? I wouldn't put it past them to have $1 trillion in revenue. In the coming—

Jeff Jarvis [01:20:13]:
which he claimed he would have in a year, I think.

Leo Laporte [01:20:16]:
Yeah, he said 2027. Mind-blowing. Uh, it's nice to have Father Robert Ballecer. Paris will be back next week, but it's great to get you on. You've never been on this show. I think this is— I have it for you because you work—

Jeff Jarvis [01:20:30]:
finally, at last. Well, actually, weren't you on once when I wasn't here?

Fr. Robert Ballecer [01:20:33]:
No, I was going to take it once when Leo was going on vacation, and then he didn't go on vacation.

Jeff Jarvis [01:20:39]:
That's right. So we haven't been together on another show.

Leo Laporte [01:20:42]:
Well, I'm definitely going on vacation. You know what I got? Actually, I'm— I got— so one of the things I was very excited about when I first heard about Starlink way back in the day, before Elon— when Elon was still somewhat human— uh, was the notion that I would be finally able to travel and do the shows from anywhere. And I just I'm gonna order a, 'cause I'm going to Hawaii in May and I wanna do the shows from there. And so I ordered a Starlink Mini. The shirts, I can't wait for the shirts. A Starlink Mini. You could put it on your balcony. I should be able to do the shows anywhere I can get a clear view of the sky.

Leo Laporte [01:21:22]:
It has plenty of bandwidth to do the shows. In fact, we often fall back to Starlink in the studio when Comcast dies on us. So, uh, I'm setting up a portable studio. How much does it cost? It's not much at all if you do the consumer version. Right now we have a business account, which means I have to go to Costco or Best Buy and buy it as a— I have to wear a hat and a mustache and buy it, uh, and stick it under my raincoat. See, it's— yeah, I'm a consumer.

Fr. Robert Ballecer [01:21:49]:
Did you know they don't let you take those on cruise ships?

Leo Laporte [01:21:52]:
For good reason.

Fr. Robert Ballecer [01:21:53]:
Yeah, and actually they don't want you buying their Wi-Fi.

Leo Laporte [01:21:57]:
Yeah, you don't want to use it at sea either, right? It's not as fast at sea because there are fewer downlink stations if you're in the ocean. Oh yeah. And because cruise ships use Starlink. Exactly.

Jeff Jarvis [01:22:07]:
Yeah. They all want to know this. Are you taking Claude along on vacation? Claude's coming.

Leo Laporte [01:22:12]:
Or are you going to use— I've already set that all up. Well, I really do want an agent.

Jeff Jarvis [01:22:18]:
I really do want an agent. I can't wait till you start playing with that. Yeah, you're going to go crazy.

Leo Laporte [01:22:22]:
Well, so, my personal opinion on all this— I have tried all of these. Everybody— the latest is the president of Y Combinator, his name's Garry Tan, who's just put out his own. These are basically skills for Claude. Everybody's done it. There's GSD, Getting Stuff Done. There's Superpowers. There's something called PAI, Personal AI Assistant. OpenClaw's just a variant on all of these. The idea is you load it up with skills and API keys and loops so they can run continuously.

Leo Laporte [01:23:01]:
That's just really— it's just Claude. The whole thing— what people love about this is how good Claude is. And then they're just plastering layers on top of it. And sometimes I think it really is better just to use vanilla Claude. So I think what I need to do is kind of strip it all out, take all that crap out, all these skills and stuff. Darren, I'll keep your improved skill. That's a good one. Darren's skills are very good.

Leo Laporte [01:23:26]:
Uh, but strip out most of that stuff, maybe write a few of my own. Skills are a combination of prompts— and then you can put code in there. It's one of the reasons coders still have an advantage. You could put bash commands in, you could put code in. It's a combination of all of those. For instance, a good skill, a skill I want to write, is a Twit API skill, which would be everything that Claude would need to access our API. It would be a first step toward, I don't know, replacing all the humans. And then anyway— no kidding, I think I am.
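
As a rough sketch of the code half of such a skill, a skill could bundle a small script like this alongside its prompt instructions. The endpoint, header name, environment variable, and JSON fields below are hypothetical stand-ins, not TWiT's actual API:

```python
# Hypothetical helper a "twit-api" skill might bundle. Endpoint, header,
# env var, and response fields are illustrative guesses, not the real API.
import json
import os
import urllib.request

API_BASE = "https://twit.tv/api/v1.0"  # assumed base URL

def fetch_episodes(show: str) -> list:
    """Fetch recent episodes for a show and return simplified records."""
    req = urllib.request.Request(
        f"{API_BASE}/episodes?shows={show}",
        headers={"app-key": os.environ["TWIT_API_KEY"]},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [
        {"title": ep.get("label"), "aired": ep.get("airingDate")}
        for ep in data.get("episodes", [])
    ]

if __name__ == "__main__":
    print(fetch_episodes("intelligent-machines"))
```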

Leo Laporte [01:24:03]:
No, I don't want to replace the humans. I think the humans are the most important part of our whole workflow. What I want to do is replace the busy work.

Jeff Jarvis [01:24:10]:
Well, I had a meeting today with my colleagues at Montclair State and also the New Jersey Hills Media Group, which is a small newspaper company whose board I just joined. And the AI genius from there, who we ought to have on the show at some point, who watches the show— hi, Joe Amditis— was taking them through things they could do. And at some point— there are some writers who aren't good at copy editing, who always make the same mistakes, blah, blah, blah. And on the one hand, everybody could use Claude. On the other hand, you could just email the article to a project on Claude with your instructions already there, and it could do its magic and send it back. You know, these things can be that simple, so that you don't have to have everybody.

Leo Laporte [01:24:54]:
Oh, I could do that now. That's right. Yeah.

Jeff Jarvis [01:24:56]:
That's what I'm saying. The triviality is the power of it. Right.

Leo Laporte [01:25:01]:
So really, I think that's what a lot of this agentic stuff really is: just other ways to interface with the brain that is Claude. Somebody's saying Lisa's going to be jealous. That ship has sailed. Honey, I'll be back in a bit. I just want to go up into the attic and visit with Claude. My little friend. Does Lisa play with Claude? She does. I've been working on her bit by bit.

Leo Laporte [01:25:28]:
In fact, now she says, how, uh, how do— what's— how many subscriptions should we get for the team? Because she's blown away by the kinds of things she can do.

Jeff Jarvis [01:25:37]:
It's a ménage à Claude.

Leo Laporte [01:25:40]:
Show title. Yeah, I think so. Thank you. Thank you, Jeff. We'll have more in just a bit. Our show today brought to you by OutSystems. Oh, I love OutSystems for this. They're the number one AI development platform. OutSystems helps businesses bridge the enterprise gap to this agentic future we were talking about, where the constraints of the past give way to unlimited capacity and scale.

Leo Laporte [01:26:04]:
And the thing I love about OutSystems: they've been doing this for decades. They're not new to the game. OutSystems enables businesses to build AI agents that can actually do work— take actions, make decisions, integrate with data, much more than just answer questions. OutSystems provides the only AI development platform that is unified, agile, and enterprise-proven, because they have been doing this for a long time. They started with low-code, and now with the addition of AI, they have the most powerful tool I've ever seen. You can build, run, and govern apps and agents on a single unified platform. It's agile. You can innovate at the speed of AI without— and this is important— compromising quality or control.

Leo Laporte [01:26:49]:
It's really important in enterprise, you know, that your AI is doing the right thing, not the wrong thing. And this is enterprise-proven. OutSystems is trusted by enterprises for mission-critical AI applications and durable innovation. OutSystems is the secret weapon behind the world's most successful companies. And by the way, not just for, you know, small one-off apps. OutSystems works with the massive, complex systems that today, right now, are running banks, insurance companies, and government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare. OutSystems provides the safest, fastest way for an enterprise to go from "Yikes, we need an AI strategy" to "Yeah, we have a functioning AI application"— and it does it safely.

Leo Laporte [01:27:44]:
Stop wondering how AI will change your business and start building the agents that will lead it. Visit outsystems.com/twit to see how the world's most innovative enterprises use OutSystems to build, deploy, and manage AI apps and agents quickly and cost-effectively without compromising reliability and security. That's O-U-T-S-Y-S-T-E-M-S. Outsystems.com/twit to book a demo. You will be impressed. outsystems.com/twit. We thank them so much for their support of This Week in Intelligent Machines. Let's see.

Leo Laporte [01:28:24]:
So much news. So much news. I'm going to skip through the Google. Oh, this is interesting. Meta taking a little left turn. It's kind of a, uh, maybe more of a U-turn. It's a bit of a drunkard's walk, shall we say. Yeah.

Leo Laporte [01:28:37]:
Uh, remember when they spent billions of dollars to acquire, um— Manus and— I'm sorry, Scale AI. Sorry about that. Scale AI. Um, they're doing a reorg, according to the Times of India. Maybe this is suspect, I don't know. They are reorganizing. The guy they got from Scale AI, Meta's Chief AI Officer Alexandr Wang, is still there, but he announced the company is going to cut 600 people from the Superintelligence Labs division. Wang wrote, by reducing the size of teams, fewer conversations are needed to make decisions.

Leo Laporte [01:29:23]:
I think AI wrote that. "And everyone will carry greater responsibility with broader scope and impact." And, we'll save a lot of money. The teams include Wang's research lab. The applied AI engineering organization will also receive big cuts. This is Amar Saba's team. Another acquisition, or acquihire. And so it's a complete reorganization. Only 2 people from Wang's team left when their equity vested in November. So that's good.

Leo Laporte [01:30:07]:
But maybe we're just going to move some people around. It seems like Meta— remember their Avocado model, which was going to be their new big replacement for Llama— was pulled back. It's not good enough. It's not good enough.

Jeff Jarvis [01:30:24]:
Is Meta the new AltaVista? Yeah, they're struggling.

Leo Laporte [01:30:28]:
But you know what, it's interesting to watch all the— all these companies except Anthropic and OpenAI, and I guess Google.

Jeff Jarvis [01:30:35]:
Google, yeah, kind of. The Journal today had a story, or maybe the Times. Google's in the catbird seat.

Leo Laporte [01:30:43]:
I don't know if that's true.

Jeff Jarvis [01:30:44]:
I don't know if it is either, but, uh, I don't know.

Leo Laporte [01:30:46]:
I don't know who's in the catbird seat right now. No, Google's really doing what, uh, our guest was talking about, where they're looking more at applications than they are at big models. They did release Gemini 3, uh, DeepThink, right? But they're also like— in fact, I skipped through the Google section, but they're adding Maps stuff. They did scrap the health tips because they were getting those from Reddit. Turned out not a great source for health information. No, no.

Jeff Jarvis [01:31:17]:
Well, it might be better than RFK Jr., but not much.

Leo Laporte [01:31:20]:
Uh, they are going to do an agent builder for the Pentagon, but it's only unclassified work. Non-classified, I thought. Yeah, that's what I said. Unclassified. Yeah, same thing. So in other words, not classified. Although— so you saw that now OpenAI is kind of jumping in the fray. They have not up to now been approved for classified work, but the Pentagon says, okay, we don't like these Anthropic guys, so maybe we'll let OpenAI in behind the Iron Curtain.

Fr. Robert Ballecer [01:31:48]:
Uh, man, the OG tech companies have more running room here. I mean, you've got Google burning billions of dollars for AI, right?

Leo Laporte [01:31:55]:
But they have income, and they can do that with that income, right?

Fr. Robert Ballecer [01:31:58]:
OpenAI, no. If the AI deal doesn't go through, they die. And actually, you could even extend that to Oracle.

Leo Laporte [01:32:05]:
Oracle has bet so much on AI. They're heavily leveraged.

Fr. Robert Ballecer [01:32:09]:
If it fails, they lose not just Oracle, but the Ellison media empire crumbles.

Leo Laporte [01:32:16]:
I do. I mean, I agree with you that these companies— and you can throw Apple in there too— Apple, Google, and Meta have other revenue streams, so they don't have to make money on AI right away. But we're not seeing the results. Now meanwhile, Anthropic and OpenAI, who are running on a razor's edge, are big leaders right now. Maybe that won't be sustainable, you know— that's probably what these companies think: well, we can sit back. Certainly Apple's thinking that. We can—

Fr. Robert Ballecer [01:32:42]:
I mean, they're just leaders because they're investing in each other. But like Meta threw how many billions away on the metaverse, right? And it— I mean, yeah, it's embarrassing, but it didn't kill them.

Leo Laporte [01:32:50]:
They're killing— by the way, they're killing Meta's Horizon Worlds. It's going away. It's over.

Jeff Jarvis [01:32:56]:
Wow. Well, similarly, OpenAI, Meta-like, is saying, okay, we're gonna concentrate now. We're gonna concentrate on B2B. Hello, Anthropic.

Leo Laporte [01:33:06]:
Well, they saw how well Anthropic's doing in enterprise. They said, yeah, maybe all this diversification, the chat and all that stuff— maybe we should do the same thing. So maybe this device thing— how much did we pay to get Jony Ive here? Okay, here's my thought on this. If agentic is the thing, and I think it certainly looks like it may be a thing, you need an interface. And what OpenClaw and a lot of others do now is, you know, you use Telegram or Discord or Apple Messages or something to talk with it. But what you really want is a much more convenient way of talking to it. I was thinking I really would like to write some sort of tool that I can use with one of my pins, or maybe my Apple Watch, where I could just say, hey Claude, I got an idea, or remind me later to do this. That's the way it should be.

Leo Laporte [01:34:00]:
And I think that's what they're going to end up doing. It's part of the agentic. It's the interface to agentic.

Jeff Jarvis [01:34:07]:
But do you need a device to do that? Do you need a unique device to do that, or—

Leo Laporte [01:34:10]:
yeah, I think— I don't know, I don't know. I don't think you want to take your phone out of your pocket. I think you want something ambient, whether it's glasses, earbuds, watch, ring, pendant. You want ambient intelligence. It's the same, same way you really would— what I would really like to do is just shout into the void.

Fr. Robert Ballecer [01:34:30]:
Well, the ambient intelligence belongs to Amazon, and their deployed base of ambient devices is second to none.

Leo Laporte [01:34:37]:
Here's another example of a company that has great revenue streams and cannot seem to make a decent AI. Alexa Plus is horrible. Oh yeah, it is.

Jeff Jarvis [01:34:48]:
Even, even the people inside the company don't want to use it.

Leo Laporte [01:34:52]:
So, maybe the urgency of "we are going to run out of money any minute now" is pushing Anthropic and OpenAI faster, and they're doing better because of it. Yeah, they have to sprint off the line.

Benito Gonzalez [01:35:05]:
You know, the other companies, they don't have to.

Leo Laporte [01:35:06]:
They don't have to sprint. No, no. But who won that race, the tortoise or the hare? Oh, the tortoise did. Okay, never mind. Meta didn't buy Moltbook for bots, says TechCrunch. It bought into the agentic web.

Jeff Jarvis [01:35:22]:
Again, they bought Moltbook for the hype.

Leo Laporte [01:35:26]:
That's what they bought. Well, Moltbook is a social network for AI, and Meta's social, right? I think you're right. They bought the hype. I'm trying not to go too far on it, but they're in a panic there.

Benito Gonzalez [01:35:38]:
Meta's the one who knows how to mine that data.

Leo Laporte [01:35:42]:
It's all about the data. There's no data. That's why I was sad when Meta bought the Limitless Pin. Well, that's what it is. So— remember I bought the Bee Computer and then Amazon bought them? Then I bought the Limitless Pin and then Meta bought them? You know, if Apple does an ambient— I think ambient intelligence, that's the phrase I'm thinking of.

Jeff Jarvis [01:36:06]:
Well, it's like Leo is just walking down the street screaming, yeah, I want a milkshake!

Leo Laporte [01:36:11]:
Exactly. I drink your milkshake. I mean, the problem with ambient—

Fr. Robert Ballecer [01:36:16]:
there's so many of these purchases that feel like panic purchases. Yes, exactly. Going back to the early dot-com days, where you had to do something with your money.

Leo Laporte [01:36:26]:
Yeah, especially with Meta, right? Meta is the king of— I don't know what we're doing, but write a check.

Jeff Jarvis [01:36:32]:
Well, I imagine somebody's running to Mark's office saying, oh, okay boss, we can buy this one. And if somebody doesn't come to him before that, he's gonna get mad.

Leo Laporte [01:36:42]:
I know the feeling where you feel like, I got a lot of money, I'm gonna buy that stupid computer.

Benito Gonzalez [01:36:45]:
There's also the privacy issue when it comes to ambient— that ambient stuff is that it's listening all the time. So there's always a privacy concern there, right?

Leo Laporte [01:36:55]:
Maybe you— I don't mind.

Benito Gonzalez [01:36:58]:
Maybe most other people.

Fr. Robert Ballecer [01:37:01]:
Maybe we don't do ambient computing here in my place, right?

Leo Laporte [01:37:05]:
It's— no, but that's a disadvantage. You want to be able— yeah, I mean, that's one— I mean, honestly, the way you do it is you have it, you tap something, or you—

Benito Gonzalez [01:37:13]:
well, you need to trust the third party.

Leo Laporte [01:37:16]:
Prayer is ambient. I didn't even think of that. You're asking the ultimate intelligence for help. Exactly. Somebody once told me there are only two prayers in the world: thank you, thank you, thank you, and help me, help me, help me. Is that fair, Robert?

Fr. Robert Ballecer [01:37:33]:
I would add one: oh my God. And that can be taken so many different ways.

Leo Laporte [01:37:41]:
That could be help me, help me, help me. Yeah, and there is one more, which is help them, help her, help him. But you're asking for help or you're giving thanks. Manus, the AI agent startup that Meta acquired— the Chinese company that Meta acquired late last year— has, as of the 16th, launched a new desktop application called My Computer. It's all right. That's in my head now, Leo.

Fr. Robert Ballecer [01:38:11]:
I appreciate that.

Leo Laporte [01:38:13]:
Oh, you are a lucky one. Bringing Manus's agent directly into your personal device: through My Computer, the agent can read, analyze, and edit local files, launch and control applications, and execute multi-step tasks, including coding tasks, without the user having to upload anything to a server. It's local. It's going to compete with Perplexity's Computer. Branding is not what these guys do well. Not great. And the Chinese government is actually a little concerned. Manus is a Singapore company, but it runs out of mainland China. And the Chinese government says, hey, can we come on in and just have a word? Yeah.

Leo Laporte [01:38:51]:
Like a word with you.

Fr. Robert Ballecer [01:38:52]:
So wait, how are they doing that? Is it gonna act like a virtual machine or a container on my local machine?

Leo Laporte [01:38:59]:
This is from The Next Web. The key architectural difference between Manus and OpenClaw is the model layer beneath the agent. OpenClaw, open source, can be run with any model, right? Its quality depends on which model you choose. Manus runs on Meta's own proprietary model stack, which the company says is more consistent and capable, at the cost of a subscription fee. But is it local? I don't see how that could be local. It has to call out to the server.

Fr. Robert Ballecer [01:39:25]:
Yeah, I mean, analyzing what's on your desktop— it's sending it somewhere. Your computer doesn't have the power for that. Sending it to Meta.

Leo Laporte [01:39:35]:
Exactly. And Anthropic has this, of course: Claude Cowork. OpenAI created their version of that as well. Everybody's trying to do that. Basically taking the coding platforms, Claude Code and Codex, and making it so that non-coders can use it. But I don't know.

Fr. Robert Ballecer [01:39:59]:
I still need to know how much of it goes out. It sounds like they're making a good-faith effort to keep everything local, right? But local-only intelligence doesn't work like that, right?

Leo Laporte [01:40:09]:
Local when it can. OpenAI released two new models today, GPT-5.4 Mini and Nano. Oh, you complete me. Mini and Nano. So how big is Mini?

Fr. Robert Ballecer [01:40:23]:
Then how big is Nano?

Leo Laporte [01:40:24]:
Uh, let's see. Nano is the smallest, cheapest version of 5.4, for tasks where speed and cost matter most. It's a significant upgrade over 5 Nano. There's a new Buick. Yeah, there's the benchmarks, which I don't pay too much attention to. Let's see the numbers. Show us the numbers. Yeah, come on.

Fr. Robert Ballecer [01:40:49]:
Size should be relative, right?

Leo Laporte [01:40:50]:
The first thing, right? All these benchmarks. In the API, GPT-5.4 Mini supports text and image inputs, tool use, function calling, web search, file search, peer-to-peer. 400K context window. That's good. That's bigger— twice as big as Claude Code's context window until recently. 75 cents per million input tokens, $4.50 per million output. Uh, Mini uses only 30% of the GPT-5.4 quota, letting developers quickly handle simpler coding tasks in Codex for a third the cost. And you can use many sub-agents, which I do with Claude.

Leo Laporte [01:41:30]:
I use Haiku and Sonnet for sub-agent work that isn't too demanding. Uh, Nano, let's see— Mini is available to free and Go users via the thinking feature in the plus menu. For other users, Mini is available as a rate-limit fallback for GPT-5.4 thinking. Nano is only available in the API, and Nano is 20 cents per million input, significantly cheaper, and $1.25 per million output.
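
Back-of-the-envelope math on those per-million-token prices; the 10,000-tokens-in, 1,500-tokens-out request is just an assumed workload:

```python
# Cost per request at the quoted per-million-token prices.
PRICES = {  # dollars per million tokens
    "mini": {"input": 0.75, "output": 4.50},
    "nano": {"input": 0.20, "output": 1.25},
}

def request_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    p = PRICES[model]
    return (tokens_in * p["input"] + tokens_out * p["output"]) / 1_000_000

for model in PRICES:
    print(model, f"${request_cost(model, 10_000, 1_500):.5f}")
# mini $0.01425, nano $0.00388 per request: Nano is roughly 3.7x cheaper.
```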

Fr. Robert Ballecer [01:41:58]:
That might actually be a good foundational model for, uh, like an inference build. Yeah, because you're already limiting the scope.

Leo Laporte [01:42:05]:
Yeah, yeah. Uh, so, um, yeah, that's all the information I have. This is from OpenAI. OpenAI has signed a deal with AWS to sell its AI services to government agencies for classified as well as unclassified work. This is their opportunity to get in door, Microsoft is now threatening to sue them. I'm saying no, you're ours, OpenAI. You can't do a deal with AWS. You're ours.

Leo Laporte [01:42:38]:
Traditionally, Anthropic has owned AWS, right? And that's— that was a big advantage for Anthropic. But, but OpenAI has really jumped in the breach. But, uh, speaking of breach, Microsoft says that's a breach of our contract. And they are threatening to sue. So trouble— that's a trouble in paradise thing. They were friends.

Fr. Robert Ballecer [01:43:01]:
Well, I mean, come on, that's been going on for more than a decade now, that back and forth between Microsoft and AWS.

Leo Laporte [01:43:08]:
Oh yeah, but I was talking about OpenAI and Microsoft, right? Microsoft gave them $10 billion.

Fr. Robert Ballecer [01:43:15]:
Uh, Microsoft also gave Apple the money that saved them, $150 million. Yeah, they're good.

Leo Laporte [01:43:20]:
They're very good at that. Yeah. Yeah. Maxwell Zeff writing in Wired, Inside OpenAI's Race to Catch Up to Claude Code. This is what you were talking about, Jeff. Kind of a repositioning. Do they still want to do the, uh, adult chat? There's now controversy within the company.

Jeff Jarvis [01:43:40]:
The safety people there are saying, uh, this is really a bad idea. It is a bad idea. And they haven't repudiated it yet. They have to. They have to repudiate it. And I'm no prude, I'm no Puritan, but from a business perspective, it just doesn't make sense to advertise it.

Leo Laporte [01:43:59]:
Claude Code accounts for a fifth of Anthropic's business, more than $2.5 billion in annualized revenue. Codex, less than half that. So OpenAI says, wait a minute. We need to get in on that. That's where the money is: enterprise computing and inference.

Jeff Jarvis [01:44:19]:
It's the age of inference. It'll last at least a month.

Leo Laporte [01:44:24]:
I still think it's more the age of agentic, but that is an inference. That's one kind of inference, I guess. The Information also had this story: OpenAI, Musk, and Focus. One of these things is not like the other.

Jeff Jarvis [01:44:40]:
Uh, Fiji is very good. Fiji Simo, who's the CEO of Applications, is a very strong manager. Ah, and I think that she'll bring sense to this. She was at Meta, and then she was the CEO of Instacart for quite some time.

Leo Laporte [01:44:56]:
She's really smart. She's the one who told the all-hands meeting last week at OpenAI that the company needs to, quote, refocus on business customers and cut down on side quests that are becoming a distraction. But what we don't know is what those side quests are— Jony Ive? Has everybody seen Jony Ive? Is it Jony Ive? Is it shopping? Remember, they were going to do ads, they were going to do shopping, they were going to do sex chat, sexy chat.

Jeff Jarvis [01:45:18]:
Yeah, he was announcing something every day. He was, he was kind of chasing perplexity, which was going for the press release. Yeah, but press releases cost money if you actually do what they say.

Leo Laporte [01:45:29]:
Meanwhile, in the same story, they talk about Elon Musk— another example of a company that's throwing out its models. XAI, God knows what he's doing. He's publicly trashed the state of play at XAI, tweeting, XAI was not built right the first time around, so we're rebuilding it from the foundations up. That followed the departures we reported last week of most of XAI's co-founders.

Fr. Robert Ballecer [01:45:55]:
I mean, it probably had something to do with the fact that he kept wanting to put his thumb on the scale every time. Yeah, I mean, that's a really good way to bust your training model.

Jeff Jarvis [01:46:05]:
Yeah. And I still don't believe that— I mean, everybody else is making public hires and all this kind of stuff. I've got to believe that Musk cheated, oh yeah, in some form, to make what's there.

Leo Laporte [01:46:18]:
He would if he could.

Jeff Jarvis [01:46:19]:
Let's put it this way: even more than DeepSeek supposedly did.

Leo Laporte [01:46:23]:
Here is Sam Altman talking at a conference. Fundamentally, our business, and I think the business of every other model provider, is going to look like selling tokens. You know, they may come from bigger or smaller models, which makes them more or less expensive. They may use more or less reasoning, which also makes them more or less expensive. They may be running all the time in the background trying to help you out. They may run only when you need them if you only want to pay less. They may work super hard, you know, spend tens of millions, hundreds of millions, someday billions of dollars on a single problem. Right.

Leo Laporte [01:46:57]:
That's really valuable. But we see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter. On a meter. Metered intelligence.
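To make the metered-intelligence idea concrete: providers typically bill per million tokens, with separate rates for input and output. A minimal sketch, with made-up prices rather than any vendor's actual rates:

```python
# Hypothetical metered token billing. The per-million-token prices here
# are invented for illustration, not any provider's actual rates.

PRICE_IN_PER_MILLION = 3.00    # dollars per million input (prompt) tokens
PRICE_OUT_PER_MILLION = 15.00  # dollars per million output (completion) tokens

def monthly_bill(tokens_in: int, tokens_out: int) -> float:
    """Meter usage the way a utility would: consumption times rate."""
    return (tokens_in / 1_000_000 * PRICE_IN_PER_MILLION
            + tokens_out / 1_000_000 * PRICE_OUT_PER_MILLION)

# A month of heavy chatbot use: 2M tokens in, 500K tokens out.
print(f"${monthly_bill(2_000_000, 500_000):.2f}")  # $13.50
```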

Jeff Jarvis [01:47:14]:
I wish we'd played this for Rumman. It's commodifying the Enlightenment, right? It's commodifying all education, all thought, everything else into some commodity that he's going to own and sell on a meter. It's just offensive. And this is why they're behind.

Fr. Robert Ballecer [01:47:30]:
Yes, because Anthropic doesn't sell tokens. They sell services. They sell things that you want. OpenAI is still caught up on this idea that they're going to be the power behind everything, and everyone buys their tokens and then turns them into services. Well, one of those has a future in the enterprise, one of them doesn't.

Jeff Jarvis [01:47:46]:
Yes. What did you both think of, uh, Jensen Huang's hint that he's going to compensate employees with, uh, tokens? I don't think it's just him.

Leo Laporte [01:47:56]:
I think this is all the rage in Silicon Valley now, is you get your pay package and in there is, and we will give you, uh, you know, 20,000 tokens a week. Tokens, for people who are saying, what are they talking about? Tokens, tokens, we keep saying that word. It's the information going in and out of the AI, right? Everything the AI sees is tokens. So if it ingests the works of Shakespeare, the process of the transformer, the process of the neural network, is to take those words, those chunks of phrases, because a token is often not a whole word, those little chunks, and turn them into tokens. Yeah, and the relationships. And so the tokens are the fundamental— they're the bits of intelligence, you know, in the sense of bits and bytes. The smallest unit of intelligence in an AI is a token. And when you're using AI, you are putting tokens in with your prompts and the information it gathers from the web and stuff, and then you're getting the results back as tokens.
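For anyone who wants to see tokens firsthand, here's a minimal sketch using OpenAI's open-source tiktoken tokenizer (an assumption that it's installed; any BPE tokenizer would illustrate the same point, just with different splits):

```python
# Minimal tokenization demo using OpenAI's open-source tiktoken library
# (pip install tiktoken). Text becomes integer IDs for sub-word chunks.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "To be, or not to be, that is the question."
ids = enc.encode(text)

print(ids)                             # integer token IDs
print(len(ids), "tokens")              # the count that metered billing is based on
print([enc.decode([i]) for i in ids])  # the chunks: often sub-word pieces
```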

Jeff Jarvis [01:49:02]:
And they charge you on both sides of that.

Leo Laporte [01:49:04]:
That's right. That's what we were talking about.

Rumman Chowdhury [01:49:06]:
So this is just the return of company scrip then, right?

Leo Laporte [01:49:10]:
It's really— I think what he's really trying to say is—

Jeff Jarvis [01:49:12]:
What do you get?

Leo Laporte [01:49:13]:
It's another day older and token-free. No, I don't think he's saying that. I think he's saying it's a utility. It's going to be— that's how we pay for the internet. It's the way we pay for water, how we pay for electricity.

Rumman Chowdhury [01:49:23]:
Yes, but if he's paying his employees in tokens, I mean, and they can only spend those tokens on OpenAI's products.

Leo Laporte [01:49:28]:
Oh, I see what you're saying. Right. Well, no, that's not necessarily how it's going to be. First of all, you'd be foolish because you can't pay the rent in tokens yet. Maybe you will. Oh, just wait.

Jeff Jarvis [01:49:38]:
Just wait. You know what? You'll see the monetization of tokens.

Leo Laporte [01:49:40]:
Robert, what do you think? You know about cryptocurrencies. NFTs. Do you think tokens could become the new dollar?

Fr. Robert Ballecer [01:49:50]:
Yeah, this is just another privatization-of-a-financial-utility scheme. It's a currency. It's currency. Now, any currency has the ability to be translated, converted into other currencies. So what he's saying is, look, I want to reward my employees, I want to pay my employees in a currency that can increase in value if they put more work into the company.

Leo Laporte [01:50:12]:
I don't— I honestly think the demand comes from the employees as much as it comes from the companies. In other words, if I'm going to go to work for one of these companies, I want to know: how much intelligence am I going to get? How much use of your product am I going to get?

Jeff Jarvis [01:50:24]:
They get it for what? They get it for building their own companies outside the company? No. No.

Leo Laporte [01:50:27]:
Well, that would be part of the negotiation. We don't know.

Jeff Jarvis [01:50:31]:
Do they get it for their 20% time?

Leo Laporte [01:50:32]:
You could be rapacious and say everything that you do with your tokens we own. But remember, it's competitive. The job market is extremely competitive for these engineers. So the engineer could make a deal and say, look, I want to be able to use— well, actually, what I would ask for is unlimited use.

Jeff Jarvis [01:50:50]:
I don't know. Yeah. If you're an employee making—

Leo Laporte [01:50:52]:
Why should I have any limit on my use?

Jeff Jarvis [01:50:55]:
Yeah. That makes no sense. Yeah.

Rumman Chowdhury [01:50:56]:
And if you're saying it's part of the compensation package, right? Then that means you're getting less money also, right? You're getting less money because you're getting the tokens.

Leo Laporte [01:51:02]:
Not necessarily.

Jeff Jarvis [01:51:03]:
It really could be a way to pay without cash and taxes.

Leo Laporte [01:51:06]:
If I'm negotiating a deal with Mark: Mark, you're going to pay me a million dollars a year to come to work for your company, and by the way, I want unlimited AI. I don't know why they don't just give them unlimited. I mean, doesn't— right? Maybe—

Jeff Jarvis [01:51:21]:
If you're going to do work for the company, then they should give you whatever resources you need to do that well. That's why I think this is for personal use.

Leo Laporte [01:51:28]:
Maybe it's for personal use. That makes sense.

Jeff Jarvis [01:51:30]:
Then it's an asset that I can use on my own or give to my kids or whatever.

Leo Laporte [01:51:34]:
Yeah, you can go home and build your startup. There is right now a mystery model on OpenRouter. It appeared about a week ago. It's called Hunter Alpha. Everybody's talking about it. People think this is the next DeepSeek version. DeepSeek has really been a disruptor in the AI world. They came along— it was, you know, it's funny, it was January of last year.

Leo Laporte [01:52:05]:
It's only been a year and some months, but they changed everything. They showed how reinforcement learning could make an AI much, much better. During tests conducted by Reuters, the Hunter Alpha chatbot described itself as a Chinese AI model primarily trained in Chinese. It said its training data extended to May 2025, which is the same knowledge cutoff reported by DeepSeek, but the system would not identify the developer. I only know my name, my parameter scale, and my context window length. My name and serial number. Neither DeepSeek nor OpenRouter has identified it. Yeah, a trillion.

Leo Laporte [01:52:50]:
It's a trillion parameter model. That's a lot, isn't it, Robert? That's a—

Fr. Robert Ballecer [01:52:54]:
yeah, that's, that's a bit more than what I'm running locally.

Leo Laporte [01:52:58]:
So the local model, the biggest local model I've seen, is 120 billion parameters. 120B, that's GPT-OSS, 120B. Do we know how many parameters, uh, Claude has, or ChatGPT? Do they ever reveal that? Those are a lot.

Fr. Robert Ballecer [01:53:15]:
I don't know. I don't know the numbers for Claude.

Leo Laporte [01:53:17]:
We throw these terms around and I'm kind of assuming people know what we're talking about. So you train, you put in a bunch of text. You get some tokens that are the internal representation of that text, but by themselves you don't know which tokens are more important or less. That's done with parameters, which also come out of the training, and the parameters change as you do the training, and they also change when you do the reinforcement learning and other post-training to make the model smarter. I don't know if this is a good analogy or not. I will use this analogy and you can correct me if I'm wrong, Robert. I often think of sampling music. So, there's two numbers that matter when you sample music, when you take analog music and turn it into digital: how many slices of the wave you take and how much information each slice has. So, for instance, you could sample something at 44,100 samples per second, and then each sample is a 16-bit sample.

Leo Laporte [01:54:17]:
That is CD quality. And I think of parameters as the samples: how many parameters, like how many samples you take, and then how much information a single parameter stores. I can see that analogy. And then the number of bits per parameter.
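Carrying the analogy through to numbers: a model's memory footprint is roughly parameter count times bits per parameter, which is why quantizing weights from 16-bit down to 4-bit shrinks a model about fourfold. A back-of-the-envelope sketch (approximations only, ignoring runtime overhead):

```python
# Back-of-the-envelope model sizing: bytes ≈ parameters × bits / 8.
# Approximations only -- real deployments also need activation and cache memory.

def size_gb(params: float, bits_per_param: int) -> float:
    return params * bits_per_param / 8 / 1e9

print(f"{size_gb(120e9, 16):,.0f} GB")  # 120B params at 16-bit: ~240 GB
print(f"{size_gb(120e9, 4):,.0f} GB")   # same model quantized to 4-bit: ~60 GB
print(f"{size_gb(1e12, 16):,.0f} GB")   # a trillion parameters at 16-bit: ~2,000 GB
```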

Fr. Robert Ballecer [01:54:40]:
The one that I like to use whenever I'm doing a presentation is this: Let's say you're trying to train a model and you ask the model, what color is the ocean? Well, okay, so it's looking through its current stack of parameters and it sees that ocean is most associated with fish. So it responds, the color of the ocean is fish. Well, that's wrong, so you correct it. You say, no, no, no, the answer is blue. It's now creating a new parameter so that it biases itself, so that when it sees the tokenization of ocean and color, it leans towards the answer blue. So every time you do that, you're creating a new parameter, and that parameter forms the bias of how the model both understands and replies. But yeah, I see that sampling idea. I like that.

Fr. Robert Ballecer [01:55:30]:
I'm going to work that into my next presentation.
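Here's a toy version of that correction loop in plain Python. To be clear, this is a cartoon: a real model nudges millions of existing weights by gradient descent rather than minting one parameter per correction, but the direction of the nudge is the same idea.

```python
# Cartoon of training-as-correction: scores over candidate answers get
# nudged toward the labeled answer. Real LLMs do this with gradients
# across millions of weights, not a handful of named scores.

weights = {("ocean", "color"): {"fish": 2.0, "blue": 0.5}}  # starts out wrong

def predict(context):
    scores = weights[context]
    return max(scores, key=scores.get)

def train_step(context, correct_answer, lr=1.0):
    guess = predict(context)
    if guess != correct_answer:
        weights[context][correct_answer] += lr  # reinforce the right answer
        weights[context][guess] -= lr           # suppress the wrong one

print(predict(("ocean", "color")))   # "fish" -- the bad association
train_step(("ocean", "color"), "blue")
print(predict(("ocean", "color")))   # "blue" after one correction
```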

Leo Laporte [01:55:32]:
It's not perfect, but it's something. It's hard to understand this stuff. Uh, anyway, it's unknown whether the mystery model Hunter Alpha is actually DeepSeek. Well, I guess we'll find out at some point. It might be another—

Fr. Robert Ballecer [01:55:45]:
I think Claude is 161.5 million. So yeah, this, this— that all—

Leo Laporte [01:55:52]:
yeah, trillion's a lot. Trillion's a lot. That— well, there is something called—

Jeff Jarvis [01:55:57]:
oh, Karpathy did, right? It's a tiny— look what Andrej Karpathy did delivering his little tiny model. And we can build up from there, rather than this macho hubris of saying, I've got the bigger thing, the bigger thing, which I think is overkill.

Fr. Robert Ballecer [01:56:10]:
Well, we talked about this. I mean, if you really hone in on your training on just the data that you really want it to be able to process, you can make an exceptionally intelligent model for that specific purpose with a much smaller base.

Leo Laporte [01:56:25]:
There's a really interesting branch of this research where, let's say, you wanted to teach an AI how to add numbers. Initially, when you train it, you would give it a bunch of sums: 1 + 1 = 2, 1 + 2 = 3, 1 + 3 = 4. If you have so many parameters that the AI is capable of storing all of the data. Yeah.

Leo Laporte [01:56:53]:
So many tokens. Tokens, maybe it's tokens, so much data that you could store all the data, then what you will get is a lookup table.

Jeff Jarvis [01:56:58]:
But then we're going back to memorization, which will break, as you're memorizing rather than thinking.

Leo Laporte [01:57:03]:
Yeah, exactly— you'll break as soon as— it's brittle, because as soon as you get outside of the training data, it doesn't know, because it's just doing a lookup table. What they've found, interestingly, training these models, is that by reducing the number of parameters, you can induce the model to think, to solve it not by a lookup table, but actually to come up— and we don't know what algorithm it's coming up with— but to come up in some way with an algorithm that produces the right result. And you do that by reducing the number of parameters. So more training parameters isn't necessarily better.
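A hedged sketch of that memorization-versus-generalization point (a cartoon of the research, not the actual experiments, which train small transformers): a lookup table nails the training sums and fails on anything new, while a deliberately tiny two-parameter model fit to the same data recovers the rule and generalizes.

```python
# Cartoon of lookup-table memorization vs. learning the underlying rule.
# Illustrative only -- the research Leo describes trains small transformers.

train = [(1, 1, 2), (1, 2, 3), (1, 3, 4), (2, 2, 4), (3, 4, 7)]

# Over-parameterized extreme: enough capacity to memorize every example.
table = {(a, b): s for a, b, s in train}
print(table.get((1, 2)))   # 3    -- perfect on the training data
print(table.get((5, 9)))   # None -- brittle the moment the input is unseen

# Under-parameterized model: just two weights, fit by gradient descent.
w1, w2 = 0.0, 0.0
for _ in range(2000):
    for a, b, s in train:
        err = (w1 * a + w2 * b) - s
        w1 -= 0.01 * err * a
        w2 -= 0.01 * err * b

print(round(w1, 2), round(w2, 2))   # ~1.0 and ~1.0: it "discovered" addition
print(round(5 * w1 + 9 * w2, 2))    # ~14.0 -- generalizes past the training set
```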

Fr. Robert Ballecer [01:57:38]:
This is a metric that I think is going to become popular at some point in the future as they go into the inference models of LLMs, and that is that it's not just about your parameter count. Yes, it's important to have enough parameters to be able to do the work you want it to do, but the quality of the parameters is something that we don't yet measure, and we need to figure out how to do it. Because you can have a 1 trillion parameter model that is absolute trash, and you can have a 100 million parameter model that works beautifully. And it's all about how those parameters have interacted with one another.

Jeff Jarvis [01:58:14]:
And back to the notion of specialized machines, is the training data focused on something like health versus anything that teaches it how to speak?

Rumman Chowdhury [01:58:24]:
I also wonder if there's a qualitative difference between a large language model trained on Chinese and one trained on English.

Leo Laporte [01:58:30]:
That's a very good question.

Fr. Robert Ballecer [01:58:33]:
Chinese is a much more complicated language than English.

Rumman Chowdhury [01:58:36]:
It's a very different language. I mean, Chinese people think differently because their language is different. Like, I wonder how much of a difference—

Leo Laporte [01:58:42]:
there's more bits per character in Chinese. Yeah, uh, you know, um, and also what the general—

Rumman Chowdhury [01:58:49]:
also what the general public feeling, the sentiment about AI, is in China. I'm also curious about that.

Leo Laporte [01:58:55]:
Well, I could tell you they're going crazy over OpenClaw. Have you seen pictures of OpenClaw conferences in China where they're all wearing lobster hats? OpenClaw is the latest fad in China. OpenClaw has groupies. They have— in fact, if I could find one of those lobster hats, I'm getting one, um, because I am—

Jeff Jarvis [01:59:18]:
I'm a claw boy. I can find it for him.

Leo Laporte [01:59:21]:
Baidu has integrated OpenClaw into its, uh, Xiaodu services to work as voice-controlled remotes. Oh, that sounds like something I've been talking about earlier. Here's a picture of an OpenClaw conference, or actually this is Baidu's headquarters with a giant lobster out front. The OpenClaw lobster out front.

Fr. Robert Ballecer [01:59:43]:
Already?

Leo Laporte [01:59:44]:
Yeah. They have OpenClaw smart speakers that you can talk to, a voice-controlled remote for the AI agent. I had that idea. I should have patented it.

Jeff Jarvis [01:59:57]:
Is the hat, the lobster hat, the one you want?

Rumman Chowdhury [01:59:59]:
Yeah, because a Chinese company is gonna honor your patent, Leo.

Leo Laporte [02:00:03]:
Oh yeah, that's right. It doesn't really matter what I, what I patent. Yeah. Uh, yeah, that's the hat. That's the hat. Okay, well, it's one of them. Oh, they were all wearing them. I saw, I saw pictures of big conferences in China where people were wearing lobster hats.

Leo Laporte [02:00:17]:
That's a crab though. You think that's a crab? Yeah. Oh wow.

Fr. Robert Ballecer [02:00:21]:
Do you think I could get sponsorship from OpenClaw if I could get Pope Leo to wear that hat?

Leo Laporte [02:00:26]:
Yeah, it looks a bit like a skullcap. Yeah, why not? Here are attendees with their laptops, uh, at Baidu's OpenClaw lobster market event in Beijing yesterday. This is great. I'm so excited about this. I love it.

Fr. Robert Ballecer [02:00:47]:
We don't have that sort of stuff happening at our universities.

Leo Laporte [02:00:50]:
I mean, we know, but we do have it happening in San Francisco. There's all sorts of stuff. There are OpenClaw meetups. Are you kidding? Attendees play games at Baidu's OpenClaw Lobster Market. See, there are OpenClaw—

Fr. Robert Ballecer [02:01:02]:
there's a guy in San Francisco, Leo.

Leo Laporte [02:01:05]:
Oh yeah, you didn't know about that?

Fr. Robert Ballecer [02:01:08]:
Uh, I've been over here for a while. I know. Oh my God, yes.

Leo Laporte [02:01:12]:
Peter Steinberger is like, uh, AC/DC. He shows up and, oh, it's like rock and roll, man.

Fr. Robert Ballecer [02:01:22]:
Okay, well, I gotta go back to California now. OpenClaw is—

Leo Laporte [02:01:25]:
Very hot. Very, very hot.

Fr. Robert Ballecer [02:01:27]:
By the way, Darren, Darren in chat just gave us all lobster hats, just FYI.

Leo Laporte [02:01:31]:
Oh, also Darren's very quick on the draw.

Rumman Chowdhury [02:01:35]:
Also, Burke found your lobster hats on Amazon.

Leo Laporte [02:01:38]:
Yeah, that's pretty good. And this is a 2-pack, so I'll get one for you and one for me.

Fr. Robert Ballecer [02:01:43]:
One for thee. We're going down.

Leo Laporte [02:01:44]:
One for you and one for— oh yeah, my Claude should have its own hat. Yeah, absolutely. This is another kind of lobster hat. I like, uh, I like the one you're wearing, Jeff. It's got beady little eyes looking straight at me. Oh, very nice, Darren. Thank you. Uh, let's take a break.

Leo Laporte [02:02:07]:
We have so much more to do. Um, we did the boom, let's do the doom. And when we come back, the boom and the doom and the gloom. We're talking AI with Intelligent Machines. Father Robert Ballecer, the Digital Jesuit. Do you— I mean, are you the go-to guy at the Vatican for AI?

Fr. Robert Ballecer [02:02:30]:
Everything here is done with multiple teams and a very large committee of people who are very good at what they do.

Jeff Jarvis [02:02:39]:
Yeah, before we got on the air, we were talking about dicasteries. Yes. And what's that? Rumman quite likes that word now. She's going to use it.

Leo Laporte [02:02:47]:
What's a dicastery?

Fr. Robert Ballecer [02:02:49]:
It's, it's our way of saying department. It's, it's a fancy word for department.

Leo Laporte [02:02:54]:
Oh, okay. Um, anyway, it's great to have you. And thank you for staying up so late. Oh, I didn't even think of it. It's after midnight, isn't it?

Fr. Robert Ballecer [02:03:08]:
Actually, you got me at a good time, because we're in that 3-week window where the United States does daylight saving before Europe does. Ah, so it's only 8 hours right now.

Leo Laporte [02:03:18]:
You're gonna get very busy too. Uh, we're in the middle of Lent.

Fr. Robert Ballecer [02:03:21]:
We've got something coming up in a, in a, in a week or two here.

Leo Laporte [02:03:24]:
Yeah, yeah, there's a little thing. Do you, uh— do priests give up things for Lent? We do. We do. What have you— do you want to share what you gave up? I gave up sex. You're so irreverent.

Fr. Robert Ballecer [02:03:40]:
No, I actually— I gave up soda.

Leo Laporte [02:03:43]:
I gave up soda for Lent. Yeah, that's a good thing to give up.

Fr. Robert Ballecer [02:03:46]:
It is a very good thing. And the funny thing is, I always feel so much better every time I give up soda. I know. And then within like 3 weeks, I go, ah, I just, I want another soda.

Leo Laporte [02:03:56]:
Big soda's got you in its claws. It does. It does. Uh, it's very appealing. You know, when I was a kid, it was a big deal. We would get— my dad, because he really wasn't a very good cook, he'd bring home chicken lickin'. He called it pizza chicken night. He'd bring home a pizza, chicken lickin', and a bunch of, uh, Cokes.

Leo Laporte [02:04:20]:
Coke. And for some reason, in my mind, Coca-Cola and pizza and Coca-Cola and fried chicken, they just go together. And you get programmed, don't you?

Fr. Robert Ballecer [02:04:30]:
I wish I had never had soda because that burst of sugar, it just— it doesn't— for your brain, it's, it's basically heroin.

Jeff Jarvis [02:04:37]:
I used to drink 6 a day. Yeah. And then when I got atrial fibrillation after 9/11, I couldn't have caffeine, and so I gave up Coke entirely, and I managed to do it. But congratulations. My kids thought, there's no way, no way you're giving this up.

Leo Laporte [02:04:52]:
Yeah, 6 a day. Were they, were they sugared or diet? Oh yeah, yeah.

Jeff Jarvis [02:04:55]:
Oh, I hated the diet. I'd wake up in the morning and the first thing I'd have is a Coke. The bubbles wake you up, it's wonderful.

Leo Laporte [02:05:02]:
Yeah, a little jolt of caffeine.

Rumman Chowdhury [02:05:04]:
All of our parents also used it as, like, a reward, so in our heads, that's right, it's a reward.

Leo Laporte [02:05:09]:
It's a reward. Taste.

Fr. Robert Ballecer [02:05:12]:
But I remember that reward at McDonald's, and the cup of Coke was like this big. That was— and now it's like this big.

Rumman Chowdhury [02:05:20]:
So, and that's the small. That's the small.

Leo Laporte [02:05:23]:
Yeah, it's America for you. Also, along with Father Robert, we've got Jeff Jarvis, Professor Jeff Jarvis. Uh, are you a doctor? I didn't ask. No, God no. No PhD?

Jeff Jarvis [02:05:34]:
No, no, no, no, no. I don't have a master's. I've created 3 master's degrees and I'm working on creating a fourth, and I haven't had one myself. Oh, don't— I was with a bunch of academics. There's a wonderful academic named Andrew Pettegree whose book I'm about to read, and I was at St. Andrews in Scotland, and I was with him and a bunch of his academic colleagues, and I said, I started 3 master's degrees, and they looked at me like, well, why didn't you finish any of them? No, I don't mean that. I didn't mean that. I created them. But I feel too dumb.

Leo Laporte [02:06:01]:
And a whole— do you, uh, let's see. Uh, yeah, we'll take a break. We have a few more stories and we have some picks. You're watching Intelligent Machines, brought to you this week by Zscaler, the world's largest cloud security platform. It's pretty clear the potential rewards of AI are far too great for any business to ignore, but it's also clear the risks are as well. Loss of sensitive data, attacks against enterprise-managed AI. And of course, generative AI increases the opportunities for the threat actors, the bad guys, helping them rapidly create phishing lures that are so good you're bound to click. They're using it to write malicious code. We had some examples of that on Security Now last week.

Leo Laporte [02:06:47]:
And they use it to even do things like automate data extraction. Hey, you're using it. Why wouldn't they? It really is a problem with proprietary data being leaked. There were 1.3 million instances of Social Security numbers leaked to AI applications. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations last year. You got to do something about it. Fortunately, there is a solution. It's time for a modern approach with Zscaler's Zero Trust plus AI.

Leo Laporte [02:07:18]:
It removes your attack surface. It secures your data everywhere. It safeguards your use of public and private AI, and it protects you against ransomware and AI-powered phishing attacks. Don't take my word for it. Listen to what Siva, the Director of Security and Infrastructure at Zuora, says about using Zscaler. AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security.

Fr. Robert Ballecer [02:07:45]:
The benefit of Zscaler with ZIA rolled out for us right now is giving us the insights of how our employees are using various GenAI tools. So ability to monitor the activity, make sure that what we consider confidential and sensitive information according to, you know, company's data classification does not get fed into the public LLM models, et cetera.

Leo Laporte [02:08:08]:
Thank you, Siva. With Zero Trust plus AI, you can thrive in the AI era. You can stay ahead of the competition. You can remain resilient even as threats and risks evolve. Learn more at zscaler.com/security. zscaler.com/security. We thank them so much for supporting the show. Talking about AI risks, this was an appalling story.

Leo Laporte [02:08:32]:
We've talked before about how face recognition is so problematic. But you would hope that police departments wouldn't rely entirely upon face recognition to apprehend suspects. Well, unfortunately, the Fargo, North Dakota Police Department did. They had video of a fraudster walking into a North Dakota bank, passing a bum check or something. They fed it to a database of face recognition, and the name Angela Lips came up, a woman who lives in north central Tennessee, not North Dakota. They, uh, they called the police department in Tennessee, said, can you arrest her? They did. They put her in jail. She sat in jail for 4 months without bail, waiting for extradition.

Leo Laporte [02:09:26]:
She was extradited to Fargo, North Dakota, based solely on this face recognition. She said, I've never been to North Dakota. In fact, I'd never been on an airplane until they flew me to North Dakota to face charges. Charged with 4 counts of unauthorized use of personal identifying information, 4 counts of theft. The Fargo police never bothered to check, I guess, that she had a perfectly good alibi. She could prove she was in Tennessee when this video was taken in Fargo, North Dakota. They released her on Christmas Eve and didn't give her any money to get home. They stranded her.

Jeff Jarvis [02:10:13]:
It's a new episode of the show Fargo. They stranded her.

Leo Laporte [02:10:16]:
Local defense attorneys covered a hotel room and food on Christmas Eve and Christmas Day. A local nonprofit helped return her to her home. She's back home. But she says while she was jailed, she couldn't pay bills, so she lost her house, she lost her car, she lost her dog. She also said no one from the Fargo Police Department has apologized. Sue the bastards. I hope to God some attorneys come to her and say, yeah, we can get some money out of this.

Fr. Robert Ballecer [02:10:43]:
Come on, lawyers, get pro bono on this. I mean, seriously, 1,200 miles away from home, lost her life, everything that she had built up, gone, because a couple of people decided that they're going to trust a tool that they didn't really understand. There's zero accountability, zero responsibility for using the tool in the first place. This is— I mean, this should be science fiction dystopia. This should not be something that we're just accepting. I mean, the fact that this has not got just wall-to-wall press coverage is ridiculous.

Leo Laporte [02:11:16]:
Yes, terrible.

Jeff Jarvis [02:11:17]:
And the fault is human. Yeah, right. Um, don't say, oh, the tool screwed up.

Leo Laporte [02:11:23]:
It's not AI, it's humans, because they're trusting it too much.

Jeff Jarvis [02:11:26]:
Yep.

Leo Laporte [02:11:27]:
As Rumman said, uh, McKinsey paid a pen tester to hack it, uh, and it worked. McKinsey, uh, one of the world's best-known consulting firms, built an internal AI platform called Lilli for its employees. It had chat, document analysis, RAG over decades of proprietary research, AI-powered search. So they said, we decided to point our autonomous offensive agent at it. Didn't give it any credentials, didn't give it any insider knowledge, no human in the loop, just a domain name and a dream. McKinsey writes, within 2 hours the agent had full read and write access to the entire production database. You know, it's— fortunately it was their own red teaming of their own system. The agent mapped the attack surface, found the API documentation publicly exposed, over 200 endpoints fully documented; most required authentication, but 22 didn't.

Leo Laporte [02:12:38]:
I don't need to go on. You can—

Fr. Robert Ballecer [02:12:40]:
oh, my favorite part of that story, Leo, is the way that they— since the API was public, all they needed were the JSON keys, and the JSON keys were in the error logs of the database. So they were just able to use some SQL injection, get the error logs, and boom, you're in. That's fantastic.
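A sketch of that failure pattern in a hypothetical service (not McKinsey's actual stack, just the general anti-pattern): if an endpoint echoes raw exception text back to the caller, whatever secrets landed in the error message walk right out the front door.

```python
# Hypothetical illustration of the leak pattern: echoing raw errors to clients.
# Not the actual system from the story -- just the anti-pattern it describes.
import json

API_KEY = "sk-example-not-a-real-key"  # a secret that ends up inside an exception

def run_query(user_input: str) -> str:
    # Stand-in for a string-built SQL query blowing up on injected input.
    raise RuntimeError(f"query failed near {user_input!r}; auth context: {API_KEY}")

def leaky_handler(user_input: str) -> str:
    try:
        return run_query(user_input)
    except Exception as exc:
        return json.dumps({"error": str(exc)})  # hands the secret to the attacker

def safe_handler(user_input: str) -> str:
    try:
        return run_query(user_input)
    except Exception:
        # Log details server-side; return only an opaque message to the client.
        return json.dumps({"error": "internal error"})

print(leaky_handler("'; SELECT * FROM logs;--"))  # response contains the key
print(safe_handler("'; SELECT * FROM logs;--"))   # attacker learns nothing
```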

Leo Laporte [02:13:00]:
So actually, that's really not AI doom. That's good news, because the AI found it, and I'm sure they fixed it. And this is one thing we're really starting to see: AI being used in security audits very effectively. A new study says using AI leads to brain fry. Sigh. The article from Harvard Business Review quotes our friend Steve Yegge saying, uh, I had a palpable sense of stress watching Gastown. It was moving too fast for me. I know the feeling.

Leo Laporte [02:13:37]:
Um, yeah, so don't let your brain fry using AI. You know what, touch grass. We all got to touch grass a couple of days ago when Claude was down for like 5 hours. We were all sitting here, I was doing a show, I guess it was yesterday. It felt like ages. We were all sitting here doing a show and, uh, Darren or somebody said, hey, uh, Claude says it's overloaded. I said, what? And I tried it. Nobody could get into Claude. You should see it on Reddit.

Leo Laporte [02:14:03]:
People say, oh man, I had to go outside. Where's my friend? My friend's gone.

Fr. Robert Ballecer [02:14:10]:
You know, I think I actually had an example of brain fry. I was, uh, helping a colleague in a different part of the world who was extremely upset, because she had been using a couple of AI tools to help with her content production. Her brain fry was that she was so depressed that the work the AI tools created was, in her estimation, better than the work she had been creating.

Leo Laporte [02:14:34]:
That would be horrible, wouldn't it? Right.

Fr. Robert Ballecer [02:14:36]:
And so she was trapped in this job where she was basically just putting queries into AI, and she had given up trying to get any of her own style into it. And that's definitely brain fry.

Leo Laporte [02:14:47]:
That's kind of the reverse centaur in a way, right? You're— the AI is now doing the good part and you're doing the nasty part.

Rumman Chowdhury [02:14:55]:
Yeah. I mean, that's sort of— that's really bad imposter syndrome, like feeling that your work isn't good enough. Oh yeah, it's like a really bad imposter syndrome.

Fr. Robert Ballecer [02:15:03]:
Imagine if you have imposter syndrome and an AI confirms that you're not good enough.

Jeff Jarvis [02:15:08]:
Yes, but that's a subjective call, right?

Rumman Chowdhury [02:15:12]:
Who knows, like, what's better, right?

Fr. Robert Ballecer [02:15:14]:
Yeah, yeah. Objectively, I've read all of her work and no, she's better.

Leo Laporte [02:15:20]:
She really is better. Okay. Uh, Mike Masnick wrote about it, uh, yesterday or the day before yesterday in Techdirt: the sad case of a California state appellate court matter in which a hallucinated citation traveled through an entire legal proceeding, from a Reddit blog post, to a client's declaration, to an attorney's letter to the opposing attorney, to the opposing attorney's draft of the court order, to the judge's signature, to appellate filings. At no point along the way did anyone bother to check whether the case actually existed. Um, it's a story about, believe it or not, custody of a dog. Two people dissolving their domestic partnership. Each wanted shared custody and visitation of the dog Kira.

Jeff Jarvis [02:16:11]:
You take Fido and I'll take Claude.

Leo Laporte [02:16:14]:
Uh, in the case, one of the plaintiffs cited two cases: The Marriage of Twig and The Marriage of Tea Garden. Neither case exists. They came from a Reddit blog post by Sassafras Patterdale. Oh, come on. Uh, Munoz and her attorney did not actually realize the cases were fictitious. They attached the Reddit article as an exhibit to the declaration. Uh, Sassafras was identified as a blogger, a podcaster, an animal rescuer. Well, you know, there you go.

Leo Laporte [02:16:56]:
It was cited as a watershed California Supreme Court case that never happened, but everybody bought it and it went all the way through the court. The judge signed it. It went to the appellate court. They didn't question it.

Fr. Robert Ballecer [02:17:13]:
I mean, this is one of those fields that is most vulnerable to AI hallucinations, because so much of the legal profession is knowing citations and knowing precedents. And most of the time, when you write these briefs with these precedents in them, it sounds like an AI hallucination even if it's not, because it's just a citation and then a small quote, and then a citation and a small quote. So I could understand why someone reading one of these briefs would, first, not check on the citations, because there are so many of them, and second, not really notice that the wording is different, because it's not.

Leo Laporte [02:17:53]:
Well, and Mike points out that each step of the way, the fake citation got more legit. Yeah. Right? It started as a blog post, but then it's in the pleadings, and then it's the judge's court order. And so, each step of the way, it got more and more legit.

Fr. Robert Ballecer [02:18:08]:
If the judge is receiving it, he's assuming that his clerk and the 4 attorneys who looked at it before already checked it.

Rumman Chowdhury [02:18:14]:
Yeah, exactly. This is the problem right here. Nobody checks the AI's work. Like, literally nobody checks the AI's work, and that's the real problem.

Fr. Robert Ballecer [02:18:20]:
So what we need is an LLM that checks the hallucinations. That, that will fix everything.

Leo Laporte [02:18:26]:
And, uh, my good friend Kevin Rose partnered up. Remember Kevin had a thing called Digg back in the day? He started it up right after TechTV. It got him on the cover of, uh, BusinessWeek as the $60 million man. It was before Reddit. Reddit came along— Alexis Ohanian and Steve Huffman founded Reddit kind of as a clone, frankly, of Digg. Digg eventually fell to the bots who were gaming its algorithm, and after Digg 4, they kind of shut it down. Well, fast forward a little bit. Alexis Ohanian, who's done pretty well for himself, partnered up with Kevin to revive Digg and to revive the Diggnation podcast.

Leo Laporte [02:19:14]:
Digg came out of beta just a couple of months ago. Yeah, hardly at all. Immediately the bots were back. It has shut down again after 2 months. I know, I'm not laughing. I'm not laughing. They said, we thought, you know, we were going to use AI. We thought we could really solve this problem.

Leo Laporte [02:19:36]:
Digg CEO Justin Mezzell writes in a note pinned to the homepage of digg.com, we face an unprecedented bot problem. We knew that bots would be out there and would be a problem. We just didn't appreciate the scale, sophistication, or speed at which they'd find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. It's not just a Digg problem, it's an internet problem, but it hit us harder because trust is the product. We're not giving up.

Leo Laporte [02:20:16]:
Digg isn't going away. We're just going to rebuild, and Diggnation will continue recording while we work on a reboot. Digg got bought again.

Jeff Jarvis [02:20:28]:
I think I've told the story on the show in the past. My old boss Steve Newhouse, now the chairman of Advance Publications, loved Digg, wanted to buy it. There was no buying it, so he bought his second choice, which was Reddit.

Leo Laporte [02:20:41]:
Smart move. Yep. You might be interested in this. I imagine you go in for an ECG every once in a while, Mr. Jeff Jarvis.

Jeff Jarvis [02:20:50]:
I also own my little thing.

Leo Laporte [02:20:52]:
Yeah, uh, Cedars-Sinai has an AI system that can read echocardiograms and write the report. Uh, I know you'd like a cardiologist to, uh, validate.

Jeff Jarvis [02:21:04]:
So I just had the case where I had an MRI on my back after I injured it, right? And because the pain was so god-awful, and the hospital spine doctor— we were looking for what's the cause of my infection. And the hospital spine doctor said, well, it's not the spine, and so it's not my problem. Okay, it's over to you, infectious disease doctor. Bye, nice to meet you, Jeff. Boom, gone. But then I got another spine doctor, and he did another MRI, and he looked at it, and he said no. And the radiologist read the MRI, said no infection. He said, no, there's an infection there. That's why you feel so bad, and that's why we have to keep treating you on antibiotics for the next 2 months, more than 2 months.

Jeff Jarvis [02:21:42]:
Same data, different eyes, different perspectives. Using the AI, you know, complementarily, in a complementary fashion, fine.

Leo Laporte [02:21:58]:
But to have it read it, no. Well, I think it's what we've learned from the previous stories: maybe you want a human eye on this. Yeah. EchoPrime was trained on more than 12 million echocardiography videos paired with cardiologists' written interpretations. It's done very well, state-of-the-art performance on 23 diverse benchmarks of cardiac structure and function. Outperforming— well, I don't know if it's outperforming doctors. I don't see that; it's designed to assist clinicians. I guess that's important, not replace them. It produces a verbal summary cardiologists can review and act on rather than rendering an autonomous diagnosis.

Leo Laporte [02:22:36]:
So that's okay, right?

Jeff Jarvis [02:22:38]:
As long as the doctor looks at it and it challenges the doctor, fine. It is a second opinion.

Leo Laporte [02:22:44]:
Yeah, I like that challenge. I am not going to let a robot do surgery on me, but a surgeon in London says he's performed the UK's first long-distance robotic operation on a patient located 1,500 miles away. Robotic urological surgeon Professor Prokar Dasgupta said, it felt almost as if I were there. He carried out a prostate removal via robot.

Fr. Robert Ballecer [02:23:14]:
Robotic urological. I already know.

Jeff Jarvis [02:23:17]:
Well, I've been there, folks. I've gone to the OR and looked up at this tall— this thing that was taller than me, and I saluted it.

Leo Laporte [02:23:25]:
Gosh, did it operate on you? Yeah.

Jeff Jarvis [02:23:28]:
Yeah, I mean, the surgeon was there at controls, but he was 4 feet away from me.

Leo Laporte [02:23:32]:
Well, that's the thing, he doesn't have to be next to you, unless, I guess, maybe something goes wrong. Here is the surgeon with his head in the console, probably very much similar to what happened to you.

Jeff Jarvis [02:23:42]:
He was 1,500 miles away, not 4 feet away.

Leo Laporte [02:23:44]:
That's the only difference.

Rumman Chowdhury [02:23:45]:
Yeah, there must be latency in that control, right?

Rumman Chowdhury [02:23:47]:
Like, that doesn't—

Rumman Chowdhury [02:23:48]:
that can't be real-time.
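For what it's worth, raw distance isn't the hard part; a rough calculation (assuming typical fiber speeds and ignoring routing and processing delays):

```python
# Rough signal latency over 1,500 miles of fiber. Assumes light in fiber
# travels at ~200,000 km/s; real networks add routing and processing delay.

distance_km = 1500 * 1.609            # ~2,414 km
fiber_speed_km_per_s = 200_000
one_way_ms = distance_km / fiber_speed_km_per_s * 1000

print(f"{one_way_ms:.1f} ms one way")         # ~12 ms
print(f"{2 * one_way_ms:.1f} ms round trip")  # ~24 ms before network overhead
```

For comparison, the much-cited 2001 transatlantic "Lindbergh Operation" reported total round-trip delays in the 150 ms range as workable, so a couple dozen milliseconds of distance is tolerable; jitter and outages, as the next comment suggests, are the real worry.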

Fr. Robert Ballecer [02:23:52]:
Yeah, I mean, in the middle of a prostate surgery, I don't want to hear the phrase, oh, he's got to reboot his router. Yeah, that's— no, the patient— the Wi-Fi went out.

Leo Laporte [02:24:01]:
Can we, can we let a robot operate on you? The patient said it's a no-brainer, which is probably not the best phrase to use when you're getting operated on by a robot. But I guess if there's a shortage of doctors, this could be—

Jeff Jarvis [02:24:18]:
yeah, if you're, if you're, if you have a specialty and someone can't get to you because they're 1,500 miles away from the nearest specialist.

Rumman Chowdhury [02:24:24]:
Exactly. Yes, better than absolutely nothing, for sure. Yeah. Yes. Yeah, if that's, if that's the scale.

Jeff Jarvis [02:24:30]:
Train more doctors around the world, better first solution.

Leo Laporte [02:24:33]:
Last story: Travis Kalanick is back. Oh good. The founder of Uber, he wrote a very interesting post on his new site, Adams.co. He said, I never left. He was fired, of course, by the board. He says it was just an investor taking advantage of me because my mom had just died, my dad was seriously injured.

Jeff Jarvis [02:24:58]:
He blames Bill Gurley. We're gonna have Bill Gurley on if you want.

Leo Laporte [02:25:00]:
Oh good, we can ask Bill about this. After being booted from Uber— Uber, incidentally, at the time, remember, he brought in Anthony Levandowski. Travis's whole vision for Uber was really that the way Uber makes money is with self-driving vehicles, like Waymo, not with drivers. Ultimately, it's gotta be autonomous vehicles if it's gonna make any money. But as soon as he was booted, they sold off the self-driving portion of the company. Kalanick went and started a cooking pop-up called CloudKitchens, which turned out to be kind of a real estate play. And now he's put out a manifesto in which he says, really, all I've ever been interested in is automating the means of production. He says everything ultimately has to be grown, mined, manufactured, and then transported. And so that's what his new business is: growing, mining, transport. He says, "At Adams, we make," and this is the key, "gainfully employed robots, specialized robots with productive jobs that bring abundance to their owners and society at large.

Leo Laporte [02:26:21]:
And don't worry about losing your job because we're going to need lots of people initially." It looks like a pitch deck for investors, which is probably exactly what it is.

Fr. Robert Ballecer [02:26:34]:
If you look at that deck, I'm sure at some point in the deck it says we're enabling humans to be radically self-reliant, because that seems to be— yes, the catchphrase.

Leo Laporte [02:26:42]:
Yes, radically self-reliant means you're living in a van down by the river, if you're lucky. If you're lucky, you got a van. Otherwise, it's just you and the river. Uh, what if, he says, you had an industrial kitchen and needed to make 1,000 pancakes an hour? I couldn't think of a worse way to do it than with a human. A specialized machine that makes pancake batter at large scale, with a heated iron apparatus that could cook 100 pancakes at a time to golden-brown perfection. No awkward robotic arm flipping pancakes. Instead, precision cooking, ultra speed and throughput, efficient use of space, designed for the machine. This is where specialized robotics shine.

Fr. Robert Ballecer [02:27:20]:
Yes, and who's buying those pancakes now that no one has a job?

Leo Laporte [02:27:23]:
It's gonna make a pancake robot.

Rumman Chowdhury [02:27:26]:
Okay, how many people are trying to make a pancake industry?

Leo Laporte [02:27:30]:
I mean, we know Craig Newmark loves the pancake robot. He's got the money to do it.

Jeff Jarvis [02:27:37]:
Yeah, well, he actually just flies.

Leo Laporte [02:27:38]:
Yeah, he just flies around. He just goes to airports to get his automated pancakes.

Fr. Robert Ballecer [02:27:43]:
We had a pancake robot in the, uh, in the Brick House. You did? We did, we did. I missed that. We had it on The New Screen Savers.

Jeff Jarvis [02:27:51]:
But I thought you had— no, you had a, you had a different bread maker machine.

Leo Laporte [02:27:54]:
Oh yeah, we had an Indian chapati maker. And actually, somebody, one of our employees, took it home. It's at their home. I can't remember who has it. Oh, somebody has it. It wasn't— they weren't very good. You were gonna make better—

Jeff Jarvis [02:28:09]:
the me and Stacy— the bread. You never did.

Leo Laporte [02:28:14]:
Oh yeah, I was going to send it to you in an envelope. You bet. Back in the days when you had money.

Fr. Robert Ballecer [02:28:22]:
I miss the days of the Leo box, the mystery boxes that would show up every once in a while.

Leo Laporte [02:28:27]:
They'd be like, ooh, it was a roti maker. Thank you.

Jeff Jarvis [02:28:31]:
Roti. That's right.

Leo Laporte [02:28:32]:
It was a roti maker. And somebody has it. It's still, it was still in the world. Oh, wait a minute. No, this was it. Printing pancakes with PancakeBot. Yes, there it was.

Jeff Jarvis [02:28:44]:
Oh, you had it. Oh, you really did.

Leo Laporte [02:28:45]:
Okay, look at all these pancakes. You're right, Robert. Look, there's the Twit pancake. Oh, you're right, we had a pancake bot.

Fr. Robert Ballecer [02:28:52]:
Oh, I remember the tech. Oh, it made me.

Leo Laporte [02:28:55]:
That is as good a portrait as it ever built. You can kind of see some features in there though. You got, you know, there's some nose and mouth. I want to eat this so bad, I really do. Um, so this is the— they tasted pretty good. I mean— yeah, I should send this to Craig Newmark. Yeah, just out of Austin. Here's the guy who invented the PancakeBot.

Leo Laporte [02:29:18]:
In our— in that little screen, there's Megan Moroney, PancakeBot creator. You're doing this and I'm fearing The Screen Savers is going to take us down from YouTube. No, no, this is our screen. I know, I know, I know, I know.

Jeff Jarvis [02:29:29]:
I'm joking.

Leo Laporte [02:29:29]:
Wouldn't that be funny though if the old screensavers took it down? Or something. Pancakes anymore. This looks like a 3D printer.

Rumman Chowdhury [02:29:36]:
It's just a 3D printer with pancake batter.

Leo Laporte [02:29:38]:
It's a 3D— it is exactly what it is, with batter. 3D printer with batter. He made it work really well.

Fr. Robert Ballecer [02:29:44]:
Cleaning it was a pain. I remember that.

Leo Laporte [02:29:47]:
Yeah, it always is. And as usual— well, not as usual, sometimes— because, uh, Jeff and I are old men, we read the obituaries every morning. And thank goodness we're not in them. I should mention that Jürgen Habermas has passed, and, uh, and many people will know that, uh, Jeff refers to Habermas whenever he wants you to take a shot.

Jeff Jarvis [02:30:11]:
What? Well, Gutenberg and Habermas.

Leo Laporte [02:30:14]:
So, uh, the only— besides you, the only person I've heard mention Habermas is, uh, Alex Karp, the founder of Palantir, who studied with Habermas, in German philosophy.

Jeff Jarvis [02:30:26]:
If you go to my blog or my Medium feed, I put up a section from The Gutenberg Parenthesis about Habermas and coffeehouses. So tell us about—

Leo Laporte [02:30:36]:
he was, by the way, 96, so he had a good long life as a philosopher.

Jeff Jarvis [02:30:42]:
He created the notion of the public sphere, the bourgeois public sphere, in a book that was very influential. It took years before it got translated into English, which was interesting. There was a delayed effect. And he argued that in the salons and coffeehouses of England and France, there was reasoned, civil public discourse, and we should keep on going back to that. In my research for The Gutenberg Parenthesis, still on sale now in paperback, you know what I found was that the coffeehouses were not so civil. Oh. Wrong.

Jeff Jarvis [02:31:17]:
It was trying to— it was almost a conservative view that we try to recreate and recapture some magic time that never was.

Leo Laporte [02:31:26]:
As often we do. As often we do. But were they places for conversation?

Jeff Jarvis [02:31:30]:
Oh, very much so. And what impressed him was in a country that was fully— in England that was fully class-based, anyone could sit anywhere and was expected to do so. It broke down class barriers. But there were also fistfights. There were also arguments, right?

Leo Laporte [02:31:44]:
Well, anywhere people gather, there are fistfights.

Jeff Jarvis [02:31:47]:
And this is really the beginning of public discourse in important ways. There were publications, The Tatler and The Spectator, and they would listen to what was happening in the coffeehouses, and that would appear in the publication. The publication would come back in and feed the conversation in the coffeehouse, and it was this cycle of public discourse. It was a fascinating thing to discover, to study, and he was very much, very provocative and right in lots of ways, but many disagree with him. The other problem was that he called it inclusive. Well, it only included those people who could afford to go and buy coffee and sit there all day. It didn't include women. It didn't include people whose skin was not white.

Jeff Jarvis [02:32:25]:
And so it wasn't as inclusive as he thought. So there are arguments from the feminist perspective, and from a race perspective there were arguments about this. Nonetheless, give Habermas credit, even though his prose, as I said in one of my earlier books, was as hard to digest as a cold German sausage. Really hard to read. Especially when you're used to reading it. The translators often give up and just put the German words in parentheses. They don't know how to translate it. But he provoked tremendous discussion about what does the public mean.

Leo Laporte [02:32:57]:
I'll tell you how important the coffeehouses were. As you point out in The Gutenberg Parenthesis, King Charles II eventually issued a proclamation for the suppression of coffeehouses because he thought that they were seditious.

Jeff Jarvis [02:33:09]:
Yes. That's very much like the FCC today. Fake news.

Leo Laporte [02:33:12]:
The source of fake news. Yeah.

Fr. Robert Ballecer [02:33:15]:
Habermas is very important in Jesuit formation. Oh, is he? Yeah, absolutely. He's one of the philosophers that we very much push in our early formation, because of critical theory, this idea that all social constructs, everything from truth to knowledge to class, develop from the relationship, the power dynamic, between the dominant and the oppressed groups. That's a very, very important and usable concept throughout both philosophy and theology.

Leo Laporte [02:33:44]:
Man, you Jesuits are smart.

Fr. Robert Ballecer [02:33:46]:
No, I'm serious. You learn a lot of trivia. Learn a lot of trivia.

Leo Laporte [02:33:51]:
My dad went to a Jesuit high school, Regis, and a Jesuit college, Fordham, and he always called the Jesuits God's Marines. I don't know what that means, but that's actually—

Fr. Robert Ballecer [02:34:04]:
that's, that's true. So on July 4th, I'm taking my final vows here in Rome. Are you? It's like our last step.

Leo Laporte [02:34:11]:
Congratulations! That's wonderful. It's a very drawn-out process. You've been going through this literally for decades. 32 years. Wow, Robert. Wow. Are you unusually slow, or is this normal, that it would take that long?

Fr. Robert Ballecer [02:34:25]:
Um, no, uh, I am slow because I've been jumped around so much, from, like, DC to Hawaii. Finally we got to the place where Father General, who I live with here, just said, no, we're just going to do it. Let's do it now.

Leo Laporte [02:34:38]:
Let's do it. But there's a lot you have to do. I mean, it's like getting a PhD. I mean, there's a lot you have to do to get it.

Jeff Jarvis [02:34:46]:
Congratulations, I'm so happy for you. That's great.

Fr. Robert Ballecer [02:34:49]:
Then the fourth vow is special obedience to the Pope, and that's where that God's Marines thing comes from, because the Pope can actually say, I need someone here, and we've taken the vow saying, okay, I'll go. It doesn't matter what I'm doing, I'll pack up and I'll go.

Leo Laporte [02:35:03]:
Is that the gang sign, by the way? Is the sign of the four? Fourth vow, baby. Fourth vow. Hey, I did not know that you— that's because we— I've been watching this progress for at least 10 years.

Jeff Jarvis [02:35:18]:
I had no idea. Wow.

Leo Laporte [02:35:19]:
And I know there was a— you know, you had to do these retreats. There was a lot you had to do. Oh yeah. Uh, congratulations. Thank you. That's such great news.

Jeff Jarvis [02:35:27]:
Is this— is that how high it goes?

Rumman Chowdhury [02:35:29]:
Is there 5? Is there 5 or 6? 5? What?

Jeff Jarvis [02:35:33]:
Another final? It says final.

Fr. Robert Ballecer [02:35:35]:
4 is it?

Rumman Chowdhury [02:35:36]:
So you're like, this is the most Catholic you can ever be. This is it. This is—

Fr. Robert Ballecer [02:35:40]:
so technically, after I do this, I'm no longer in formation. So I'm a fully formed Jesuit. So it only took 32 years.

Jeff Jarvis [02:35:49]:
Wow. What's the ceremony? What's the—

Fr. Robert Ballecer [02:35:51]:
what happens? Uh, so here in the big chapel that we have in our house, the Borgia Chapel, which is— this is our motherhouse, um, I will profess my vows again before Father General, and then we do a— not a secret, but it's a solemn ceremony in the back with just Jesuits where I will make a bunch of promises and then the fourth vow.

Leo Laporte [02:36:11]:
Oh, that's great. Wow, that's wonderful. Do you get a lobster hat to wear? I should ask about that. No, I shouldn't be irreverent. That is so— I'm so happy for you. That's fantastic. Thanks. Um, is there any insignia or sign that you can wear, hash marks on your sleeve, or— no, it's epaulettes.

Fr. Robert Ballecer [02:36:31]:
There used to be. That's actually where this comes from. This caused a lot of hurt, because it used to be that when you got to this point, you were judged, and if you did not meet the standards, you would not get the fourth vow. You would get only three vows. But my generation has really turned that around. We don't see it that way. It's not an extra bonus. It's not status that you have 4 vows.

Fr. Robert Ballecer [02:36:57]:
It just means that the work you do, you've completed, allows you— correct, correct.

Leo Laporte [02:37:01]:
Yeah, they don't like take your soul and weigh it up on a scale with a feather or anything like that. There's no— that's—

Fr. Robert Ballecer [02:37:07]:
oh, I love that. The— isn't that great? The feather and the heart.

Leo Laporte [02:37:11]:
It's the Book of the Dead. Yeah, yeah, that's the old Egyptian way. Uh, speaking of, uh, great announcements, once again, let's reiterate: today Jeff brought us the first scoop. It's now on the blog. Your new book series, Intelligence, AI, and Humanity, begins, and it begins with our guest, Rumman Chowdhury. This is going to be for Bloomsbury Academic. How many books will there be?

Jeff Jarvis [02:37:42]:
3 to 5 a year. A year? This has been a process to get this far, but I'm delighted. This is a big deal. We're here, and these 3 authors are signed up: Matthew Kirschenbaum, Charlton McIlwain, and Rumman Chowdhury. It's a great beginning. And so they won't come out until early next year, which is what happens in books. But I'm looking for people to come to me and ask, you know, topics, questions like: what is education? What does learning mean now? What is creativity? What is consciousness? Those kinds of topics.

Jeff Jarvis [02:38:17]:
I want to look, Father Robert, at this notion of the hubris of man thinking he creates the Übermensch, and what does it mean to put yourself in the position of thinking that you're godlike? What are the theological implications of AI? There are lots of things. Maybe, could I get the Holy Father, maybe, to write a book for it?

Fr. Robert Ballecer [02:38:40]:
Probably, actually. Oh my goodness.

Leo Laporte [02:38:42]:
I'm not joking about that.

Jeff Jarvis [02:38:45]:
You witnessed it here first. That would be like Mark Twain's publishing house. One of the things that actually took it down was they were very excited; they thought that everyone, every Catholic in the world, would buy a biography of, I think it was Pope Leo XIII, I believe. Oh yeah, and it didn't sell quite as well as they hoped.

Fr. Robert Ballecer [02:39:10]:
And look, the work that the Pope would be putting out would be an encyclical or an official letter, so they tend to be kind of dry and very technical. Yeah, you wouldn't want Leo. You would want one of the Cardinals, or no, even better, one of the— just the priests who are working on the commission, because their stories, that would be far more interesting.

Jeff Jarvis [02:39:31]:
There's so much to be— my point about all this: Haaris Naqvi, who's the publisher of many other titles at Bloomsbury Academic, called me one day and said, think about a book series about AI, you want to edit it? Hell yes. And what excites me about this is it's not a book about the technology, it's about society and the technology. Much more interesting. How technology reflects on society, and then turns back. And I think the opportunity here, it forces us, as the conversation with Rumman did, it forces us to revisit, reimagine many topics about our life in society. So that's what's exciting.

Leo Laporte [02:40:09]:
And it's what we like to do on this show too. Exactly. Not just talk about the technical details. Well, that's great. Congratulations.

Jeff Jarvis [02:40:18]:
Thank you. Thank you for the opportunity; I'm proud to announce it here. Yeah, we were down to the wire on Rumman's contract and agent, and I was pushing both sides: please get it done before Rumman's on next week.

Leo Laporte [02:40:30]:
Oh, that's good. Well, it was nice that we could help you with some leverage. Thank you. Yes, let's wind this up, as we always do, with picks of the week. Normally we'd start with Paris. I don't know, Father Robert, if you've got anything in mind that you might want to promote or talk about?

Fr. Robert Ballecer [02:40:48]:
Not for myself, uh, that I can talk about. I will say that I am so, so happy with what I've seen, uh, in the, the film version of Project Hail Mary. Oh, really? Ah, seriously. I mean, I—

Leo Laporte [02:41:03]:
The reviews are just like The Martian's: positive.

Fr. Robert Ballecer [02:41:05]:
Yeah, I did not know how they were going to turn The Martian into a decent film, because I loved the book, but they did. And I think they've done it again with Project Hail Mary.

Leo Laporte [02:41:13]:
It's funny, the guy who wrote the script said he thought, there's no way I can take this book and make a movie out of it. But the reviews have been very positive. Lisa and I have tickets to see it Thursday. Uh, very excited about it.

Fr. Robert Ballecer [02:41:28]:
I will have to wait till I get back to the States in, uh, in April.

Jeff Jarvis [02:41:31]:
But, uh, damn release schedules.

Leo Laporte [02:41:33]:
Yeah, we will return with our picks of the week. Congratulations to both of you. It's kind of fun to work with such prestigious fellas. I'm a mere podcast host, and I get to hang out with smart guys like you guys. Father Robert Ballecer, the Digital Jesuit, soon to be a member of the Club of the Four, the Sign of the Four. Mr. Professor Jeff Jarvis. And don't forget, Hot Type is still coming before the new books come out. Hot Type is just around the corner in August.

Leo Laporte [02:42:10]:
This episode of Intelligent Machines is brought to you by Modulate. Every day, enterprises generate millions of minutes of voice traffic. I mean, we're talking customer calls, agent conversations, fraud attempts, right? Most of that audio is still treated, you know, basically like text: flattened into transcripts, stripped of tone, intent, and most importantly, risk. Well, Modulate exists to change that. Modulate started in gaming. Modulate's technology was proven by supporting major players like Call of Duty and Grand Theft Auto. As you might imagine, these massively multiplayer games have a lot of audio, players talking to each other. Modulate helped these companies separate playful banter from intentional harm at scale.

Leo Laporte [02:42:59]:
Not easy to do, by the way. Today, Modulate helps enterprises, including Fortune 500 companies, understand 20 million minutes of voice every day by interpreting what was said and what it actually means in the real world. This capability is powered by Modulate's newest ELM, E-L-M: Velma 2.0. Velma is a voice-native— we were just talking about specialized models— a voice-native, behavior-aware model built to understand real conversations, not just transcripts. It orchestrates 100+ specialized models, each focused on a distinct aspect of voice analysis, to deliver accurate, explainable insights in real time. Velma does really well: it ranks number 1 across 4 key audio benchmarks, beating all the large foundation models in accuracy, cost, and speed. It's number 1 in conversation understanding, number 1 in transcription accuracy and cost, number 1 in deepfake detection. That's huge. And number 1 in emotion detection.

Leo Laporte [02:44:05]:
That's hard. Built on 21 billion minutes of audio, Velma is 100 times faster, cheaper, and more accurate than LLMs at understanding speech. That includes Google's Gemini, OpenAI, and xAI. Most LLMs are black boxes. Velma doesn't just assess a conversation as a whole, conversation in, transcript out; it breaks it down for greater accuracy and transparency by producing timestamped scores and events tied to moments in the conversation, meaning you can see exactly when risk rises, when behavior shifts, when intent changes. With Velma, you can improve your customer experiences, reduce risks like fraud and harassment, detect rogue agents, and more. Go beyond transcripts and see what a voice-native AI model can really do. Go to Modulate's live ungated preview of Velma at preview.modulate.ai. That's preview.modulate.ai to see why Velma ranks number 1 on leading benchmarks for conversation understanding, deepfake detection, and emotion detection.

Leo Laporte [02:45:10]:
That's Velma, at preview.modulate.ai. We thank Modulate so much for supporting Intelligent Machines. Father Robert recommended we all go to the movies, which I'm going to be doing. I'm very excited about seeing it. You know, just like you, I had some trepidation.

Fr. Robert Ballecer [02:45:30]:
Yeah, it's a complicated book, but from the clips I've seen, they got the tone right. They got the playful tone, the amazement tone. And Gosling actually might be the right actor for that.

Leo Laporte [02:45:42]:
Yeah, so we had Andy Weir on, and he had just learned that Ryan Gosling was going to play the role and that the brothers were going to direct it. And honestly, I was like, Ryan Gosling, really? But actually, the more I think about it, the more I could see how he could play that kind of nebbishy, uh, well, I don't want to give anything away. Yeah, exactly. Character. That's a funny point, though.

Fr. Robert Ballecer [02:46:10]:
I had him on Triangulation right after it was announced that Matt Damon was going to play the character in the movie, right? And you had him right after Project Hail Mary. So for both of his books that got turned into movies, he was on TWiT.

Leo Laporte [02:46:26]:
We have— oh, we've interviewed him for every book. Yeah, and I hope to interview him when the movie comes out. We'll try to get him. And he's a great guy, and I think pretty well disposed towards the network.

Rumman Chowdhury [02:46:36]:
So, I mean, Ryan Gosling is also a good actor. He's not just a pretty boy; he's a good actor.

Leo Laporte [02:46:42]:
Okay, fair enough. If you say so, I believe you. No, you know what, I was spoiled by La La Land, I'll be honest. I am going to take a paragraph from Jeff Jarvis's Gutenberg Parenthesis and put it into my pick of the week.

Jeff Jarvis [02:47:05]:
I've got this— There you go.

Leo Laporte [02:47:07]:
To the written word, to the written word I say. So my pick of the week, actually I have several, but I'll start with this one, is Kagi's Translator. Kagi's Translator is really good. I am a Kagi fan. We had Kagi's CEO on a couple of months ago. Kagi does a variety of languages, you know, Chinese, English, all the usuals, but they also have fun languages, corporate jargon, Dothraki, Elvish, emoji speak, Gen Z, High Valyrian, Klingon. But I thought we should see if we could turn Jeff's academic passage into LinkedIn speak. It also has Middle English, Na'vi, and pirate speak.

Leo Laporte [02:47:52]:
Might be better in pirate. Not that one, no, no, no, please. No pirate speak? No pirate speak, no. Okay, well, I'm gonna, well, too late.

Jeff Jarvis [02:47:59]:
You just did it.

Leo Laporte [02:47:59]:
Habermas must be thinking too highly, and not only of the scurvy dogs which frequent the coffeehouses, but their parley as well. Hey, that's pretty good. Building his tale on the belief that their bickering be rational and critical. How about LinkedIn speak? I don't even know what that— oh, it gives it bullet points with emojis. And it gives it tags: thought leadership, networking, Habermas, community building, public sphere. Yeah, look at that. I wonder how it is in Dothraki. Habermas is correct for the— Worth's in German.

Leo Laporte [02:48:36]:
Emoji speak? Have you ever written your books in emoji?

Jeff Jarvis [02:48:40]:
Actually, Hot Type, I'm very proud to say, has an emoji in it. Oh, very nice.

Leo Laporte [02:48:46]:
As it should if it's hot type. What about Gen Z? Sure. Habermas was low-key glazing the coffeehouse crowd, acting like their yapping was actually deep and logical. Cowan called him out for being a total circular-logic merchant, saying he just fell for the hype Addison and Steele were selling in their mags, mags that were literally trying to manifest that exact vibe. We should send that to Paris.

Jeff Jarvis [02:49:11]:
Paris, I was just thinking that.

Leo Laporte [02:49:12]:
See if it resonates. Anyway, this is a lot of fun. There's also Reddit speak, which I don't know. So Habermas basically idealized the hell out of coffeehouse culture. He didn't just hype up the people there, but also the discussions, claiming the debates were peak rational and critical thinking. But then Cowan comes in like, hold my beer, and points out that Habermas is basically caught in a circle. This is pretty good.

Jeff Jarvis [02:49:36]:
Funny. This is good. Isn't it? It's good.

Fr. Robert Ballecer [02:49:39]:
Leo, Jeff and I were talking about this when you went for a bite, because I showed him one: the LinkedIn speak.

Leo Laporte [02:49:46]:
So the English was, oh, you'd already done this.

Jeff Jarvis [02:49:48]:
Oh no, no, no. Let me tell you the example.

Leo Laporte [02:49:51]:
So what did you use?

Fr. Robert Ballecer [02:49:53]:
I have been arrested for fraud.

Leo Laporte [02:49:55]:
And what did it say?

Fr. Robert Ballecer [02:49:58]:
It said I'm thrilled to announce that I'm starting a new chapter. I've recently been given the unique opportunity to step back and reflect on my professional journey from a high-security environment.

Leo Laporte [02:50:08]:
Finally, I'll get to write that book. Wow, that is pretty awesome. So thank you, Kagi, for doing something pretty, pretty great. And then one other site I'll show you, because we've been talking a lot about local models. I have a really good little program called LLMFit. It's an open-source program you can find on GitHub and run on your machine to see if you can run an AI locally. But maybe this would be easier.

Leo Laporte [02:50:32]:
It's called canirun.ai. You can tell it what machine you have. Oh, and what, you know, graphics capability and so forth. So let's say you've got one of those brand new M5 Max computers with how much RAM? Let's say 64 gigs of RAM. And you can see which models will run best on that hardware. These are the local models, so this is very handy. Mistral Small. You can only run Mistral Small on that puny little girly machine of yours.

Leo Laporte [02:51:10]:
So anyway, you can choose it for code, you can choose providers, you can choose licenses. You can, you know, choose what your standard would be, how you would sort it, and so forth. I think this is very nicely done. It's canirun.ai. And then there's the one I have actually used that's on GitHub, the open-source tool called LLMFit, which you can also download and run. And it works quite well. Same idea, although it takes a lot longer.

Leo Laporte [02:51:42]:
Because it's actually gonna, you know, work on your machine. And it's a TUI, which, as you know, I'm quite fond of. Jeff, your pick of the week.

Jeff Jarvis [02:51:53]:
So let's see, we could have schadenfreude over BuzzFeed, the bankruptcy and all that, but I won't do that. Instead, we have the Washington Post trying a White Castle burger from an airport vending machine.

Leo Laporte [02:52:07]:
Oh, I thought you did it.

Jeff Jarvis [02:52:08]:
No, I didn't. Well, I'm going to get to that, my personal one, in a second. Oh, okay. So, it was bleak, says the Post. Now, of course, it also points out that there are no White Castles in Boston, so they don't know how bleak a White Castle is normally. See, I would imagine—

Leo Laporte [02:52:22]:
I mean, White Castle, the whole key to the White Castle is piping hot, dripping grease, which soaks up into the bun, steamed over the onions.

Jeff Jarvis [02:52:30]:
Yeah. And the bun, the flavor crystals, the steamed, gushy part to it.

Leo Laporte [02:52:34]:
Yeah. This is a vending machine at Terminal A at Logan. There's a California Pizza Kitchen in the men's room.

Jeff Jarvis [02:52:44]:
Great. Good thing it's close by.

Leo Laporte [02:52:47]:
At least it's nearby. Wow.

Fr. Robert Ballecer [02:52:48]:
I mean, Spain has vending machines that sell ham, so I mean, jamón ibérico.

Leo Laporte [02:52:55]:
But I bet it's good. I mean, ham probably does all right in a vending machine. Probably.

Jeff Jarvis [02:52:59]:
Yeah, I'm thinking. So, in the spirit of this: after all the attention in the last week or so for the CEO of McDonald's eating the Big Arch with no enthusiasm, which got to be such a mess, I decided, because I have to have more iron, that I would sacrifice for the show and my—

Leo Laporte [02:53:14]:
and, and, and is there a picture of you?

Jeff Jarvis [02:53:17]:
I didn't take a picture, it was too disgusting. I went in and I bought a Big Arch. I ate less than half of it, and it was— the bun was okay, but they put so much special sauce on it. It's 2 quarter-pound patties, 3 slices of cheese.

Leo Laporte [02:53:32]:
Oh, that's too much.

Jeff Jarvis [02:53:33]:
There's lettuce and grilled onions and the sauce. Well, the sauce is such that when you try to bite it— the reason the CEO would be cautious is that when you bite into it, the patties start slipping out. It was disgusting. It was a big mess. Uh, so I saved you $10, folks. $10.

Leo Laporte [02:53:54]:
Yeah, go instead and spend $34 and get a French dip sandwich at Saul Hankson. Yeah, exactly. It'd be much better.

Fr. Robert Ballecer [02:54:00]:
I had one of the Big Arches, but that was only because it was free.

Leo Laporte [02:54:04]:
Ah, wait a minute, did they deliver? How come it was free? No, so the McDonald's—

Fr. Robert Ballecer [02:54:09]:
There is a McDonald's on Vatican property, right next to St. Peter's. The reason why they allowed there to be a McDonald's on Vatican property is that that McDonald's agreed to give away X number of meals to the homeless every day. Oh. And so I was there towards the end of the night, and they said, well, Father, would you like this?

Jeff Jarvis [02:54:29]:
What'd you think?

Fr. Robert Ballecer [02:54:31]:
Uh, I don't think they should be feeding that to the homeless.

Jeff Jarvis [02:54:37]:
Well said.

Leo Laporte [02:54:38]:
I worked, uh, when I was a kid in high school, I worked at a McDonald's. And, uh, McDonald's, you know, has very tightly controlled inventory. They don't want the employees eating the food and so forth. But they're also very careful about when a hamburger has been sitting in the bin too long; they don't want to sell it. So they have a white plastic bin called the waste bin. And when a hamburger has exceeded its time limit in the bin, they throw it in the waste bin. And at the end of the day, you count the waste to make sure that everything is accounted for, which I suppose is a good inventory practice.

Leo Laporte [02:55:14]:
But we thought, geez, it's such a waste. Maybe we could donate this to the local dog pound, you know, the shelter. Oh yeah, it'd be nice, the dogs would like it. The shelter turned it down. They said there's not enough protein. Yeah, we don't want it. No, we don't want your steak burgers.

Fr. Robert Ballecer [02:55:30]:
In Italy, we don't use the same nuggets and burgers that they use in the United States because they're not classified as food. They won't let them come into the EU.

Leo Laporte [02:55:39]:
You mean pink goo is not food?

Fr. Robert Ballecer [02:55:42]:
By the way, I worked at McDonald's as well when I was a kid. Did you? Yep, the one at Mission, uh, Mission Hills.

Leo Laporte [02:55:48]:
And Jeff worked at Ponderosa Steakhouse.

Jeff Jarvis [02:55:51]:
We had to count— they had these little tiny white cups for the sour cream. They could charge you too much for your— you'd need 5 of them. We had to count every little cup.

Leo Laporte [02:56:00]:
Wow. Well, I'm just saying, thank God for the Ozempic, because otherwise I'd be craving a Big Mac right about now. Yeah, but if you see the Big Arch, it's just so over—

Jeff Jarvis [02:56:12]:
It's just so American. It's over the top.

Leo Laporte [02:56:15]:
Yeah, I mean, I was hooked on McDonald's for a long time. Oh, I would think it was from working there and eating so much of it. Yeah, it's full of sugar. It's just like your Coca-Colas.

Jeff Jarvis [02:56:24]:
It really was always my hangover cure, because it was like rice to a Chinese person. It was American. It couldn't be more—

Leo Laporte [02:56:31]:
My hangover cure is Taco Bell. Yeah, Taco Bell.

Fr. Robert Ballecer [02:56:34]:
I miss Taco Bell here.

Rumman Chowdhury [02:56:35]:
McDonald's is the last place in America though where you can get a $5 meal.

Leo Laporte [02:56:41]:
Well, it's true, although it wasn't true for a while. They had to really kind of—

Fr. Robert Ballecer [02:56:45]:
They just introduced the $3 value meal, which includes, uh, the Sausage McMuffin and an orange juice or something. It's so— yeah, they're trying to make it affordable again.

Leo Laporte [02:56:55]:
God bless them, you know.

Jeff Jarvis [02:56:56]:
They need to, because one of my wife's students works at McDonald's. My wife teaches ESL, and the student's hours have been cut back because, yeah, prices have gone too high.

Leo Laporte [02:57:07]:
I, I was actually very grateful that I, uh, my first job was McDonald's. I really learned how to work. You know, they say, don't— you're never standing still. You're always— if you, if you don't have something to do, clean. You know, always, always be working. And the day went by a lot faster because of it.

Fr. Robert Ballecer [02:57:22]:
Or what I would do: I used to sabotage the shake machine. That was kind of my— that was my job.

Leo Laporte [02:57:28]:
Well, this was very early on. We had shake machines, but we didn't have McFlurries yet. So, Father Robert, so nice to see you. Congratulations on your ascension. Is it called that? No, just final vows.

Fr. Robert Ballecer [02:57:41]:
Ascension sounds like I'm converting into energy or something.

Leo Laporte [02:57:46]:
I'm very happy for you. That's such wonderful news. And I hope that we get to see you soon, maybe even in the Bay Area, but at least on our microphones here for the podcast. We love having you on. Father Robert Ballecer, the Digital Jesuit, PadreSJ on Bluesky and all the other platforms. And of course, the Jesuit Pilgrimage app on iOS and Android. It's a great way to follow Loyola's pilgrimage across the world. Thank you, Robert.

Leo Laporte [02:58:21]:
Jeff Jarvis, congratulations to you too. Congratulations on the new book series. Very exciting. Very happy for you. Thank you for the opportunity to plug it here. Yeah. Thanks for letting us be the first to tell the world. Jeff's book, Hot Type, is available for pre-order.

Leo Laporte [02:58:38]:
You can also get The Gutenberg Parenthesis, now in paperback, and Magazine, a wonderful read. And he will be back next week with Ms. Paris Martineau for another thrilling, gripping edition of Intelligent Machines. We do the show every Wednesday, uh, right after Windows Weekly, 2 PM Pacific, 5 PM Eastern, 2100 UTC. You can watch us live in the Club Discord. Thank you, club members, for making that all possible. Actually, for making everything possible. Without the club members, I don't know what we would do.

Leo Laporte [02:59:08]:
If you haven't joined yet, twit.tv/clubtwit, please join the club. You can watch us live. Everybody can watch us live during the show production on YouTube, Twitch, x.com, Facebook, LinkedIn, and Kick. After the fact, shows end up at twit.tv/im or on YouTube. There's an Intelligent Machines channel there for the video. It's a great way to share little clips with friends and family. Spread the word, spread the goodness. And of course, you can subscribe in your favorite podcast client and get it automatically the minute it's done.

Leo Laporte [02:59:40]:
Thank you, everybody, for joining us. We'll see you next time on Intelligent Machines. Hey everybody, Leo Laporte here, and I'm going to bug you one more time to join Club Twit. If you're not already a member, I want to encourage you to support what we do here at Twit. 25% of our operating costs come from membership in the club. That's a huge portion, and it's growing all the time. That means we can do more; we can have more fun. You get a lot of benefits: ad-free versions of all the shows.

Leo Laporte [03:00:13]:
You get access to the club Discord and special programming like the keynotes from Apple and Google and Microsoft and others that we don't otherwise stream in public. Please join the club if you haven't done it yet. We'd love to have you. Find out more at twit.tv/clubtwit, and thank you so much.
