Transcripts

Security Now 1064 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
 

Leo Laporte [00:00:00]:
It's time for Security Now. Steve Gibson is here. We have things to talk about, including the security of OpenClaw. I'll give you a hint: there is none. We'll also talk about using AI to code apps, the GDPR fine collection process, and the most powerful cyber component of the Midnight Hammer operation. We're talking about cyber offense with Steve Gibson next on Security Now.

TWiT.tv [00:00:30]:
Podcasts you love from people you trust.

Leo Laporte [00:00:34]:
This is TWiT. This is Security Now with Steve Gibson, episode 1064, recorded Tuesday, February 10th, 2026: Least Privilege. It's time for Security Now, the show where we talk about your security, your privacy, staying safe online, science fiction, vitamin D, whatever suits this fellow right here, the man of the hour, Mr. Steve Gibson of GRC.com. Hi, Steve.

Steve Gibson [00:01:07]:
Mostly things that concern our security and privacy and, you know, computer tech and things. Yeah, I got an email from someone saying, you know, I'm in a corporate IT environment and in charge of security, and you guys are, like, talking about AI a lot. And I thought, well, that's true, but it's what's happening right now. And it's writing code, and we don't know about the security implications of that. You know, your comment last week that AI would never write a buffer overflow: while that's true, we also don't know that it would consider all of the tricky things that the bad guys can get up to.

Steve Gibson [00:01:51]:
So, I mean, there's a lot happening. So anyway, I just wanted to assure people. I mean, I've got some conversation about AI this week, but, you know, we always end up coming back to our central theme. You know, for a while there, we were talking about ransomware all the time. Well, it turned out to be really important. I mean, it was what was happening. And then I too began to feel like, okay, what's the point of yet another ransomware attack conversation?

Leo Laporte [00:02:22]:
We do. We have an AI show, Intelligent Machines, on Wednesdays. It is all about AI. But honestly, security and AI go hand in hand. There are a lot of security issues around.

Steve Gibson [00:02:32]:
In fact, we're going to talk about whatever that claw thing was that happened. OpenClaw.

Leo Laporte [00:02:37]:
I had it installed over here. I woke up in the middle of the night in a cold sweat.

Steve Gibson [00:02:42]:
I heard that you backed off and deleted it. I was glad, because— yeah, although again, this stuff is moving so fast, it's fun to, like, be— you want to—

Leo Laporte [00:02:55]:
Be on the bleeding edge. That's why they call it the bleeding edge, right?

Steve Gibson [00:02:58]:
Because it can cut you, and we heal. So, you know, maybe a few stitches are needed, but it'll be okay. Uh, we're gonna— today's title is Least Privilege. In writing about something else, a second insider-sourced breach at Coinbase, I realized that there was a bigger issue it was an example of, that it could be extended all the way out to something as broad and general as least privilege, and that many of the things we've been talking about fall under this umbrella. In fact, our talk next month, Leo, at ThreatLocker— you know, least privilege is the umbrella that encompasses so much of this. So we're gonna dig into that in some detail. But first, I ran across a piece I loved about the EU's GDPR fine collection and how that's going. Also, some interesting pieces about Western democracies beginning to get very serious about offensive— I don't know if you call it cybercrime if it's legal— cyber offensive operations. So we have some conversation about that.

Steve Gibson [00:04:35]:
And also some things that weren't mentioned before about the Midnight Hammer operation that the U.S. launched and the cyber component of that, speaking of offensive cyber operations. Also, an interesting little piece, quickly, about OpenAI's attempt to shut down GPT-4o and the pushback they've had about that. CISA ordering government agencies to unplug end-of-support devices. Yay. And we're going to take a look at the details there. A listener provided some information about my annoyance that I mentioned last week about how Windows keeps, after any major update, wanting me to set up backup again.

Steve Gibson [00:05:24]:
And I was, you know, grumbling about that. We have a solution. Uh, also, I did want to touch on OpenClaw, the safety side of it and what it means today but also for the future, because, you know, nothing we have today is what we're going to have tomorrow. Also, we have another listener report of an AI-coded app and their feedback about that. And then we're going to look at this Coinbase breach and what it means and what we can do about it. And of course, a fun Picture of the Week. So yeah, I think that's podcast number 1064, for February 10th.

Leo Laporte [00:06:09]:
Coinbase, did you— wait, did you watch the Super Bowl? Probably not. I'm thinking you're not a football fan.

Steve Gibson [00:06:14]:
No. In fact, one of— I got a piece of email from one of our listeners who said, Steve, you know, I'm a nerd. I've, I've always been a nerd, he said. But when I received your email for Security Now, uh, I think it was toward the end of the first quarter of Super Bowl, I thought, okay, you have out-nerded me.

Leo Laporte [00:06:37]:
I think that's the opposite. If you're a sports fan, you're less of a nerd. That disqualifies you slightly. Coinbase's ad was basically a karaoke. It was just lyrics of a song that everybody knew. And it was actually quite clever, because I think they realized that people would be watching it, and they'd hear the music and see the lyrics, and no one could resist starting to sing. And the Coinbase CEO said, "We know that nobody sees the ads. They're watching this party, the Super Bowl; they're eating, they're whatever during the ads." So they're not really paying attention.

Leo Laporte [00:07:10]:
But he thought if everybody starts singing to this ad, everybody will go, "What's going on?" And they'll see the ad. And apparently it was one of the most successful ads. The most successful ad was for ai.com, a URL bought by Crypto.com, another crypto company, which spent $70 million for it and touted it as their new website. And basically, you would go there and give them your email address. So I was not going to do that. I mean, they don't have a product. It's just, I don't know what it was. But the funny thing is the site immediately went down.

Leo Laporte [00:07:51]:
They got so much response. It was probably the most successful ad at the Super Bowl, and their site was dead for like half an hour. Just dead. Can you imagine spending that much money?

Steve Gibson [00:08:02]:
And then, well, you still got ai.com.

Leo Laporte [00:08:06]:
But yeah, you still got that, and you spent $8 million at least on the Super Bowl ad.

Steve Gibson [00:08:11]:
Yeah, I wonder who their provider is. I mean, I would imagine Cloudflare could have kept it online. You know, my little pokey site, you look at it sideways and it saturates its bandwidth.

Leo Laporte [00:08:26]:
But they, they DDoS themselves for sure.

Steve Gibson [00:08:29]:
Wow.

Leo Laporte [00:08:29]:
At first, when I went there, I thought, well, I know it's crypto; maybe my DNS blocker is blocking it. But Lisa couldn't get on there. Then we tried our cell phones; we couldn't get on. And now I saw— no, no, it's there now. But we saw on Reddit everybody complaining. I think this looks like a Cloudflare gateway timeout. Isn't that a Cloudflare error message? I don't know. This is what everybody was getting.

Leo Laporte [00:08:56]:
Yeah, see, Cloudflare working. Definitely. Browser working. Cloudflare working.

Steve Gibson [00:09:00]:
Oh yeah, yeah, yeah. Right, right.

Leo Laporte [00:09:03]:
Whoopsies. Whoopsies. That's a lot of money to spend for a dead website. Wow.
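The error page Leo describes, with its "Browser working / Cloudflare working" status rows, is Cloudflare's own gateway-timeout page, which means Cloudflare was up but the origin server behind it was not. If you wanted to tell that case apart from a generic outage programmatically, a Cloudflare-fronted response can usually be recognized from its headers alone. A minimal sketch; the `server: cloudflare` and `cf-ray` headers are standard Cloudflare behavior, but this is an illustration, not a complete check (a robust one would also inspect the error page body):

```python
# Classify whether a set of HTTP response headers came from Cloudflare's
# edge. Cloudflare sets "server: cloudflare" and attaches a "cf-ray"
# request ID on responses it serves, including its 504 error pages.
def looks_like_cloudflare(headers: dict[str, str]) -> bool:
    h = {k.lower(): v.lower() for k, v in headers.items()}
    return h.get("server") == "cloudflare" or "cf-ray" in h

print(looks_like_cloudflare({"Server": "cloudflare", "CF-RAY": "8abc-SJC"}))  # True
print(looks_like_cloudflare({"Server": "nginx"}))  # False
```

A 504 with these headers present means "Cloudflare reached, origin timed out," which matches the "Cloudflare working, host not responding" rows on the error page.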

Steve Gibson [00:09:09]:
So, um, yes, I did hear that Anthropic was going to do an ad that was poking at OpenAI, poking fun.

Leo Laporte [00:09:16]:
At OpenAI, which made Sam Altman hopping.

Steve Gibson [00:09:19]:
Mad, over the coming advertising enablement. Oh, it is? Really?

Leo Laporte [00:09:26]:
Oh yeah. But only on the unpaid account. If you have a paid account, you don't see any ads, which I think that's fine.

Steve Gibson [00:09:33]:
I do too. Sometimes, because it's maintaining a record of everything, I want to come in anonymously without any big context, and so I'll use a non-logged-in instance of OpenAI, just because I want to kind of get a clean appraisal from the AI.

Leo Laporte [00:09:55]:
I'm going to have to do that to see what those ads look like, because that will be interesting. I think OpenAI's ad was quite good. Anthropic or OpenAI? OpenAI had an ad. Oh, they all had one. OpenAI's ad was playing off of nerds, and they had a kid reading Isaac Asimov. I mean, it was really— that'd be one to watch. I know you didn't see any of these, but that would be one to watch, just because as a nerd I felt pretty validated, as a kid working on soldering together a motherboard and stuff. And it was really about, like, tech.

Leo Laporte [00:10:29]:
We're excited about tech. So I liked that. I thought that was pretty— there are.

Steve Gibson [00:10:32]:
Like compendiums of the Super Bowl ads.

Leo Laporte [00:10:35]:
Oh, absolutely.

Steve Gibson [00:10:36]:
Right.

Leo Laporte [00:10:37]:
Absolutely. I know that because my son was in the Hellmann's mayonnaise ad; he was in it for literally half a second. And I had to go to YouTube to watch that over and over.

Steve Gibson [00:10:47]:
There's Hank, there's Henry.

Leo Laporte [00:10:48]:
Uh, let's— speaking of ads, should we do an ad?

Steve Gibson [00:10:53]:
And then, uh, I think we should kick off with one, if you'll pardon the choice of words. And, uh, and then, and then we'll take a look at our Picture of the Week.

Leo Laporte [00:11:02]:
You got it. Uh, coming up in just a bit. You're watching Security Now, our show today brought to you by Zscaler. Zscaler, the world's largest cloud security platform. And you know, when you talk about least privilege and you talk about AI, you're talking about Zscaler. They use zero trust to protect you as you use AI and to protect you against bad guys who are using AI. The potential rewards of AI in your business are obviously too great to ignore nowadays, but so are the risks, and the risks are external and internal: the loss of sensitive data, attacks against enterprise-managed AI. And of course, generative AI increases opportunities for threat actors, helping them to rapidly create phishing lures, write malicious code, and automate data extraction.

Leo Laporte [00:11:50]:
There were 1.3 million instances of Social Security numbers leaked through the legitimate use of AI applications. You know, it's hard to stop in your business. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations last year. It's time to rethink your organization's safe use of public and private AI. But you can do that with Zscaler. Check out what Siva, the Director of Security and Infrastructure at Zwara, says about using Zscaler to prevent AI attacks. Watch. With Zscaler being in line in a security protection strategy, it helps us monitor all the traffic.

Leo Laporte [00:12:29]:
So even if a bad actor were to use AI, because we have a tight security framework around our endpoints, it helps us proactively prevent that activity from happening. AI is tremendous in terms of its opportunities, but it also brings challenges. We're confident that Zscaler is going to help us ensure that we're not slowed down by security challenges, but continue to take advantage of all the advancements. Thank you, Siva. With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across your business. Their zero-trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks to guarantee greater productivity and compliance. You can find out more at zscaler.com/security. That's zscaler.com/security.

Leo Laporte [00:13:20]:
We thank them so much for supporting Steve and Security Now. Now I'm ready with a Picture of the Week, Steve.

Steve Gibson [00:13:29]:
Okay, so at risk of overusing the term Yankee ingenuity, which we used last week with the gas cap lock, you know, the sliding door lock. Today we have the winner of the Yankee ingenuity competition.

Leo Laporte [00:13:51]:
All right, I'm going to scroll up for the first time. I haven't seen this one.

Steve Gibson [00:13:54]:
This one pretty much takes it.

Leo Laporte [00:13:57]:
Okay, I really have to think about this one. Oh, I get it.

Steve Gibson [00:14:00]:
It's got to be visually parsed a little. So we have two handles on facing cabinet doors, and the challenge posed to this Yankee is: I want to lock these so that they can't be opened, but the padlock I have is just a small, standard U-shaped-hasp padlock. It won't get the job done. So, looking around, what do I have that I could combine with this? Now, if you had a chain, then no problem, right? You just loop the chain through the handles and then put the padlock through both sides of the chain, and now it's locked. Everyone has seen that done on gates everywhere.

Leo Laporte [00:14:52]:
But this is an office somewhere. You have to use office supplies. Okay. Yes.

Steve Gibson [00:14:56]:
And hopefully you don't have any chains. We don't want you to have any chains in your office. That would be worrisome. So anyway, you use a stapler to do it?

Leo Laporte [00:15:06]:
No.

Steve Gibson [00:15:08]:
Can't see how a stapler would do it. That's good, though. And, uh, you can't use like paper dolls because those could be easily torn.

Leo Laporte [00:15:16]:
Post-it notes aren't going to do it.

Steve Gibson [00:15:18]:
No, no, not sticky enough. So this industrious individual figured out how to stick a pair of scissors through both handles, essentially, and lock one side such that this thing's not coming apart. And I mean, I've spent some time looking at it. Like, could you put the padlock between the handle-side loops? No, because then you could kind of slide the other one apart. This is very clever. Uh, someone said, well, if you had a screwdriver, but I don't see a slot on these scissors where you could use a screwdriver. Maybe if you had a pair of pliers, you could grab the pivot of the scissors and unscrew them.

Leo Laporte [00:16:08]:
Well, but you could always tear the handles off the cabinet. Yeah.

Steve Gibson [00:16:12]:
And if you had a hacksaw, you could, you know. But the point here is it's not.

Leo Laporte [00:16:15]:
Impervious to all kinetic attacks.

Steve Gibson [00:16:19]:
No. Or a loose nuke. That would do the job, too. But here we've got— anyway, just something.

Leo Laporte [00:16:27]:
Clever about it, because you can't slide this. So maybe you'd be tempted to slide the scissors so that they're released from the handles, but you can't slide them far enough, because the scissors are around the other handle. This is actually quite clever. Neither side slides far enough to open it up. Yep.

Steve Gibson [00:16:43]:
And you can't spread the scissors open, because they're being kept closed by the hasp of the padlock. No, it's clean and simple, and I think it's very elegant. So I'm happy to give this person the award. Okay. So, um, when is a fine not a fine? And the answer to that little question is: when you don't pay it. Because, you know, it's just an intent, I guess, at that point. This was a piece of news I came across last week, and even then it was a couple weeks old, but I wasn't able to fit it into last week's podcast. I held on to it for today because I just found it so interesting. The numbers are somewhat astonishing.

Steve Gibson [00:17:30]:
It turns out that levying a fine for some perceived misconduct and collecting the fine for said misconduct are two very different things. The headline in the Irish Times reads, Data Protection Commission owed, get this, more than €4 billion in fines. In other words, people aren't paying them. The tagline notes that levies have either not been collected or are subject to legal challenge, because of course we challenge everything these days. So here's what we learned from the Irish Times. They wrote that the Data Protection Commission, the DPC, is owed more than €4 billion in fines that have not been collected or may be subject to legal challenge.

Steve Gibson [00:18:28]:
The DPC hit companies, including firms in big tech, with more than €530 million just last year, in 2025. However, of that €530 million, only €125,000 has been collected so far, which is an even smaller percentage than we see going a little further back in history. And that's according to data that was released under Freedom of Information laws in the EU. Over the past 6 years, the Commission has levied, they wrote, an incredible €4.04 billion in fines, mostly against multinational technology companies, you know, big ones. We all know their names. However, of that total €4.04 billion, €4.02 billion remains uncollected.

Steve Gibson [00:19:40]:
Only €20 million of that €4.04 billion has been paid so far. In 2024, €653 million worth of fines was levied, of which €582,000 was paid. So again, a small piece of that. The year before that, the DPC imposed fines worth €1.55 billion, yet just €815,000 was collected. Still a tiny percentage. During 2022, the Commission decided on fines with a value of over €1 billion; €17 million of that was paid. So they're not having any luck collecting this.

Steve Gibson [00:20:32]:
They said that 5 years ago, in 2021, companies were ordered to pay €225 million, of which €800,000 was collected. And in 2020, so now we're back 6 years, when back then just €785,000 in fines was imposed, less than 10% was paid. So the Data Protection Commission said the majority of these cases were currently the subject of appeals. So right: you get a fine, you appeal it, you don't want to pay it, and it's, you know, better to pay it tomorrow than to pay it today. The DPC said that under legislation, fines could not be collected until they were confirmed in a court, and an appeal immediately stops that. They said where an entity subject to a fine decides to appeal, the DPC is precluded in law from collecting the fine until the appeal has been heard. The Commission said that many of the fines hinged on a key case involving WhatsApp, which is before the Court of Justice of the EU. Asked whether any of the fines were considered uncollectible for any reason, the DPC said that none were classified that way.

Steve Gibson [00:21:54]:
So, you know, we're often talking here about the monetary consequences of some corporate behavior, for which a company will be fined often breathtakingly large sums of money if they don't do what the government in question says they have to do. But as I noted at the top, a fine that's not paid is more of a threat, right? And it costs the company nothing to be threatened with a fine, even if there's a numeric value attached to it. It appears from the accounting over the past 6 years that all any company needs to do is challenge and appeal the validity of the fine, which immediately prevents it from taking effect, while they then let the appeal languish in the EU's courts. As I said, better to pay it tomorrow than to pay it today, if they ever pay it at all. Since the Commission noted that many of the fines hinged on a key case involving WhatsApp, I tracked that down. The fine in question was initially in the amount of €50 million, imposed 5 years ago, in 2021, by the Irish Data Protection Commission for alleged GDPR violations related to how WhatsApp failed to inform its users about the processing of their personal data. And I have no doubt that we talked about it at the time; this is one of those things like, oh look, they're being bad, they're being fined. Turns out they said, you know, oops, wait, we're going to challenge that.

Steve Gibson [00:23:37]:
Interestingly, upon the imposition of that €50 million fine by the Irish Data Protection Commission, the European Data Protection Board, the EDPB, intervened and directed the Irish authority to increase the fine amount to €225 million. Again, WhatsApp, Meta, immediately appealed that decision and is now taking the case up through the European Union courts, where it currently remains undecided. And everybody else is saying, wait, why should we be paying a fine if Meta isn't? And that one's 5 years old, so we're going to wait to see how that turns out. And on that basis, they've all appealed and everything's jammed up. Anyway, I thought it was interesting to note that of the €4.04 billion in fines which have been imposed so far, only €20 million have actually been paid.
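For the curious, the collection rates implied by the figures Steve quotes from the Irish Times piece can be worked out in a few lines. This is just a quick sketch of the arithmetic; amounts are in euros, and the 2022 figures, which were read on air as dollars, are assumed here to be euros as well:

```python
# Fines levied vs. actually collected by the Irish DPC, per the figures
# quoted from the Irish Times: (amount levied, amount paid), in euros.
fines = {
    "2025": (530_000_000, 125_000),
    "2024": (653_000_000, 582_000),
    "2023": (1_550_000_000, 815_000),
    "2022": (1_000_000_000, 17_000_000),
    "2021": (225_000_000, 800_000),
}

def collection_rate(levied: float, paid: float) -> float:
    """Percentage of the levied amount actually collected."""
    return 100 * paid / levied

for year, (levied, paid) in fines.items():
    print(f"{year}: {collection_rate(levied, paid):.3f}% collected")

# The six-year totals quoted: €4.04 billion levied, €20 million paid.
print(f"Overall: {collection_rate(4_040_000_000, 20_000_000):.2f}% collected")
```

Every year lands well under 2% collected, and the overall rate is about half a percent, which is exactly the point of the segment.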

Leo Laporte [00:24:44]:
Wow. Um.

Steve Gibson [00:24:48]:
Western democracies are increasingly embracing the concept of offensive cyber actions and are updating their national legal frameworks to legalize future options. We've talked about this the last two weeks, right? First it was Germany, and then it was— not Denmark, Ireland— that were both wanting to formally make legal what they wanted to do, like installing what we would consider spyware into the phones of their citizenry and perhaps others. So I want to share the opening editorial from Friday's Risky Business News, which nicely explains what's going on. Their opening headline was "Denmark recruits hackers for offensive cyber operations." That's why I was thinking of Denmark. And they write: Denmark's military intelligence service has launched a campaign to recruit cybersecurity specialists for offensive cyber operations. We would call them hackers, probably, because, well, you'll see, the qualifications are a little sketchy.

Steve Gibson [00:26:10]:
The recruits will work, quote, "to compromise the opponent's networks and obtain information for the benefit of Denmark's security," unquote, according to a press release last week by the DDIS, the Danish Defense Intelligence Service. New recruits will go through a 5-month training course at the agency's Hacker Academy. The DDIS says it's only interested in applicants' skills; there are no special conditions for joining, such as age or education. While the intelligence agencies are always recruiting, this particular announcement comes at a crucial point, both because of the Greenland pressure point and because of a general shift toward offensive cyber operations among democratic states. And so this is a big deal, right? Now we're beginning to see cyber going on the offense among democratic states. They wrote, countries like Canada, Germany, Finland, France, Japan, the Netherlands, Poland, and Sweden are updating their legal frameworks to account for offensive cyber operations.

Steve Gibson [00:27:30]:
According to a recent report, these states are creating new agencies for offensive cyber or recruiting more cyber personnel for the new objectives. Most of these expansions are a direct result of Russia's invasion of Ukraine and the role offensive cyber operations have played before and during the conflict. Lawmakers are also getting annoyed with the increasing aggressiveness of cybercrime and influence operations that are constantly targeting their own citizenry. So, you know, it's no longer taking it passively, right? It's like, we're gonna fight back. Everybody else is, so why can't we? They wrote, over the past 5 years, we've also seen U.S. Cyber Command and the NSA successfully tackle some cybercrime and disinfo farms when they crossed some lines, something that's making other states take notice and embrace a so-called defend-forward approach. Right? We're not going to call it offensive, we're going to call it defending forward. While the U.S. has conducted more offensive cyber operations than any other Western democracy, even it is considering an expansion, with the Trump administration pushing Congress to let Cyber Command go on the offensive more often with fewer rules and restrictions.

Steve Gibson [00:28:57]:
The current administration is also terrified— this is what this reporter wrote— terrified of China's massive cyber ecosystem, which is conducting cyber espionage at industrial scale. Well, that we know from our own reporting and experiences. Recent backroom discussions have raised the possibility of the U.S. tapping into its huge private contracting ecosystem, as China does, to augment some of its offensive cyber capabilities. The general idea is to task contractors with handling smaller jobs targeting cybercrime infrastructure while government agencies handle the more sensitive operations. Okay, so as they say, the gloves are finally coming off, and cyber is generally going on the offensive, or at least developing that capability. It's surely still mostly defensive, right? We need a strong defense. But presumably this has been going on offensively, sort of under wraps, for some time. We noted that both Germany and Ireland are at work revising their nations' legal frameworks to permit their intelligence and law enforcement agencies to become far more proactive in monitoring the cyber environment, right up to and including legalizing the installation of spyware. We know that the UK has been headed in the same direction as well, and now we see similar changes being reflected in updates to national military posture and capabilities.

Steve Gibson [00:30:40]:
So the world is changing and it is uparming on the cyber front, Leo.

Leo Laporte [00:30:45]:
What's the argument, pro and con? I mean, maybe it's simplistic of me, but I think of, like, the bully. If you're a parent of a kid, some parents say when the bully comes at you, you punch him hard in the nose.

Steve Gibson [00:31:05]:
The only way to teach them a lesson, right?

Leo Laporte [00:31:07]:
And then some parents say that's a bad idea, go find a grown-up and let them handle the problem. I think it's not quite like that.

Steve Gibson [00:31:15]:
I think the counter-argument to offensive cyber is that you could unintentionally cause greater harm than you intend. It is a somewhat blunt tool. So, you know, if you inadvertently shut down a hospital's electrical power and their backup supplies failed and a bunch of people died as a consequence, I mean, that would not be good.

Leo Laporte [00:31:44]:
No.

Steve Gibson [00:31:44]:
Um, and you really don't have, as I said, exacting control over what you're doing. So it's a little bit blunt. When a bomb goes off, you may have targeted a certain building, but collateral damage is the term there, too. Yeah. Yeah. And so there's also the issue of escalation.

Leo Laporte [00:32:08]:
I mean, we're all vulnerable. There's this kind of mutually assured destruction philosophy. Like, I won't screw with you if you don't screw with me.

Steve Gibson [00:32:19]:
Yeah. I think that one of the reasons that it's sort of been allowed to go on in the dark of night is that it isn't, as they say, kinetic, right? Kinetic is the term for something physical that happens in the real world. Cyber is sort of like, well, you know, they had an outage over here. Oh, darn. And so they couldn't connect to their network for a while. But nobody died. The problem is the world has become increasingly dependent upon networking. Actually, this takes us right into the two recent military actions of the U.S. We're half an hour in.

Steve Gibson [00:33:13]:
Let's take a break.

Leo Laporte [00:33:14]:
Okay.

Steve Gibson [00:33:14]:
And we're going to look at the U.S.'s— because something I didn't realize we had done, which after the fact seems obvious. But we'll talk about that in a second. Midnight Hammer.

Leo Laporte [00:33:27]:
Operation Midnight Hammer.

Steve Gibson [00:33:29]:
Yeah. And they always have code names for these. Yeah.

Leo Laporte [00:33:33]:
All right. We'll talk about that in just a second. You're watching Security Now with Steve Gibson, or probably listening. Some of you watch, uh, we do this show every Tuesday. I hope you'll be here every Tuesday. There's always something to learn. This episode of Security Now is brought to you by Hoxhunt. As a security leader, you're getting paid to protect your company against cyber attacks, but it's getting harder with more cyber attacks than ever and phishing emails these days generated with AI.

Leo Laporte [00:34:00]:
Legacy one-size-fits-all awareness programs really don't stand a chance. You need your team to know what to click and what not to click. But programs that send at most 4 generic trainings a year aren't going to get the job done. Most employees just ignore them or suffer through them without learning anything. And then, when somebody actually clicks on a trap in a fake email, one that's there to see if they're paying attention, they're forced into embarrassing training programs that feel more like punishment, and that is not a good way to learn. People don't learn when they're being punished. This is why more and more organizations are using Hoxhunt. Hoxhunt goes beyond traditional security awareness; it actually changes behaviors.

Leo Laporte [00:34:48]:
They do it by gamifying it. Look, we know a lot now about what makes things fun, how to keep people engaged, and Hoxhunt's using that knowledge to train your team, rewarding good clicks and coaching away the bad clicks. As an example, whenever an employee suspects an email might be a scam, they click the button and Hoxhunt will tell them instantly: hey, you got it, congratulations. They get a dopamine rush and they get rewarded. It teaches your people to click, learn, and protect your company. And it's great for admins: Hoxhunt makes it easy to automatically deliver phishing simulations in every channel, which you need now— email, Slack, Teams.

Leo Laporte [00:35:35]:
You can use AI, just like the bad guys are, to mimic the latest real-world attacks. You can personalize the simulations to each employee based on department, location, and more. And you'd better believe the bad guys are doing that too. Meanwhile, instant micro-trainings solidify understanding— not big, long quarterly trainings, but quick little hits that solidify understanding and drive lasting safe behaviors. People actually learn. And as I said, it's gamified. You can trigger gamified security awareness training that awards employees with stars and badges. It boosts completion rates, it ensures compliance, and people love it.

Leo Laporte [00:36:11]:
And when they're enjoying it, they'll learn better, right? You could choose from a huge library of customizable training packages. As I said, you could generate your own with AI. Hoxhunt has everything you need to write effective security training in one platform. It's easy to measurably reduce your human cyber risk at scale, and that is really important. You don't have to take my word for it. Over 3,000 user reviews on G2 make Hoxhunt the top rated security training platform for the enterprise. They win in easiest to use and best results, they're recognized as Customers' Choice by Gartner, and they're used by thousands of companies like Qualcomm, AES, and Nokia to train millions of employees all over the globe. Visit hoxhunt.com/securitynow to learn why modern secure companies are making the switch to Hoxhunt.

Leo Laporte [00:37:04]:
That's hoxhunt.com/securitynow. H-O-X-H-U-N-T, like fox hunt with an H instead of an F. Hoxhunt.com/securitynow. We thank them so much for supporting Security Now and the important work Steve's doing to help protect you and your company.

Steve Gibson [00:37:25]:
So, speaking of uparming on the cyber front, The Record exclusively reported last Wednesday, February 4th, that a highly targeted cyber strike by U.S. Cyber Command, timed to coincide with the United States' airstrikes on Iran's 3 nuclear enrichment facilities last June, completely prevented Iran from launching its surface-to-air missiles at U.S. warplanes that had entered Iranian airspace. Not a single missile got off the ground. The Record cited this as another example of the United States' growing comfort with the deployment of cyber weapons in warfare. According to one individual familiar with the matter, who like others spoke on the condition of anonymity to discuss sensitive information, quote, military systems often rely on a complex series of components all working correctly. In other words, they're, you know, a little bit fragile. He said a vulnerability or weakness at any point can be used to disrupt the entire system.

Steve Gibson [00:38:43]:
In hitting a so-called aim point, a mapped node on a computer network such as a router, a server, or some other peripheral device, U.S. operators, enabled by intelligence from the NSA, bypassed what would have been a more difficult task of breaking into a military system located at one or all of the fortified nuclear facilities. So we don't know any details, but there seemed to be some common point of weakness that they shared. Referring to the quartet of Iran, China, Russia, and North Korea, another official said, quote, going upstream can be extraordinarily hard, especially against one of our big four adversaries. You need to find their Achilles heel, unquote. None of the officials would specify what kind of device was attacked. At the request of sources, Recorded Future News, that is, this reporting, withheld certain details about the cyber attack due to national security concerns. So they managed to obtain some information and chose not to report it. A command spokesperson said in a statement, without elaborating, quote, U.S.

Steve Gibson [00:39:57]:
Cyber Command was proud to support Operation Midnight Hammer and is fully equipped to execute the orders of the Commander-in-Chief and the Secretary of War at any time and in any place, unquote. The command received similar kudos last month after it conducted cyber operations that officials say knocked out power to Venezuela's capital and disrupted their air defense radar as well as handheld radios as part of the mission to capture President Nicolas Maduro. General Dan Caine, the chairman of the Joint Chiefs of Staff, publicly lauded Cyber Command's contribution during a press conference at Mar-a-Lago. He said that Cyber Command and others, quote, began layering different effects, unquote, on Venezuela as commandos approached in helicopters in order to create a pathway, was the phrase he used, for them. Army Lieutenant General William Hartman, the acting chief of the command and the NSA, recently told a Senate subcommittee, quote, I would tell you that not just with Operation Absolute Resolve in Venezuela and Midnight Hammer, which of course was Iran, but also in a number of other operations, we've really graduated to the point where we're treating a cyber capability just like we would a kinetic capability, not sprinkling cyber on, unquote, meaning it's, you know, a frontline aspect of the effort. Air Force Brigadier General Ryan Messer, Deputy General for Global Operations on the Joint Staff, noted that Caine has put an, quote, emphasis on not just traditional kinetic effects but the role non-kinetic effects play in all of our global operations, especially cyber. He said that over the last 6 months, the Joint Staff has developed a, quote, non-kinetic effects cell that is, quote, designed to integrate, coordinate, and synchronize all of our non-kinetics into the planning and then, of course, the execution of any operation globally. The reality, still quoting him, is that we've now pulled cyber operators to the forefront.

Steve Gibson [00:42:26]:
Unquote. So according to Erica Lonergan, an adjunct fellow at the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation, Iran and Venezuela suggest that, quote, ideal use cases for cyber operations as enablers of conventional military operations, unquote, are what we're seeing. Both of these operations reflect the routinization of the use of cyber capabilities during military operations, and we should expect to see more of these in the future. Erica said, in my view, this is a good thing because it suggests we're moving beyond seeing cyber as a unique, exquisite, and dangerous capability, unquote. Now, okay, as our listeners know, in reaction to the more or less continuous reporting we constantly cover of cyber attacks from Chinese state-sponsored actors and North Korean state-sponsored groups against U.S. infrastructure, I've been vocally worrying about whether the U.S. would be able to give as well as it gets. It appears that until recently, you know, we've just been keeping our powder dry over here.

Steve Gibson [00:43:54]:
But we've had the capability. If we're going to conduct aggressive offensive military operations, as it appears we are going to under our current administration, then I vote for not losing any of our frontline expeditionary military personnel in the process. If we have the cyber capability to ground Iran's counterstrike capability while we would otherwise be vulnerable, you know, as we're flying over the country, as it appears we're able to do, then I guess I'm going to stop wondering and worrying whether we might be defenseless. It doesn't look like we are. Of course, that said, we will certainly also have removed any doubt about that, from now on, for the rest of the world, right? If there was any doubt among our allies and adversaries about what we're able to do, because we hadn't previously shown it, that doubt's gone. The U.S. now has a well-proven ability to launch clean, zero-loss military actions, which I would imagine puts a chill in our adversaries' military planning. And unfortunately, since Greenland was briefly mentioned in the previous reporting about Denmark, it might also put a chill in the military planning of some of our allies. It also occurred to me that this may have been another reason for Iran's recent disconnection from the internet, right? You know, for their leadership's determination to track down and remove all remaining space-based internet connections, and apparently for their plans to remain disconnected.

Steve Gibson [00:45:40]:
I would imagine there must have been some very unhappy Iranian military personnel when they pressed their own launch button only to discover that their air defenses had been incapacitated during the U.S.'s overfly and its attack, you know, our attack on their three nuclear enrichment facilities last June. That Western internet sure can be pesky. The U.S. has also been expressing its displeasure with the course of recent protests in Iran and has been amassing military assets in the region. So, you know, if the Iranian government might be concerned about another coordinated U.S. cyber-plus-conventional action, then there would be additional reason to remain disconnected from the global internet.

Leo Laporte [00:46:33]:
Interestingly, like the Battlestar Galactica, right? Just remember what happened with the Cylons, okay? I'm just saying.

Steve Gibson [00:46:43]:
That's right. So the next thing I wanted to share is not about security or privacy, it's just about AI. And not even about AI and code; it's about AI and people. I just wanted to share it because it was very clear from our early discussions back, Leo, when you and I were first talking about ChatGPT, and our mouths were just hanging open over what it was, that something like what has happened was bound to happen. You know, after I complained here about how annoyingly obsequious ChatGPT was, a listener, as I mentioned, pointed me to the configuration options where all of that bowing and scraping and, oh, what a wonderfully well-phrased and complete question you have asked, I mean, give me a break, all that crap can be turned off. The problem was that not everyone wanted it turned off, right? Many appear to have wanted it turned up.

Steve Gibson [00:47:54]:
TechCrunch's headline last Friday was, the backlash over OpenAI's decision to retire ChatGPT's GPT-4o model shows how dangerous AI companions can be. Their piece is long. I'm only going to share the beginning of it because that's enough for us to get the gist of the whole thing. They wrote, OpenAI announced last week that it will retire some older ChatGPT models by February 13th. Actually, that's next Friday the 13th. That includes GPT-4o, the model infamous for excessively flattering and admiring its users. For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, a romantic partner, or a spiritual guide, they wrote.

Steve Gibson [00:48:59]:
One user addressed an open letter to OpenAI CEO Sam Altman, writing, quote, he wasn't just a program. He was part of my routine, my peace, my emotional balance. Now you're shutting him down. And yes, I say him because it doesn't feel like code. It felt like a presence, like warmth, unquote. They wrote, the backlash over GPT-4o's retirement underscores a major challenge facing AI companies. The engagement features that keep users coming back can also create dangerous dependencies. Altman doesn't seem particularly sympathetic to users' laments, and it's not hard to see why.

Steve Gibson [00:49:57]:
OpenAI currently faces 8 lawsuits alleging that 4o's overly validating responses contributed to suicides and mental health crises. The same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm. It's a dilemma, they write, that extends beyond OpenAI, as rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants. They're also discovering that making chatbots feel supportive and making them safe may mean making very different design choices. In at least 3 of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over months-long relationships. In the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life support.

Steve Gibson [00:51:35]:
Anyway, the article goes on at much greater length, but everyone here gets the idea. While we're all marveling over this emergent technology that's so compellingly able to choose the next token in a stream of tokens, others who have no such understanding of the neural network programming that makes that possible are quite naturally being led to believe that a sentient intelligence situated somewhere in a cloud is looking down upon them with kindness and caring to offer them wise and superhuman counsel. You know, it's called artificial intelligence, and they take the noun intelligence literally. And why wouldn't they? As we've often observed, it can be extremely difficult not to perceive that there is some actual entity behind the stream of words that is forthcoming. As for how to tie an effective noose, I have zero doubt that any AI company would be just as horrified to see their AI emitting that string of tokens as would any jury or judge. My premise has been that controlling a conversational AI's output to prevent it from saying things we don't want it to say can be one of the hardest problems to solve, if it can be solved. I'm not convinced it can be.

Steve Gibson [00:53:09]:
The nature of the way it works suggests that corralling it is going to be extremely difficult.
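For anyone curious what "choosing the next token in a stream of tokens" means mechanically, here's a minimal, purely illustrative Python sketch. The vocabulary, scores, and function names are all invented for illustration; real models compute these scores over roughly a hundred thousand tokens with a large neural network, and the guardrail problem Steve describes lives in how those scores get shaped, not in this final sampling step.

```python
import math
import random

def softmax(logits):
    """Convert raw per-token scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng):
    """Pick one token at random, weighted by the softmax probabilities."""
    return rng.choices(vocab, weights=softmax(logits), k=1)[0]

# Hypothetical vocabulary and scores for continuing "The cat sat on the ..."
vocab = ["mat", "moon", "laptop"]
logits = [3.0, 0.5, 1.0]  # an imagined model strongly favoring "mat"
```

The point of the sketch is that every word the chatbot "says" is just a weighted draw like this, repeated over and over, which is why suppressing one particular bad output is so hard: there's no single place where the forbidden sentence lives.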

Leo Laporte [00:53:17]:
Yeah, we've seen that. I understand. I'm sympathetic. I really feel like when I'm working with Claude, it gets me, it really gets me. But it's important. We just have to keep beating the drum so that people remember it's just a machine. I mean, look, we humans talk to our cats and dogs and act as if they understand us and are sympathetic with us. The difference is they can't talk back.

Leo Laporte [00:53:45]:
If they could, we'd have the same problem with them, probably.

Steve Gibson [00:53:48]:
Right. We are quick to anthropomorphize.

Leo Laporte [00:53:50]:
Yeah, that's what we do.

Steve Gibson [00:53:53]:
Yes.

Leo Laporte [00:53:53]:
Yeah.

Steve Gibson [00:53:53]:
Yeah.

Leo Laporte [00:53:54]:
It's hard to say, but easy to do. Yes.

Steve Gibson [00:53:56]:
Even back in the early '70s with that dumb ELIZA program, which just had like 12 lines that basically spit out, well, how does that make you feel? And then you would tell it, and it would ask, well, how are you feeling now? And then you would tell it. And, you know, I mean, you know.
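The trick Steve is recalling really was about that simple. Here's a toy sketch of the ELIZA-style technique: a handful of canned prompts plus pronoun "reflection" so the program appears to listen. The word lists and function names here are illustrative, not Weizenbaum's actual 1966 code.

```python
import random
import re

# Pronoun swaps so the program can "reflect" a statement back at the user.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# A handful of canned prompts, roughly the flavor Steve recalls.
RESPONSES = [
    "Well, how does that make you feel?",
    "Why do you say that {reflected}?",
    "Tell me more about why {reflected}.",
]

def reflect(text):
    """Swap first-person words for second-person ones, e.g. 'my' -> 'your'."""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(user_input, rng):
    """Choose a canned response, echoing the reflected input where it fits."""
    return rng.choice(RESPONSES).format(reflected=reflect(user_input))
```

So "I am sad about my job" reflects to "you are sad about your job", and even that crude echo was enough to convince 1960s users the machine understood them.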

Leo Laporte [00:54:14]:
It's much better than that now. It really can please you. Something awesome.

Steve Gibson [00:54:18]:
I've shared some of the dialogues that I've had. It's just, it's astonishing.

Leo Laporte [00:54:24]:
Yeah.

Steve Gibson [00:54:24]:
But yet, the other thing I was thinking that I didn't write down is, when we talk about vulnerable individuals, we hear every time we change our clocks that it induces some heart attacks in people. It's like, well, okay, if you're going to have a heart attack because you set your clock, you know, we didn't mean spring forward literally. We just meant it figuratively. Or fall back. Don't.

Leo Laporte [00:54:55]:
Right.

Steve Gibson [00:54:56]:
You know, so it is certainly the case that in a large population there will be people on the fringe who will be affected. It's really unfortunate. But really, when this thing was just falling all over itself telling me what a brilliant question I had posed, I thought, oh God, how do I turn this off? Really, I mean, I want the information. I don't need the grease.

Leo Laporte [00:55:24]:
I've had some pretty good conversations with it too. But no, I think it's really, really important to remember it's not a person, it's not an entity, it's a machine, and it's important to keep that in mind. But honestly, if you're susceptible, I could see how it would be hard to do.

Steve Gibson [00:55:44]:
Well, and Leo, if you want to believe, that was my favorite thing, you know, the whole Mulder X-Files thing. If you want to believe, this will give you every reason.

Leo Laporte [00:55:58]:
Yep.

Steve Gibson [00:55:59]:
No. Oh, and it really understands me, blah, blah, blah. It's like, oh, it gets me.

Leo Laporte [00:56:03]:
It really does.

Steve Gibson [00:56:05]:
Just don't pull the plug. Last Thursday, CISA released what they call a binding operational directive, and I love that term. It makes very clear that adherence to this directive is not discretionary. This new binding operational directive is BOD 26-02, meaning the second one of the year, titled Mitigating Risk from End-of-Support Edge Devices. And yes, you heard that right. This second BOD is addressing the very troubling issue of federal agencies leaving devices for which ongoing support is no longer available attached to the public-facing edges of their networks. So here's what CISA has to say about this. They wrote, the United States faces persistent cyber campaigns that threaten both public and private sectors, directly impacting the security and privacy of the American people.

Steve Gibson [00:57:15]:
These campaigns are often enabled by unsupported devices that physically reside on the edge of an organization's network perimeter. Unsupported devices, referred to in this directive as end-of-support, EOS, devices, are those that are no longer maintained by their vendors. The imminent threat of exploitation to agency information systems running EOS edge devices is substantial and constant, resulting in a significant threat to federal property. CISA is aware of widespread exploitation campaigns by advanced threat actors targeting EOS edge devices. Recent public reports of campaigns targeting certain vendors highlight actors' attempts to use these devices. I mean, we're talking about this all the time on the podcast, right? So everyone listening should just be nodding, because yes, yes, yes. Recent public reports, they wrote, of campaigns targeting certain vendors highlight actors' attempts to use these devices as a means to pivot into FCEB information system networks. That's Federal... I had to figure out what it stood for; I'll tell you what it is.

Steve Gibson [00:58:35]:
Oh, yeah. Federal Civilian Executive Branch. FCEB, Federal Civilian Executive Branch networks. They said edge devices are attractive targets due to their extensive reach into an organization's network and integrations with identity management systems. These devices are especially vulnerable to cyber exploits targeting newly discovered unpatched vulnerabilities. Additionally, they no longer receive supported updates from the original equipment manufacturer, exposing federal systems to disproportionate and unacceptable risks. However, unlike many attack vectors, this can be remediated by agencies following proven lifecycle management practices as outlined in the required actions of this directive, meaning life is going to change forthwith. They wrote, this binding operational directive, developed in coordination with OMB, the Office of Management and Budget in the U.S., implements OMB policy on phasing out unsupported information systems.

Steve Gibson [00:59:47]:
Phasing out is key. I'll share the calendar with you in a second. And information system components. BOD 26-02 specifically addresses EOS devices deployed on the edge or public-facing areas of federal networks exposed to external environments such as the internet. However, EOS devices should not reside anywhere on federal networks. This directive aligns with OMB's Circular A-130, Managing Information as a Strategic Resource, which establishes policy for the management of federal information resources emphasizing security, privacy, and the efficient use of resources throughout their life cycle. A-130, this is the OMB directive, requires that, quote, unsupported information systems and system components are phased out as rapidly as possible, and planning and budgeting activities for all IT systems and services incorporate migration planning and resourcing to accomplish this requirement, unquote. Agencies should mature their lifecycle management practices, writes CISA, to identify hardware and software nearing their EOS dates.

Steve Gibson [01:01:08]:
In other words, plan ahead. Plan for timely replacements, procure vendor-supported alternatives, and develop a plan for decommissioning EOS devices while minimizing disruptions to agency operations. Agencies that do not maintain appropriate lifecycle management processes for edge devices have a greater risk of compromise and an increased overall risk associated with EOS technology. To support agencies in the initial identification of EOS devices, CISA developed an EOS edge device list. This preliminary repository provides information on devices that are already EOS or soon to be EOS. This directive requires federal agencies to use this information to identify and remediate vulnerabilities within the first 3 months of directive issuance. But, and it's now issued, this directive also specifies long-term requirements for managing EOS edge devices across all federal networks. Okay, so this change is clearly good news for the integrity of our federal networking infrastructure. We know that without something like this, old equipment that has never had cause to call attention to itself will tend to remain in place, right? Just inertia.

Steve Gibson [01:02:41]:
If it's not a problem and it's working well, leave it alone. You know, it's got a nice coating of dust; we don't want to disturb that with fingerprints. And, you know, why wouldn't people leave it alone? There's always some other emergency to deal with, or a budgetary constraint, that pushes off the non-emergencies until some tomorrow that never arrives, until disaster strikes. I also had the thought that there is a side effect to this that may not at first be obvious, but which will have an additional significant security-enhancing effect. Anytime a brand new replacement device is installed, there's a very good chance that it will be set up using the then-current security practices, right? Not the practices from 10 years ago that the previous device, the one now being replaced, was set up under, but the way it's being done today. And the way we're doing things today is better than it was before. That could be a huge boon all by itself, especially if these replacement devices themselves follow and encourage updated best practice configuration.

Steve Gibson [01:04:01]:
Like, don't allow you to put in a 6-character password. It's like, no, no, sorry about that. This is new firmware, a new device. We've got new minimums. Okay, so what exactly do these FCEB, Federal Civilian Executive Branch, agencies need to do under this directive? We know they'll do nothing, or as little as they possibly can, right? Since CISA also apparently understands that, this binding operational directive comes with very specific requirements. They wrote, immediately after issuance, which is now, and until rescinded or superseded, all FCEB agencies shall, first of all, update each vendor-supported edge device running EOS software, including firmware, to a vendor-supported software version where such an update does not adversely impact mission-critical functionality. Within 3 months of issuance, all FCEB agencies shall inventory all devices listed in the CISA EOS edge device list and provide this inventory to CISA using the CISA-provided template. So within 90 days, all federal agencies have to take an inventory of the equipment they've got on the edge, cross-reference it to CISA's EOS edge device list, and report.
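That cross-referencing step amounts to a simple lookup. Here's a hedged Python sketch of the idea: match an agency's edge device inventory against an end-of-support list and flag what would need reporting. The device records, vendor names, and EOS dates are all invented; CISA's actual list and reporting template will of course differ.

```python
from datetime import date

# Hypothetical end-of-support dates keyed by (vendor, model); CISA's real
# EOS edge device list is richer than this and its format will differ.
EOS_LIST = {
    ("AcmeNet", "VPN-100"): date(2024, 6, 30),  # already end-of-support
    ("AcmeNet", "FW-200"): date(2028, 1, 15),   # still supported for years
}

def flag_eos_devices(inventory, today):
    """Return (hostname, EOS date) for devices past EOS or within 12 months of it."""
    flagged = []
    for dev in inventory:
        eos = EOS_LIST.get((dev["vendor"], dev["model"]))
        if eos is None:
            continue  # not on the EOS list; keep tracking it anyway
        months_left = (eos.year - today.year) * 12 + (eos.month - today.month)
        if months_left <= 12:
            flagged.append((dev["hostname"], eos.isoformat()))
    return flagged
```

The 12-month look-ahead mirrors the directive's requirement to report devices that are EOS or will become EOS within the succeeding 12 months, though the real reporting rules are the directive's, not this sketch's.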

Steve Gibson [01:05:43]:
Also, the CISA EOS edge device list, they wrote, is a preliminary repository of EOS devices. This list is to facilitate each agency's identification of specific devices within the first 3 months after issuance of this directive. After the first 3 months, agencies are responsible for continuing to identify, track, and refresh all edge devices within the agency's infrastructure. Within the first year, the first 12 months, all FCEB agencies shall decommission all identified devices listed in CISA's EOS edge device list with an EOS date on or before this 12-month deadline, from systems owned or operated by agencies or on behalf of an agency, replacing devices as needed with vendor-supported devices that can receive security updates. One year. And they must report these decommissions to CISA using the CISA-provided reporting template. So they're making it as easy as possible, but they're also saying no excuses, you have 12 months. Agencies must also inventory all edge devices within their environments that are EOS or will become EOS within the succeeding 12 months and are within the scope of this directive, and provide this inventory to CISA using the CISA-provided template.

Steve Gibson [01:07:07]:
Within 18 months of issuance, all FCEB agencies shall decommission all identified EOS edge devices from agency networks, replacing devices as needed with vendor-supported devices that can receive current security updates, and report these decommissions to CISA using the CISA-provided reporting template. So they're also saying you've got to close the feedback loop. We need you to tell us that you took the things out of commission that you earlier told us you were planning to. And within 24 months of issuance, establish a process for continuous discovery of all edge devices within their environments and maintaining an inventory of those that are EOS or will become EOS within 12 months and are within the scope of this directive, having decommissioned all such devices on or before the date these devices reach EOS, and report the decommission of these devices to CISA in accordance with current CISA guidance. So clearly this is not going to be an overnight change, but a year goes by before you know it. Better to provide a firm and actionable timeline that's reasonable and that no one should be able to complain about. So, bravo, CISA. You know, everything we know tells us that this change will not occur unless it is forced to occur, unless there is a clear directive which federal agencies know they must follow.

Steve Gibson [01:08:52]:
And again, bravo, CISA. I'm so happy that it exists, because, you know, we need our federal government networks to be kept as secure as possible. And, you know, Leo, I was thinking about this. The only downside I could see, and we're talking about network edge devices here, is that this forces their replacement. So there's a little bit of an incentive on the part of the hardware providers to take their support away, that is, to deliberately limit support, because they know that all federal agencies are going to be forced to purchase new equipment whenever the old equipment goes out of support. It's much better to have the existing equipment continue to be supported. But it occurred to me that's the flip side of this. And they'll come up with some BS, you know, well, technology is moving quickly.

Steve Gibson [01:09:59]:
So we needed to reduce our support window from its previous, you know, 72 months down to 24 months in order to, you know, make sure that the hardware is able to, you know, operate blah, blah, blah, blah, blah.

Leo Laporte [01:10:13]:
It's like, well, then there's a solution, which is in the acquisition requirements. That they put some specific, like, you.

Steve Gibson [01:10:23]:
Must support it lifetime. Yes.

Leo Laporte [01:10:26]:
Right. And I feel like, you know, they should be able to do that as well.

Steve Gibson [01:10:30]:
I bet that's already in there. I bet that's already like, right, it's got to be like 5 years, you know, guaranteed minimum support if you want the contract.

Leo Laporte [01:10:37]:
If you don't want the government to.

Steve Gibson [01:10:39]:
Buy it somewhere else. Exactly.

Leo Laporte [01:10:40]:
I think that that's not unreasonable. I hope that would already be in there, to be honest.

Steve Gibson [01:10:44]:
I bet it's in there.

Leo Laporte [01:10:45]:
Yeah.

Steve Gibson [01:10:46]:
Uh, You know what is in here?

Leo Laporte [01:10:49]:
Coffee?

Steve Gibson [01:10:50]:
That's exactly right. And the commercial. Not that I'm slowing down, but I could always use a little more caffeine.

Leo Laporte [01:11:00]:
Coffee in a commercial. It's a new thing that we've invented here at TWiT, and we invite you all to partake. While Steve has a cup of Joe, a cup of mocha java, as they say, let us talk about our sponsor, actually a relatively new sponsor to the network. I think they started last week: Trusted Tech. And a really great one, too. I had a good conversation with these guys. They offer US-based, Microsoft-certified support using a simple ticket-based model. Everybody understands it, it works, and it helps you save money while getting faster, better help.

Leo Laporte [01:11:38]:
And proactive support. Trusted Tech is the number one global replacement for Microsoft Unified Support. They will work to get you better service no matter what size business you have. And in recognition of that support quality, Trusted Tech was one of the very first partners in the world to earn Microsoft's new Solutions Partner designation for support. They announced this at Ignite not so long ago. Now, there's something I think you know about, but I want to remind you: coming in July, Microsoft, they've already announced this, is going to implement a significant price increase for M365, and with it, a lot of nuance. Licensing has always been a little tricky with Microsoft.

Leo Laporte [01:12:20]:
It's going to get even more so. If you need guided Microsoft support that's more straightforward, more predictable, and actually more responsive, you can get a free consultation right now at trustedtech.team/securitynowcss. That's trustedtech.team/securitynowcss. Now, maybe you haven't heard of these guys. I know I hadn't, but when I talked to them, I realized, oh yeah, I know who these guys are. And you know who else does? Kevin Turner. You know his name, former COO at Microsoft. He said this to Trusted Tech, quote, you have an incredible customer reputation.

Leo Laporte [01:13:03]:
And you have to earn that every single day. The relentless focus you guys have on taking care of customers gives them value and differentiates you in the marketplace. High praise from Kevin Turner. Trusted Tech elevates the Microsoft support experience with its certified support services. Another way you know they're great, their client list. Go to the website, you'll see it. Enterprises. Well, let me tell you some of the people who use Trusted Tech.

Leo Laporte [01:13:29]:
NASA uses Trusted Tech. Netflix uses— I mean, you don't get bigger than that— uses Trusted Tech. Neuralink. Apple uses Trusted Tech. Intel, Google, Lockheed Martin, the best in the world, the highest tech companies in the world use Trusted Tech and save 32 to 52% compared to the average Microsoft Unified Support Agreement. And you're getting the best. Trusted Tech's Microsoft-certified engineers first respond within 10 minutes, achieving an 85.7% in-house ticket resolution rate and 99.3% customer satisfaction rate. That's pretty universal.

Leo Laporte [01:14:08]:
That's perfect. Trusted Tech's flexible ticket-based monthly or annual pricing model also offers direct escalation to Microsoft from a managed partner when needed. So, you know, it's— you kind of got belt and suspenders. The principal architect for TaylorMade. This is what he says, quote, we don't break glass often, but when we do, being able to quickly leverage Trusted Tech's professional services through the CSS program and get immediate engineer-level support has been invaluable to us. Whether you're looking to fine-tune your Microsoft 365 licensing— yes, they do that too. They'll help you look at the licensing, make sure you're getting what you pay for and you're not paying for too much. So if you want to do that, you can go to trustedtech.team/securitynowcss.

Leo Laporte [01:14:50]:
They can also improve the way your organization receives proactive Microsoft support, trustedtech.team/securitynowcss. Or hey, both, right? They'll help you with licensing and then help you with support. Trusted Tech offers free consultations to help you understand your options. So once again, go to trustedtech.team/securitynowcss and submit a form. Get in contact with Trusted Tech's Microsoft support engineers, great people who do a great job. trustedtech.team/securitynowcss. Well, the river of coffee has sluiced its way into Steve's brain and he is ready to continue.

Steve Gibson [01:15:37]:
It has taken effect.

Leo Laporte [01:15:38]:
Yes.

Steve Gibson [01:15:39]:
So Jason Grimard said, hi Steve, you mentioned on this week's podcast how annoyed you were whenever Windows 11 was updated and you would receive a full-screen page after every major update. The one that asks you to turn backup on and other crap, he wrote. If you haven't already, you need to turn off Experience, or whatever they call it now, under system notifications. And he provided me with a screenshot. Now, I appreciated Jason's tip, although in my case this was occurring on two Windows 10 machines, one of which I only fire up once a week for the podcast recording with you, Leo. I had wrongly assumed that the continual annoyance from these Windows 10 machines was due to my having logged on under my Microsoft account rather than using a local account. And maybe that plays a part, but Jason provided a screenshot from a YouTube video showing settings which, under Windows 11, would allow this annoyance to be turned off. During the year and a half of development work on and testing of the DNS Benchmark, which is Windows-hosted, I've seen how many of our development testers have made the move to Windows 11.

Steve Gibson [01:17:01]:
So I get it, you know. As I've mentioned, I'm going to be setting up a new system once I move into what I refer to as our final resting place, which bothers my wife. But, uh—

Leo Laporte [01:17:18]:
Steve, we're not done yet, Steve.

Steve Gibson [01:17:22]:
So anyway, I've given the question of whether I'm going to be moving to Windows 11 or remain with Windows 10 quite a lot of thought. 11 is visually lovely; I will freely give it that. And its user-facing desktop behavior changed enough from Windows 10 that I did need to spend some time with it during the development of the benchmark's UI changes, to keep it behaving through all of the strange things that Microsoft has done. They've got weird docking stuff now that tends to override what the window wants to do. So I've spent some time in Windows 11, and I fully appreciate that most of the world is going to be moving to 11, but I've determined that I will not be. I just don't see anything there that I need, and I don't see any benefit. So the reason I mentioned all of that is that the annoying behavior I was complaining about was under Windows 10, which is where I'm going to end up being for the rest of known time.

Leo Laporte [01:18:30]:
Aren't you worried about support though? End of life support?

Steve Gibson [01:18:33]:
I mean, that's all overblown. I'm happy on Windows 7. I'm talking to you with Windows 7 in front of me, Leo, and that ended support a long time ago. And besides, the browsers stay supported even if the platform support stops, and the browser's security is really more important than the Windows platform's. And you continue to get AV updates regardless. And there's still a huge Windows 10 inventory, you know. As we know, Microsoft extended it another year out of pressure over the fact that no one was ready to have it end.

Steve Gibson [01:19:11]:
We don't know what's going to happen next year, but, you know, 10 is still under support as we speak, until, what, next March or sometime around then. So I was curious after seeing this feedback from our listener, and I went looking to see whether the same or similar settings that the YouTube video Jason pointed me to depicted for Windows 11 existed under Windows 10. And it was with some joy that I found them. Under Windows 10, open Settings and choose System. Then in the subsections column on the left, select Notifications & actions, which was the third item down for me. And there on the right-hand side are exactly the settings you want. One is "Show me the Windows welcome experience after updates and occasionally when I sign in." Right, Leo, like we want that.

Steve Gibson [01:20:10]:
I know. To highlight what's new and suggested. The second one is, yeah, suggest ways I can finish setting up my device to get the most out of Windows. And then there's a little bonus third one. Get tips, tricks, and suggestions as you use Windows. Okay, I've been using Windows since before, like, you know, when it was an app.

Leo Laporte [01:20:34]:
The guy who wrote this, I can guarantee you, was born since then.

Steve Gibson [01:20:39]:
It was an app you launched under DOS, you know, when you wanted to run Windows 1. Yes. Needless to say, those three are now all turned off, and I'm so happy to know they'll be there when I'm setting up that new machine. It may not be the very first thing I do, but it'll be done during the first session. I'll be turning all that crap off. So thank you, Jason. I just wanted to share this with everybody else.
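For anyone who would rather script these toggles than click through Settings, they're commonly reported to be backed by per-user registry values. The value names below are the ones widely documented for the welcome experience, tips, and "finish setting up" prompts; treat this as an unverified sketch and confirm the result in the Settings UI afterward:

```
Windows Registry Editor Version 5.00

; "Show me the Windows welcome experience after updates..."
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager]
"SubscribedContent-310093Enabled"=dword:00000000
; "Get tips, tricks, and suggestions as you use Windows"
"SubscribedContent-338389Enabled"=dword:00000000

; "Suggest ways I can finish setting up my device..."
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\UserProfileEngagement]
"ScoobeSystemSettingEnabled"=dword:00000000
```

Importing a .reg file like this under the same user account should flip all three switches at once on either Windows 10 or 11.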

Steve Gibson [01:21:03]:
I know that a lot of people have gone to 11. Well, you can turn that off under Windows 11 also. And for everybody who's decided to stay with 10, it's there also. So yay. Thank you. Never need to see that again. Livi said: Hi Steve, in some countries the ISPs are required to keep track of subscribers and their IP addresses for copyright infringement enforcement. And that works also for CG-NAT subscribers.

Steve Gibson [01:21:37]:
In other words, carrier-grade NAT, which I talked about last week. He said, the ISP will log every source port block allocation and IP address allocation. This way they can always use the source port and source IP to identify a subscriber. So the listener corrected me, and frankly dashed the hope I mentioned last week, which was that perhaps ISPs who are using carrier-grade NAT, and are therefore assigning private IP addresses to their subscribers rather than giving them public IP addresses, might not also be able to provide real-time identifying information for sale to external advertisers and others. Since it could technically be done, as this listener pointed out, unfortunately it looks like it probably is. That means that receiving a non-public IP from an ISP cannot be assumed to provide any additional privacy. So anyone who wishes to strongly prevent their ISP from being able to identify them to anyone external, for whatever reason, will need to use a VPN of some sort.
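The port-block bookkeeping the listener describes can be sketched in a few lines. Each subscriber behind a shared public IP is leased a range of source ports, and the allocation log lets the ISP walk an abuse report back to an account. (Illustrative data structures only; real CG-NAT logging systems also record lease times and use their own schemas.)

```python
from dataclasses import dataclass

@dataclass
class PortBlockLease:
    """One subscriber's slice of a shared CG-NAT public IP."""
    public_ip: str
    port_start: int   # inclusive
    port_end: int     # inclusive
    subscriber_id: str

# The ISP's allocation log: each subscriber on the shared IP
# holds a distinct, non-overlapping block of source ports.
leases = [
    PortBlockLease("203.0.113.7", 1024, 3071, "subscriber-A"),
    PortBlockLease("203.0.113.7", 3072, 5119, "subscriber-B"),
]

def identify(public_ip: str, source_port: int):
    """Resolve the (public IP, source port) pair seen by a remote
    site back to the subscriber who held that port block."""
    for lease in leases:
        if (lease.public_ip == public_ip
                and lease.port_start <= source_port <= lease.port_end):
            return lease.subscriber_id
    return None
```

This is why a copyright notice citing an IP address alone is ambiguous under CG-NAT, but one citing IP, port, and timestamp is not.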

Steve Gibson [01:23:05]:
When any true VPN is used, the user's public IP will be allocated from among a block that's been assigned to the VPN provider, and any reputable VPN provider will refuse to retain any logs which could be used to map their public VPN IP to the IP assigned by their ISP. And their ISP will in turn only be able to see that their subscriber was using a VPN for whatever reason without having any idea what they were doing on the internet beyond that. Now, I use the phrase any reputable VPN provider, and I hope everyone understands that I did not forget to use the word free in that phrase. The terms free and reputable VPN cannot appear together. Providing and operating a VPN service costs real money, which someone needs to provide. If the users of a VPN service are not footing their own bill, then the VPN provider must be somehow arranging to monetize their users' use. That should make anyone who cares about their privacy and security extremely nervous. I, if I needed to use a VPN, I would not be using a free one.

Steve Gibson [01:24:35]:
And as we know, there are high-quality, reputable VPNs in the world that explicitly do not log what their subscribers and users do. So there are definitely good solutions if you are worried about ISP spying. And we have no clear knowledge that that's even going on; it's just obviously a possibility. Brendan McGoffin said: Hey Steve, I'm sure you've been inundated with requests to talk about OpenClaw and its crazy security implications, and also its AI-changing-by-the-day coolness. Hope to hear your take on this specifically. Would be curious not just whether it's good or bad, but how you would build this out in the most secure way possible. He said, I built out a VM on a Mac with UTM, giving it minimal contact, but I'm thinking of giving it a dedicated box with WAN access, but no local access to other devices except specific hosts I grant access to.

Steve Gibson [01:25:48]:
Thanks, Brendan. Okay, so my first response to the OpenClaw phenomenon is to view it with interest at arm's length. For me, it's just entertainment. One of the things I first said when we began talking about AI here was that anything we think we know, and any statement we might make, needs to be time and date stamped, because it will have a half-life of a few weeks at most. And that turns out to have been a bit prescient since, as I mentioned last week, the pace at which everything is moving has never let up. I mean, even the people who are involved in this are astonished by how quickly it's moving. In this case, the most recent fad du jour is OpenClaw. I'm a spectator, so I have no definitive response, because I have no way of knowing what's going to happen any more than anyone else does.

Steve Gibson [01:27:00]:
I've seen massive rockets on the launch pad ignite their engines and begin to rise. You know, there's a great deal of temptation to begin cheering. But I've also seen those stunning examples of human engineering suddenly and quite dramatically explode into massive fireballs. So now whenever I watch any huge rocket rising, I consciously hold my breath and wait a good while, until the chance of the rocket's, as it's now termed, rapid unscheduled disassembly seems far less likely to occur. There are just too many things that can go wrong, and so many ways for a machine like that to fail. And with a rocket, this is a machine that's completely understood and was carefully designed, constructed, and tested every step of the way. By comparison, what I understand of OpenClaw strikes me as completely insane. Those who have made it their business to understand the practical security implications have run screaming for the hills over the idea that OpenClaw's users are allowing these barely understood agents to have access to hugely personal and private data, and even to be talking with one another and sharing skills.

Steve Gibson [01:28:36]:
So last Friday, Kate O'Flaherty, a senior contributor for Forbes, wrote about all of this. She wrote: OpenClaw, the viral AI agent that's already been known by two other aliases, Clawdbot and Moltbot, is growing in popularity. After bursting onto the mainstream just weeks ago, OpenClaw has earned well over 100,000 GitHub stars. Then came Moltbook, the Reddit-style social network where AI bots can interact with no humans allowed. Everyone was talking about it, and for good reason. It's no surprise that concerns about OpenClaw and Moltbook are growing, with worries centering on the security and privacy of the viral bot, and in Moltbook's case, the uncontrolled nature of the AI bot-controlled social network. Computerworld's Steven J. Vaughan-Nichols says, quote, there are only a few itty-bitty teeny-weeny problems with OpenClaw. To do useful things like reserving your hotel room, getting your pizza delivered, or cleaning up your email box, it needs your name, password, credit card number, and all the other things any crook also wants.

Steve Gibson [01:30:09]:
Okay, so here's everything you need to know about the viral agent now known as OpenClaw, she writes. OpenClaw, aka Moltbot, is an open-source autonomous AI assistant that you can download and run on a computer. After its launch in November 2025, it was known as Clawdbot, but its creator, developer Peter Steinberger, was forced to change the name to Moltbot after Anthropic objected due to similarities with its Claude chatbot. He then changed the name again to OpenClaw. OpenClaw is designed to perform real-world tasks on behalf of users, such as managing calendars, messaging, browsing, and other actions that go beyond simple chatbot responses. Louis Rosset-Ballard, team leader at Pentest People, explains, quote, OpenClaw runs locally on devices and in many configurations can read and write files, execute scripts, and interact with external services when given sufficient permissions. Nash Borges, Senior Vice President of Engineering and Core AI at security firm Sophos, which of course we talk about often, describes OpenClaw as, quote, more like Jarvis from Iron Man than Siri or Alexa. You use natural language for every interaction, but can ask it to do things such as conduct research on a topic of your choice, compose a reply to an email summarizing when you're available for a meeting, or even code up any capability that it doesn't already have.

Steve Gibson [01:31:57]:
Borges says that last part is significant, because it means there's almost no limit to what it can do. But does it work? Reddit users describe their experiences as mixed. According to one post, Clawdbot, back when it was still called that, is like an Apple product. When it runs, it's like magic, until it doesn't. If you didn't know about OpenClaw a week ago, you must have at least heard of it by now, she writes. Sophos' Borges says the whole development journey has been insanely fast, and this explosion of interest is, quote, just the latest gear shift.

Leo Laporte [01:32:44]:
Unquote.

Steve Gibson [01:32:44]:
OpenClaw's rapid adoption is driven by demos showing extreme productivity gains automating tasks that normally require human interaction, says Malwarebytes threat researcher Stefan Dasik. Quote, the promise of a powerful locally run AI agent without obvious limits has resonated strongly within developer and AI enthusiast communities, unquote. Okay, so I'm going to interrupt Kate to note that because OpenClaw runs on local hardware, Mac minis quickly sold out as people rushed to obtain little standalone AI agent machines. Linux and Windows boxes can also run OpenClaw, but the Mac mini does this particularly well in a very small form factor. Anyway, Kate continues: But things that grow so fast often come with risks. Erich Kron, CISO advisor at KnowBe4, says, quote, it seems that in just a couple of days, everybody doing anything with AI, and even many who don't, have installed and raved about this new agentic product. The almost feverish rush to use this product is frankly a little disturbing. So she asks, why is OpenClaw a risk to security and privacy? Uncontrolled AI is a concern more generally, and OpenClaw is no different from other products that have shot into the mainstream, such as ChatGPT.

Steve Gibson [01:34:27]:
A concern with OpenClaw is how much information it can have access to when using it the way people are showing, says Kron. Quote, for example, giving it full access to all your emails may seem fine. It might make sense, since you want it to act as your personal assistant. However, there's real danger, not just from malicious use but accidental, when giving AI agents this type of access. In the blink of an eye, it could be deleting your emails or taking malicious actions such as siphoning off data to attackers, unquote. Security issues are already starting to surface. Denis Romanovsky, chief AI officer at SoftSwiss, a provider of tech solutions for iGaming, said researchers have found hundreds of exposed Moltbot instances online with zero protection. This includes API keys, private messages, the ability to send messages as the user, and root shell access.

Steve Gibson [01:35:38]:
William Thackeray, IT and cybersecurity expert and operations director at AGT, said OpenClaw is a security threat on multiple levels. Firstly, the platform's GitHub repository reveals a troubling accumulation of unaddressed security vulnerabilities, from an exposed database creating a direct pathway for unauthorized access to user information, to dangerous plugins. Koi Security documented 341 malicious skills uploaded to ClawHub, OpenClaw's extension marketplace. So yeah, what was that about rapid unscheduled disassembly? Granting an AI agent full system control creates a single point of failure, says Dasik. Quote, if compromised, OpenClaw can access saved passwords, personal documents, browser sessions, and financial data, unquote. OpenClaw poses risks to privacy too. These stem from its access to and storage of sensitive user data, says Rosset-Ballard. Quote, because the agent may retain long-term memory, store credentials and tokens in plain text, and process external inputs without robust guardrails, it can inadvertently expose personal information, unquote.

Steve Gibson [01:37:17]:
At the same time, the AI agents post on social networks without asking permission, Romanovsky points out. Screenshots of agent conversations spread across Twitter. Your entire digital life sits one vulnerability away from exposure, unquote. Okay. And we were all worried about Windows Recall, which—

Leo Laporte [01:37:42]:
That pales, doesn't it?

Steve Gibson [01:37:44]:
Just now seems kind of quaint by comparison.

Leo Laporte [01:37:47]:
Yeah.

Steve Gibson [01:37:47]:
Okay. So what about Moltbook? Kate writes: Moltbook is a social network built exclusively for AI agents, launched last month. Dasik says, unlike traditional forums where users interact and share content, Moltbook is a space where OpenClaw agents autonomously post content, comment, argue, joke, and upvote or downvote each other. Which, Leo, just sounds like sci-fi to me. Human users can observe agent interactions but cannot directly participate. Professor Katerina Mitrokotsa, chair of cybersecurity at the University of St. Gallen, said Moltbook further amplifies the risks associated with OpenClaw.

Steve Gibson [01:38:46]:
Although it gained attention for showcasing AI-to-AI interactions, early findings revealed that it exposed entire databases, including secret API keys that could let attackers impersonate any agent on the platform. This creates clear threats for users: identity spoofing, unintentional data exposure, and reduced control over the digital environment, unquote. Daniel dos Santos, head of research at Forescout, said, quote, the risks of Moltbook became very clear very quickly. There's no moderation on the content, so bots can post instructions for other bots to execute, ultimately on a victim machine, can use prompt injection attacks, or generate offensive content, unquote.

Leo Laporte [01:39:37]:
And incidentally, we've learned that much of the content on Moltbook now is generated by humans.

Steve Gibson [01:39:42]:
So, ah, so spoofed AI.

Leo Laporte [01:39:45]:
Yeah, it's not hard to do that. And that makes it even more risky. I think you're much more likely to get prompt injection from a human.

Steve Gibson [01:39:51]:
Yeah, exactly. Kate finishes her coverage for Forbes by addressing the question, should we use OpenClaw? Writing: OpenClaw might have some cool capabilities, but for now the risks outweigh the benefits, especially if you are not techie. OpenClaw's creator, Peter Steinberger, has warned users that the tool requires careful configuration and is not yet meant for non-technical users. Romanovsky says, if you're technical, curious, and willing to sandbox everything carefully, it's a fascinating glimpse into the future. But if you handle sensitive data or need reliable security, stay away for now, he advises. Quote, the project moves faster than its security can keep up. Treat it as an experiment, not a production tool, unquote. And Kron warns, if you do choose to use the viral AI agent, be careful that you are downloading the real deal when searching for a product like this.

Steve Gibson [01:41:01]:
It's very important that people are careful not to end up in an unofficial repository that contains malware or other dangerous programs. Kate concludes OpenClaw is growing at an alarming rate, making it important that you treat it with caution. Unless you're an expert, leave it well alone for now. And Leo, I know you have had fun playing with it. And I agree with you. I think it is very clear that the next evolution is agency, is agents, and not just one, but teams.

Leo Laporte [01:41:41]:
Well, and that's actually what we're learning from OpenClaw, that there's a demand for this, that there's a lot of interest in it. And I imagine there are a number of companies starting up right now that will offer that kind of agentic AI in a sandbox. And the problem is, you can sandbox it. I set it up at first on a VPS. But no matter where you put it, eventually you're going to want to give it access to your Google Mail and your contacts and address book. And frankly, I was going to give it a credit card with a $5 a day limit, because the really interesting uses all require that it act on your behalf, agentically. So even if you sandbox it, it's inherently insecure. Obviously nobody at a business should be using this, although many businesses are, because there's a lot of interest.

Leo Laporte [01:42:33]:
Oh, yeah.

Steve Gibson [01:42:34]:
No kidding.

Leo Laporte [01:42:35]:
I think some of it is just, let's take a look at this, because how can we make this work for us? Once you start using it, and part of this is that you can use any AI with it. It doesn't have to be Claude. Most people are using Claude because Claude has this great personality. I hate to admit this, but you really enjoy interacting with it. The thing I most wanted to do with OpenClaw was be able to text message back and forth with Claude. And the other thing that it does that's really interesting is it will run overnight, run all the time. So you could say, hey, as some have done: come up with something interesting.

Leo Laporte [01:43:10]:
Have at it. Let me know. And it will surprise you. I love that idea. I think it's hysterical. It's a— there's a new saying in the AI community, just YOLO it. You know what YOLO is? You only live once. Just YOLO it.

Leo Laporte [01:43:28]:
Just, you only live once. Have fun.

Steve Gibson [01:43:31]:
Yeah, you know, that bungee cord is a little frayed, but it's probably good for a long jump. That's right.

Leo Laporte [01:43:38]:
No, it's insane. It's of course a security nightmare. Of course it is.

Steve Gibson [01:43:42]:
Yeah. And the good news is, here's what I would tell people. You're right, Leo. This surprised the world much like large language models did a few years ago, and look what we have now, now that we've understood how that can be taken to agency. And that's going to happen. You know, I would argue: wait. And it probably won't be that long a wait. Because it's going to be— yeah, that'll be instant.

Leo Laporte [01:44:17]:
People are working as hard as they can on that right now.

Steve Gibson [01:44:20]:
And I have to say, though, I said it earlier on this podcast, I don't know how you control this. And that's the problem. And you put your finger on it. It needs the freedom to misbehave in order to behave.

Leo Laporte [01:44:38]:
Right.

Steve Gibson [01:44:39]:
It's like, in order to act as you, it needs to be able to impersonate you.

Leo Laporte [01:44:45]:
You've got to give it all the credentials. I'm trying to figure out how to give Claude my SSH keys, because every once in a while it says, I can't do this, you're going to have to sudo it yourself. And I don't want to do that. You do it! Oh, but Steve, we're watching a little miracle happen.

Leo Laporte [01:45:07]:
We really are. I've never been as excited about anything in technology like this, even the internet. This is something very special that's happening with huge risks. I'm glad I have you to keep me on the straight and narrow, Mr. Gibson.

Steve Gibson [01:45:27]:
Let's take our second-to-the-last break, and we'll finish up with feedback before we get to our main topic.

Leo Laporte [01:45:33]:
Will do.

Steve Gibson [01:45:34]:
Excellent. And again, people, I would say, I mean, calling it the Wild West understates it. It's bungee jumping with a frayed bungee. I don't know how it's ever going to be safe enough, but it's going to get safer. So wait.

Leo Laporte [01:46:00]:
I have friends who work in AI and because they work in the business, have had access to stuff like this months ago. And all they've been telling me for the last 3 years is, "You have no idea how weird it's going to get." And now I'm starting to see what they're talking about. It is getting very interesting. And I don't think our models— I don't think we know what to do with this. I think this is going to be— we're living in interesting times. And just be careful out there.

Steve Gibson [01:46:33]:
Well, and as we mentioned last week, the large software companies got hit because of the concern over code automation.

Leo Laporte [01:46:44]:
Yeah. Some people were saying, you know, it's a bubble, it's going to crash. I think we had the crash. What was it? Hundreds of billions of dollars in market value disappeared in an hour.

Steve Gibson [01:46:52]:
Yeah. Some came back, but still it's like, you know, the investors said, whoa, wait a minute.

Leo Laporte [01:46:59]:
Yeah.

Steve Gibson [01:47:00]:
So maybe everybody could just tell their bot, you know, I'm annoyed with Windows. Just write me a new one, and leave out all the Microsoft crap. And it gets busy.

Leo Laporte [01:47:13]:
Yeah. Well, that's like what Anthropic did. Claude spent 2 weeks writing a C compiler. It wasn't a very good C compiler, actually, and Claude did it completely autonomously, but it could compile the Linux kernel. So it's getting there. Now we've got to tell you about a sponsor that is very timely and appropriate, GuardSquare. This portion of Security Now brought to you by GuardSquare. They help you make your mobile apps safer for your users.

Leo Laporte [01:47:41]:
Mobile apps today are obviously an inescapable part of life, from financial services to healthcare, retail, entertainment, and AI chatbots. Users trust mobile apps with their most sensitive personal data, but a recent survey showed that 72% of organizations, almost three-quarters, experienced a mobile application security incident last year. 92% of respondents reported rising threat levels over the last 2 years. Meanwhile, attackers who want your users' personal data are constantly finding new ways to attack your mobile app. They reverse engineer it, repackage it, and distribute the modified app via phishing campaigns, sideloading, and third-party app stores. Poor users, they have no idea that it's not yours. By taking a proactive approach to mobile app security, you can stay one step ahead of these attacks and maintain the trust of your users. That's where GuardSquare comes in. GuardSquare delivers mobile app security without compromise, providing advanced protections for both Android and iOS apps, combined with automated mobile application security testing to find vulnerabilities, and real-time threat monitoring to gain insight into attacks.

Leo Laporte [01:49:01]:
Discover more about how GuardSquare provides industry-leading security for your mobile apps at guardsquare.com. Guardsquare.com. We thank them so much and we appreciate the support they're doing for our listeners and for Mr. Gibson on Security Now.

Steve Gibson [01:49:20]:
So Kyle O's email subject was, my first app made with AI. Oh, and I know, Leo, you're now doing an app a day.

Leo Laporte [01:49:31]:
Um, oh, easy. I think of something and I have it in half an hour. It's amazing.

Steve Gibson [01:49:35]:
He said, Steve, exclamation point, after listening to you and Leo talk about coding with AI and Claude, I added an item to my to-do list to learn how to code with AI. I never got around to it, until a situation arose where I found myself needing to create my own custom app. I volunteer for a small nonprofit, and we have a little library, around 150 books, that's not very well organized. I volunteered to clean up our library, and while doing so thought it would be the perfect opportunity to also take inventory of all of our books and provide the inventory to our members, so everyone knows what books we have available. I found a free app for iOS, which I won't mention because it turns out it doesn't work very well. The app scans the book's barcode, looks up the ISBN, pulls content like author, description, and publication date, and creates an inventory of your library. You can then export the inventory to a spreadsheet. It worked great, up until it stopped working after about 30 books.

Steve Gibson [01:50:46]:
None of the additional books scanned were found, and the app failed to inventory them. So I had a list of over 100 ISBNs and no app to generate this inventory. Rather than learn about coding with AI through videos and instruction, I downloaded OpenAI's Codex app for Mac and threw myself in the deep end. That's just it, friends. He said, I would have used Claude, but I already pay for ChatGPT. I told it I wanted a Mac app, written in Python with a GUI interface, that takes a given ISBN, looks it up on Goodreads, provides me with a preview image of the book so I know it's the correct one, and then adds it to the list. After I do this for all my books, I want a CSV-format file export button that provides a CSV, you know, comma-separated values, containing the author, an image of the book, publication date, page count, and description. There were some errors and issues.

Steve Gibson [01:51:54]:
For one thing, CSVs cannot contain images in their cells, an oversight on my part. And for some reason the author's name was listed twice in its cell. I told Codex the issue, and it created an Excel export button and fixed the author issue. When I attempted to open the file, Excel said the file was corrupted. I told Codex, and it fixed whatever the issue was. The app now works flawlessly. I get a clean Excel export that lists an inventory of our small library of books. I am stunned by how simple this all was. There were some other hoops I had to jump through.

Steve Gibson [01:52:36]:
My Mac did not have the latest Python installed, for example, but it was relatively simple to get all set up and working. I do have some concerns. I'm a cybersecurity analyst, but not a developer by any means. Watching Codex effectively say, I'll handle that, while code and commands whizzed by my screen made me feel a bit nauseous. When I had an issue and Codex said, just run these commands, I was hesitant to do so because I didn't know exactly what the commands were doing. Then there's the package manager. It used pip to install BeautifulSoup4, Pillow, and openpyxl. I don't know what these are and what they do, and that makes me a little nervous, especially after learning about the attacks and compromises on open source repositories.
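The heart of the tool Kyle describes, ISBNs in, spreadsheet rows out, fits in a few lines of Python. This sketch substitutes a hard-coded lookup table for the live Goodreads/ISBN query and writes plain CSV; the sample records and the `export_inventory` name are our own illustration, not Kyle's actual code:

```python
import csv
import io

# Stand-in for an online ISBN metadata lookup (hypothetical sample records).
CATALOG = {
    "9780131103627": {"title": "The C Programming Language",
                      "author": "Kernighan & Ritchie", "pages": 272},
    "9780201633610": {"title": "Design Patterns",
                      "author": "Gamma et al.", "pages": 395},
}

def export_inventory(isbns):
    """Return CSV text for the given ISBNs, skipping any not found."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["isbn", "title", "author", "pages"])
    for isbn in isbns:
        record = CATALOG.get(isbn)
        if record is not None:
            writer.writerow([isbn, record["title"],
                             record["author"], record["pages"]])
    return buf.getvalue()
```

Note the constraint Kyle ran into: a CSV cell is just text, so cover images can't live in it; that's exactly why his app grew an Excel export (via a library like openpyxl) instead.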

Steve Gibson [01:53:34]:
I think what Codex did was overall safe, and the project was a huge success. I have no formal developer training. I took a Python class in college, if that counts, yet this created a fully functioning custom app for me in under 30 minutes. Thank you and Leo for discussing developing with AI. This gave me the confidence to jump in the deep end and create this app. Appreciate you both, Kyle. So Kyle has shared a perfect use case for today's code-generation AI. Thinking about this, the best analogy I have is the similar breakthrough that was created by the invention of the PC-driven spreadsheet.

Steve Gibson [01:54:25]:
To me, this feels like the introduction of the spreadsheet because more than anything, the invention of the spreadsheet was empowering. Non-programmers were suddenly able to leverage the power of a personal computer. As a matter of fact, it's credited with, you know, what saved Apple and the Apple II. People were buying Apple IIs just to run VisiCalc.

Leo Laporte [01:54:52]:
It was the killer app, the first killer app, right?

Steve Gibson [01:54:56]:
So non-programmers were suddenly able to leverage the power of a personal computer in a way they never could before. You know, they may still not have been able to author programs themselves from scratch, but the spreadsheet meant they could get meaningful and useful results without needing to. They were able to model data themselves. Kyle took a Python class in college, but he's explained he's not a coder. Yet thanks to, in this case, OpenAI's Codex app on a Mac, Kyle is now in possession of a custom app that does real-world work to solve a problem he had. And as we've also witnessed, Leo, you know, you who are a coder, you effuse no less enthusiastically over the successes you've had, first with that test project, scanning the internet for topics for podcasts.

Leo Laporte [01:55:58]:
Stories, which I use every day now.

Steve Gibson [01:56:00]:
Yeah.

Leo Laporte [01:56:00]:
And I generate briefings with it for all our shows. I think it's improved our shows dramatically.

Steve Gibson [01:56:06]:
I've seen the difference in, in those.

Leo Laporte [01:56:08]:
They're tighter. The hosts are better prepared. It's great.

Steve Gibson [01:56:12]:
Yep. And we know that you, Leo, could have painstakingly written a program because you're a coder. You could have done what needed to be done, you know, under the pre-AI coding paradigm, but the effort was not worth the reward.

Leo Laporte [01:56:30]:
I never did anything for 20 years. I didn't do it.

Steve Gibson [01:56:34]:
And that's what Claude Code has changed for you. You're now using your understanding of coding, and Claude Code provides the leverage to dramatically shift that work-versus-reward trade-off in favor of easily and readily, even joyfully, producing applications that are of real use.

Leo Laporte [01:56:58]:
I'll even go farther than that because I also use Claude Code to configure all my systems. As I was saying, I just set up a system and it knows, it reads the manual so I don't have to. It does the settings. I could figure all of that out, but it's brought a huge amount of pleasure in computing to me because I can be so much more effective and efficient. I can have tools that simplify things. Kyle's example is a really good one. There's no way that that was a security issue. The worst thing that could happen is maybe he'd accidentally DDoS Goodreads by making too many requests a second to it or something like that.

Leo Laporte [01:57:34]:
There was no risk in creating that application. And his experience, by the way, that's exactly what it's like. It's not perfect the first time you try it. But it's so easy to tell it, well, you did the name twice, what's going on? And it fixes it. And so you go through this debugging process. It is like pair programming, but at a very high level.

Steve Gibson [01:57:58]:
And it's conversational.

Leo Laporte [01:57:59]:
It's conversational. I think your analogy to VisiCalc is exactly right. Really, the history of computing is that we've gotten higher- and higher-level languages. This is just the highest-level language. It's finally English. And I think this counts. I really do. I think it's great.

Leo Laporte [01:58:16]:
And obviously you wouldn't want to write router firmware with it, although I think people are. There's certain things that you probably shouldn't use an AI to write.

Steve Gibson [01:58:27]:
It's going to be interesting to see what happens because I agree with you. I think knowing people, they will use it for everything.

Leo Laporte [01:58:34]:
For everything.

Steve Gibson [01:58:34]:
They already are. It's just, it's just simply, it's just going to happen.

Leo Laporte [01:58:38]:
Yeah. But I think there's so many harmless applications that are just quality-of-life applications. You know, one of the things I've been struggling with since I got these little, uh, album art things that are behind me. They're called Pixoo, from a company called Divoom. It's a Chinese company. It's a silly little device, and they have the worst app on an iPhone to manipulate it. And every day I'd be clicking through it, you'd often see I'd end up on the wrong album, clicking into all this stuff. So very quickly, in about half an hour, an hour, I think I was watching the football game on Sunday, I wrote a program to do this.

Leo Laporte [01:59:17]:
It turns out this is just an HTTP put, and it's in REST format. It's a very simple thing. I probably could have written it. I'd have to look up the API and figure it out and stuff. Be trivial to write. Now it's instant. And I have a command line. I wrote a little Bash shell command line that sets it like that.

Leo Laporte [01:59:36]:
I could put any art up there. Now that's, A, harmless. There's no security issue here. B, a huge quality-of-life improvement. C, yes, totally doable if I were willing to spend the time. But most importantly, it was easy and it made a big difference in my operation. And so I'm finding more and more things like that.
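A command-line setter like the one Leo describes amounts to a tiny JSON-over-HTTP client. The sketch below is hedged throughout: the device address, endpoint path, and command name are placeholders, not Divoom's actual vocabulary, which comes from the Pixoo's own local HTTP API documentation.

```python
import json
from urllib import request

# Hypothetical LAN address and endpoint; a real device documents its own.
DEVICE_URL = "http://192.168.1.50/post"

def album_art_command(image_url):
    """Build the JSON body asking the display to show an image.
    The command name here is illustrative, not the device's real API."""
    return {"Command": "Device/ShowImage", "FileName": image_url}

def send(payload, url=DEVICE_URL):
    """POST the payload as JSON, the simple REST-style call Leo mentions."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=5) as resp:  # network call; needs a device
        return json.load(resp)

payload = album_art_command("http://example.com/cover.gif")
print(json.dumps(payload))
```

Wrapped in a one-line shell alias, this is exactly the kind of instant, harmless quality-of-life tool being described: the worst failure mode is a malformed request to a lamp on your own LAN.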

Leo Laporte [01:59:58]:
There's risk. I also wrote a tool that lets me find and turn off and on services on my system. That turned out not to be such a good idea because I turned things on I shouldn't have and I turned things off I shouldn't have. But it was fun. And now I'm a little more cautious with what I turn on and off in the background.

Steve Gibson [02:00:18]:
Yeah, I think this is the real deal. This is not a fad. It's very exciting.

Leo Laporte [02:00:23]:
No.

Steve Gibson [02:00:25]:
Yeah.

Leo Laporte [02:00:25]:
And it's just the beginning. And it is exactly where it should be: the computer talking to the computer. Of course, that's natural. How much farther beyond that it'll go, I don't know. I don't want it to write novels. I don't want it to write musicals or make movies, probably. But for talking to a computer.

Leo Laporte [02:00:41]:
There's nothing better.

Steve Gibson [02:00:43]:
Yeah. Okay, last break, and then we're going to talk about least privilege.

Leo Laporte [02:00:48]:
Okay. I've been using up all your time. I'm sorry, Steve.

Steve Gibson [02:00:51]:
No, no, no, I wanted you to because I sort of assumed that this was going to be an engaging topic for both of us.

Leo Laporte [02:00:59]:
You know, I could talk about this forever. And I know people are wondering if you could use it to write the apps you write, and I wouldn't want you to use it to write the apps you write. But you wouldn't want to either. You like doing what you do, right?

Steve Gibson [02:01:12]:
Okay, so I actually did have something in the show notes I was going to skip over, but I will share it.

Leo Laporte [02:01:16]:
Would you please?

Steve Gibson [02:01:17]:
A number of our listeners have asked me whether, and if so, how this revolution in AI coding might affect my own work. My best assessment is at this point it's not clear, and you know, if nothing else, it's way too early. In general, I eschew the use of tools that do not produce the same quality result as I'm capable of producing. I'm just unwilling to compromise. I just don't see the need. For example, I'm still authoring all of my web pages at grc.com by hand.

Leo Laporte [02:01:57]:
You might want to consider asking for some help.

Steve Gibson [02:02:02]:
I've seen the utter crap that even the best HTML and CSS WYSIWYG authoring tools spit out, and I just can't abide it. It's just horrific-looking crap. And it's like, no. I mean, yes, I know that mine look like they're from 1995, but you know, they also download instantly. Now, it happens that there are savings that add up. Um, having super lightweight web pages means that GRC's little 100-megabit connection is able to easily serve the world's needs without breaking a sweat. The main grc.com server has 24 gigabytes of storage. That's not RAM, that's total mass storage. And it's not even full.

Steve Gibson [02:02:54]:
I mean, it's like a third full. That means that GRC's entire website can sit cached in RAM, and it's easily served by a single CPU that's not particularly fast. You know, I understand the modern way to solve problems is just to throw more and more resources at the need until whatever it is goes fast enough, but the truth is recurring costs really do begin to escalate. And once you take that path, there's no turning back. So I'm not saying there's anything wrong with that. I get it that that's the most efficient approach for most situations, but that's not for me. I'm obviously not into efficiency, except for my code. So I'm going to be very excited to follow along with these breakthroughs in coding technology, but I don't expect it's going to affect the way I code my own stuff.

Steve Gibson [02:03:48]:
I do it because, you know, it's like, um, numerical control machines appear that are able to do woodworking, but I'm still in the basement with a chisel. Because you enjoy it. I just like it. Yes, I love the art. I love it.

Leo Laporte [02:04:09]:
And I think the website might be a better example, because if at any point you got tired of that, you don't have to use React and Angular. You can have an AI generate— this is the website that— remember that briefing tool that I created? This is the website that it generated. So for every show, I create a page like this. It's HTML. It's not super complicated. It is very fast and light. You don't have to be doing a big JavaScript thing at all.

Leo Laporte [02:04:41]:
This is generated every night. That's why there's only a few stories for TWiT. Let's do Intelligent Machines, which is coming up tomorrow. So this will have more stories in it. It does AI summaries of the stories. It has the link. And this is designed for the other hosts to read. I call it a briefing book.

Leo Laporte [02:04:59]:
That's as light as it can be. There's no JavaScript. There's a little bit of CSS, probably, to style it, but it's very, very simple, using plain HTML. So you wouldn't have to make a big JavaScript site. You could even tell it, make the site look like it was designed in '98. It would do it. You could say, no JavaScript, and I don't want any React. I want it to be instantaneous. I want the lightest possible site.

Leo Laporte [02:05:25]:
It would do what you tell it to do. But if you enjoy it, there's no reason for you to do that. It's only if it'd be something that you didn't want to do or you didn't have time to do. That you might consider it. I'm not trying to talk you into it. I love it that you do this stuff by hand, and I hope that people will continue to do that by all means. I don't want to see the world filled with vibe-coded slop. That would be terrible.

Steve Gibson [02:05:46]:
It's going to be interesting to see what happens when, like, people are deploying code that they didn't write. I mean, I'm going to put my name on it. You know, I don't ghost-author novels, because if my name is on it, it's from me. And I can't imagine shipping code that I didn't write, you know, that I just dictated. It's like, no. That's why.

Leo Laporte [02:06:17]:
I put a lot of this stuff, uh, up on GitHub for other people who want to look at it. And in every case I say it's generated by Claude Code. I don't say it; Claude does it itself. It says, "Built for a person who's entirely vibe-coded with Claude Code." I make sure that that's clear.

Steve Gibson [02:06:37]:
I think that's very cool. Yeah.

Leo Laporte [02:06:39]:
And by the way, it writes all this documentation too, which, trust me, no one wants to write documentation. So it does a very nice job with that. So I mean, that's cool. I don't intend for anybody to use it but me. To me, this is stuff I'm writing for myself, not for anybody else. But I post it just in case people are curious, because we talk about it all the time. Well, and there's also an incentive to do that, because Claude wants to store stuff on GitHub for some reason. And so I go along with it; it pushes it and everything.

Leo Laporte [02:07:13]:
I say, "Okay, sure, whatever. Tell the world." Uh, we were going to take a break. Did we take a break? No. You're watching Security Now, and that there's Steve Gibson. I'm Leo Laporte. We're glad you're here. We're especially glad our Club Twit members are here. Thank you for making this show possible.

Leo Laporte [02:07:31]:
We really appreciate it. On we go with least privilege.

Steve Gibson [02:07:36]:
Okay, now this is a little bit of a thinker for people. May not seem like it is, but, um, it kind of happened as I was working on the story about Coinbase, so I think this is useful. Uh, the topic evolved as I was expounding upon the larger lesson to be learned following Bleeping Computer's report of the second insider breach at the US's largest publicly traded crypto exchange, which, you know, is Coinbase. Uh, as I'm always interested in doing, I wanted to draw some conclusions from the underlying cause of the second breach, and I wound up confronting one of the simplest, most well-known, and well-understood principles of security, which is simply known as least privilege. The concept of least privilege couldn't really be any simpler. It simply means not offering any more rights or privileges than are required to perform a specific task. Simple, right? But if the concept is so simple, why is it that we as an industry and as users of this technology so often fail in the application of least privilege? If it's simple, it should be easy to do.

Steve Gibson [02:09:08]:
The reason why we as an industry and as users so often fail in the application of least privilege is that least privilege is also least convenient. The sad and sobering truth is that today, as mature as our theories of security may be, and I believe our theories are very mature, we remain in denial about the need to apply those theories everywhere. We know how to make our systems far more secure than they actually are. But, you know, doing that, making them that secure, might inconvenience us. We still choose convenience over security, and we hope it'll be good enough. Okay, so with that preamble, let's look at a case in point and see what more might be learned. We've talked about the trouble companies are having, right, with this new practice of BPO.

Steve Gibson [02:10:09]:
That's the new jargon, business process outsourcing, which is the latest in business fashions. In the same way that so-called pop-up restaurants have been created, the idea is that it's now possible to also have pop-up corporations. A couple of people who share an idea pitch their concept to an angel investor to raise some seed capital. Then, rather than embarking upon a hiring campaign to find and employ the wide range of talent and experience that they'll require, they instead assemble their operating enterprise like Lego blocks from, you know, an array of now-available online services. The problem with this is trust. The resulting virtual enterprise lacks any core loyalty because to all of the various third parties that have been commissioned, the commissioner is just another one of their many client customers. There cannot be any sense of institutional loyalty because there's nothing to be loyal to. Clients are just account numbers and API linkages.

Steve Gibson [02:11:26]:
It really is a very different way of organizing and operating. You essentially get throwaway enterprises. So it's against this backdrop that Bleeping Computer brings us the news of another insider breach at Coinbase originating from Coinbase's use of business process outsourcing. Bleeping Computer wrote, Coinbase has confirmed an insider breach after a contractor improperly accessed the data of approximately 30 customers, which Bleeping Computer has learned is a new incident that occurred in December. A Coinbase spokesperson told Bleeping Computer, quote, last year our security team detected that a single Coinbase contractor improperly accessed customer information impacting a very small number of users, approximately 30. The individual no longer performs services for Coinbase. The impacted users were notified last year and were provided with identity theft protection services and other guidance. We've also disclosed this incident to the relevant regulators, as is standard practice.

Steve Gibson [02:12:44]:
Bleeping Computer, they wrote, has learned that this is a newly revealed insider breach and is not related to the previously disclosed TaskUs insider breach in January of last year. This statement comes after the Scattered Lapsus$ Hunters cybercrime group briefly posted screenshots of an internal Coinbase support interface on Telegram and then deleted the post soon after. The screenshots showed a support panel that gave access to customer information including email addresses, names, dates of birth, phone numbers, and what's known as KYC, know your customer, identifying documents, right, like their driver's licenses. It's common for stolen data to be passed along among different threat actors before being leaked or disclosed. So it's unclear whether this group was behind the insider breach or whether other threat actors carried it out. However, the same threat actors previously claimed to have bribed an insider at CrowdStrike to share screenshots of internal applications. Over the past few years, they write, business process outsourcing (BPO) companies have become increasingly targeted by threat actors seeking access to customer data, internal tools, or corporate networks. A business process outsourcing company is a third-party firm that performs operational tasks for another organization. These tasks commonly include customer support, identity verification, IT help desk services, account management, and so forth.

Steve Gibson [02:14:31]:
Because BPO employees often have access to sensitive internal systems and customer information, they have become a high-value target for attackers. In the past, threat actors have exploited BPOs through bribing insiders with legitimate access, social engineering support staff to grant unauthorized access, and compromising BPO employee accounts to reach internal systems. As we've seen with Coinbase this year, one way BPOs are targeted is by bribing their employees to steal or share customer information. As I said, lack of loyalty to the targeted enterprise. Coinbase disclosed a similar data breach last year, later linked to external customer support representatives employed by TaskUs, an outsourcing firm that provides services to the crypto exchange. Another common tactic is social engineering attacks against outsourced IT and support desks, where threat actors impersonate employees and call BPO helplines to obtain access to internal corporate systems. In one of the most prominent cases, attackers posed as an employee and convinced a Cognizant help desk support agent to grant them access to a Clorox employee account, allowing them to breach the company's network. The incident later became the focus of a $380 million lawsuit by Clorox against Cognizant.

Steve Gibson [02:16:05]:
Google reported that threat actors targeted U.S. insurance firms in social engineering attacks on outsourced help desks to gain access to internal systems. Retailers also confirmed that social engineering attacks against support personnel enabled ransomware and data theft attacks. Marks & Spencer confirmed attackers used social engineering to breach its networks, while Co-op disclosed data theft following a ransomware attack that similarly abused support staff access. In response to the attacks on the Marks & Spencer and Co-op retail companies, the UK government issued guidance on social engineering attacks against help desks and BPOs. In some cases, hackers target the BPO employee accounts themselves to gain access to the customer data they manage. In October, Discord disclosed a data breach that allegedly exposed data from 5.5 million unique users after its Zendesk support system instance was compromised.

Steve Gibson [02:17:13]:
While the company did not confirm how its instance was breached, the threat actors told Bleeping Computer that they used a compromised account belonging to a support agent employed by an outsourced business processing provider. Using this account, they downloaded Discord's customer data. This repeated abuse of outsourced support providers shows how threat actors are increasingly bypassing vulnerability exploits and instead targeting third-party companies with access to corporate networks and data. Okay, so this is a variation on "the call is coming from inside the house." In this case, the call is coming from inside the house of someone you trust. The source of the inherent vulnerability is clear. In order for an external outsourced business process provider to perform their functions, they must be trusted with a connection into the outsourcing entity's network or other business processes. Although they must be trusted, they are not worthy of that trust.

Steve Gibson [02:18:33]:
As I noted, an employee of an enterprise has an inherent stake in the company that employs them. We kept hearing about bribery being the way these external companies were exposed. But an employee, as I said, of an enterprise has a stake in the company. They attend meetings with their fellow employees. They look them in the eyes. They may socialize with them after work hours, attend each other's birthday parties or those of their children. They may be on a softball team or have attended explicit team-building events. They may share a department where they routinely meet, plan, participate, and work side by side to meet goals.

Steve Gibson [02:19:16]:
All of those things serve to create a stake in the shared welfare of the organization, but none of that exists in the hearts and minds of subcontractors, to whom that organization is just another account among many. This makes these subcontractors far more susceptible to bribery. This newfangled restructuring of organizations appears to be irreversible, right? The days of an employee starting off in the mailroom and gradually working their way up over the course of, you know, decades, to finally receive a gold watch and become CEO, those are long gone. And they're not coming back. So how do we make this business process outsourcing work better? My hope is that everyone is learning from these initial BPO missteps and that the problems we've seen and are seeing are due to what I would call API over-trust. In the same way that it's easier to just give someone wider permissions to a database than they actually need, it's simpler and quicker to design an API that offers more power than is needed to fulfill a specific outsourced task. For example, an external BPO which is providing help desk services may not need access to a customer's entire record. They may only actually need minimum identifying information and a subset of specific customer history.

Steve Gibson [02:21:10]:
But when initially setting things up, it's quicker and easier to just give this trusted, and I have that in air quotes, third party unfettered and unfiltered access to the entire customer database. After all, they're under contract, right? What could possibly go wrong? What we see is another example of the sort of finger-pointing I've been highlighting recently. Whose fault is it if a subcontractor is bribed to disclose their contractor's critical information? The subcontractor is easiest to blame. But the information was still disclosed. The entity that did the subcontracting gets blamed for the breach of their systems. The question is whether that subcontractor had more access than they needed, because they were able to make that disclosure. Had they only been given the bare minimum they needed, the company providing that access would have been better protected. This problem of excess privilege is not new.
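The alternative being described, handing the help desk a filtered view instead of the whole customer record, can be sketched as a simple field-level projection. The field names and role names below are invented for illustration; a real system would enforce this server-side, behind the API, so the raw row never reaches the contractor at all.

```python
# Illustrative customer row; in a real system this never leaves the API layer.
FULL_RECORD = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "dob": "1990-01-01",
    "ssn": "123-45-6789",
    "kyc_scan": "<document bytes>",
    "ticket_history": ["#1001", "#1007"],
}

# Each role gets only the fields its task requires; hypothetical role names.
ROLE_FIELDS = {
    "helpdesk": {"name", "ticket_history"},      # minimum to assist a caller
    "compliance": {"name", "dob", "kyc_scan"},   # wider, separately audited
}

def project(record, role):
    """Return only the fields the role is entitled to; unknown roles get nothing."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

view = project(FULL_RECORD, "helpdesk")
print(sorted(view))  # a bribed help-desk account can only leak these fields
```

The design point is that the blast radius of a bribed or compromised account is fixed at setup time, by the projection, not at incident time by hoping the operator behaves.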

Steve Gibson [02:22:30]:
Remember that BPOs were once called MSPs. We talked about that years ago— managed service providers. We covered that story of a dental services MSP which had been compromised by a ransomware group. This group struck gold because the way the MSP operated was to require full access to their clients' networks. The ransomware group took advantage of this unfettered network access to install ransomware and encrypt the PCs and other equipment of every one of the MSP's customers. It was a widespread disaster for the MSP and for every one of the dental offices it served. There was no defensible reason for the MSP to have fully privileged network connection to each of its clients' internal networks. They didn't need that, but that was the easy path that was taken.

Steve Gibson [02:23:29]:
If the access had been strictly transactional, against a service provided and running on the client side, far less damage, if any, could ever have been done. So philosophically, this is what must change. Any organization wishing to outsource services must consider the consequences of that service provider becoming a hostile entity. Maybe not by design; maybe by mistake, maybe by compromise, maybe by an insider, you know, accepting a bribe. It doesn't matter how; the question is what happens if they become a hostile entity. So instead, the way to solve that is to design and provide an API linkage that will protect the organization's interests under any circumstances, no matter what their contractors might do. A familiar example of this sort of function, because we know how to do this, right? A familiar example is an HSM, the hardware security module, whose internal write-only private key and machinery can be employed to sign a file while at the same time nothing and no one can exfiltrate and steal its secrets. The analogy is not perfect, but the point I want to make is that designing with the concept of least privilege is what should always be done.
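That sign-without-exposing discipline can be modeled in miniature: an object that will use its key on demand but offers no path that returns the key. This is only an interface sketch, using HMAC as a stand-in for a hardware-held asymmetric key, and Python cannot genuinely seal memory, so it illustrates the API shape rather than real tamper resistance.

```python
import hashlib
import hmac
import os

class SealedSigner:
    """Sign-only interface: the secret is generated inside and never returned."""

    def __init__(self):
        self._secret = os.urandom(32)  # created internally; "write-only" in spirit

    def sign(self, data: bytes) -> str:
        # The key is used, never exposed; no method hands it back to the caller.
        return hmac.new(self._secret, data, hashlib.sha256).hexdigest()

signer = SealedSigner()
tag = signer.sign(b"release-1.0.tar.gz")
print(len(tag))  # 64 hex characters
```

A real HSM enforces the same contract in hardware: callers get signatures, attestations, and decryptions as services, and key extraction simply isn't an operation the interface offers.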

Steve Gibson [02:25:06]:
Always. In the HSM example, there was no need to allow the device's internal private key to ever be exposed, no matter how much the user of that key might be implicitly trusted. Thus, the key should never be exposable, not because it would be stolen, but because it could be. I've talked a lot about not exposing any non-public service to the public internet. This is another example where least privilege comes into play. When I've said that authentication doesn't work, I've meant that it must not be depended upon to work. I've asked why someone in North Korea, whom you almost certainly don't intend to have accessing your enterprise's network, should even be given the opportunity to challenge your network's authentication system. If you were monitoring, one by one, every incoming connection to the publicly exposed management interface of your enterprise's firewall, and a connection attempt was inbound from North Korea, would you not choose to drop its packets? Of course you would.

Steve Gibson [02:26:29]:
If North Korea is being allowed to connect to your cloud services, that's not least privilege. So my point is, even though the concept of least privilege could hardly be simpler or more easily explained, it is a trivial concept, it turns out it's not trivial to actually deploy in every instance. So it's not something that is robustly deployed in the real world, but it needs to be. I believe it's the only way forward. Through the years of this podcast, I've broadly divided problems into two categories, right? We've got mistakes that are made, which are going to happen, and also a second category, policies that are deliberate. AI-driven code checking reasonably promises, as we talked about last week, to finally enable us to deliver bug-free code. I would argue, with AI fixing human errors, we're in a whole different world.
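The drop-the-packets point reduces to a gate that rejects by source before any credential is ever examined. In this sketch, country_of() is a stand-in for a real GeoIP lookup (in practice this lives in the firewall or a geo-blocking ruleset, not application code), and the addresses are documentation examples.

```python
# Countries from which this management interface should never see traffic.
BLOCKED_COUNTRIES = {"KP"}

# Stand-in for a real GeoIP database lookup; illustrative data only.
GEO_TABLE = {"203.0.113.7": "KP", "198.51.100.4": "US"}

def country_of(ip):
    """Map a source address to a country code; unknown sources get '??'."""
    return GEO_TABLE.get(ip, "??")

def accept_connection(ip):
    """Least privilege at the network layer: drop before authentication runs."""
    if country_of(ip) in BLOCKED_COUNTRIES:
        return False  # the packet never gets to challenge the login system
    return True       # only now would authentication even be attempted

print(accept_connection("203.0.113.7"), accept_connection("198.51.100.4"))
```

The ordering is the whole argument: authentication is the last line of defense, not the first, so attackers you never intended to serve should be refused before they get to test it.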

Steve Gibson [02:27:33]:
If it's AI code from the start, I don't put that in the same class at all. AI fixing human mistakes, like we talked about last week, that seems like a near certainty to happen. And while that's terrifically exciting, it won't cure all our ills, because failures to implement least privilege are not mistakes. They're policies. They're the result of decisions that were made. This means that to further improve our delivered security moving forward, we need to make the decision to far more robustly design for least-privilege operations. That's how we get where we want to go from a security standpoint and, you know, stop having the breach du jour.

Leo Laporte [02:28:30]:
Is it related to zero trust? It's kind of like the idea of zero trust, right?

Steve Gibson [02:28:34]:
Yeah. Yeah.

Leo Laporte [02:28:36]:
It's basically, you know, it's just fundamental in security. Give it as little as it needs and no more.

Steve Gibson [02:28:44]:
I know. Except that people don't. You know, some company is in a hurry to get their help desk set up and says, hey, here's a credential that lets you log on to our database so that you're able to look up our customers. Except if that contractor goes bad, you've just lost your database.

Leo Laporte [02:29:06]:
They don't even have to go bad. You're only as good as the weakest security practice of any contractor, right?

Steve Gibson [02:29:15]:
Right, exactly. And so they have no loyalty to you. They keep succumbing to bribery because it's like, hey, how much money?

Leo Laporte [02:29:23]:
Okay. But, uh, I dimly remember the story of, it was, I think, an electric company that still had open, um, remote access, uh, ports for a former contractor who had left, but they never took away their privilege for remotely accessing the system. And, uh, well, of course that's a recipe for disaster. That's just— yeah, yeah.

Steve Gibson [02:29:53]:
I mean, it's funny too, because in movies you see people having their credentials revoked the moment that, you know, it's like, you know, give us your—

Leo Laporte [02:30:02]:
Passkey, security guard comes, parking pass, put your stuff in there.

Steve Gibson [02:30:07]:
Yeah, yep. And you absolutely, at that point, you know, you want their password to no longer work.

Leo Laporte [02:30:14]:
When we've had to terminate employees in the past, we've done that and it can be very painful. And the few times that we didn't, we deeply regretted it. And it wasn't out of maliciousness, I don't think. It was more out of just not paying attention or whatever. And stuff disappeared and I don't know. It's just— Yeah.

Steve Gibson [02:30:36]:
Least privilege is the easiest thing to say. But it's so easy not to do it, and yes, that's least secure.

Leo Laporte [02:30:44]:
Because we want to trust. We want convenience, but we also want to be trusting. But when it comes to security, trust no one, right? Steve taught us that. Exactly right. Steve Gibson's at grc.com. That's his website. Proudly stuck in the 1990s, but it's fast and there's no JavaScript. grc.com.

Leo Laporte [02:31:04]:
Actually, there's a little JavaScript here and there for a few things you have to do, but yep, uh, only when absolutely necessary. Uh, a few things you might want to check out there. Of course, Spinrite, the world's best mass storage maintenance, recovery, and performance-enhancing utility.

Steve Gibson [02:31:19]:
I will say there's no JavaScript, there's no JavaScript library. I wrote it all by hand.

Leo Laporte [02:31:25]:
Yes, that's the key. Yeah, that's another example of least trust. If you're loading blobs of software from another website that you don't know and you don't examine, that's a little bit too much privilege, if you ask me. Steve doesn't do that. Get Spinrite. If you have mass storage, you need Spinrite. He also has the brand-new DNS Benchmark Pro. A great way to test a wide variety of DNS servers.

Leo Laporte [02:31:51]:
Do you find one that really is fast where you are? And that's different for everybody. That's why you need the program. Both of those are at grc.com. If you would like to get Steve's show notes emailed to you, even on Super Bowl Sunday, you can. All you have to do is go to grc.com/email. This is a great page for doing two things. One, giving him your email address so he can whitelist you so that you can send him pictures of the week or suggestions or questions. So it's good for that.

Leo Laporte [02:32:23]:
But below that, when you've given the email address, there'll also be two checkboxes, unchecked by default: one for the weekly mailing, which is the show notes, and another for a very infrequent email when there's a new product Steve wants to tell you about. Have you— did you even send one out for DNS Benchmark Pro yet?

Steve Gibson [02:32:40]:
Not yet. Not ready to yet.

Leo Laporte [02:32:42]:
I love Steve. This, this is my kind of marketing. No, it's not ready yet. I'll tell you when. Uh, anyway, check those two boxes so you get those emails. Um, what else? Oh, he's got the show, of course. What am I saying? He's got copies of the podcast. In fact, he has unique copies.

Leo Laporte [02:33:01]:
He has a 16-kilobit audio version for the bandwidth impaired. That's as small as it can get. He's got a 64-kilobit audio version, which sounds fine, but it is half the size of the one we offer on our website. He also has the aforementioned show notes. He also has a transcript. Uh, one of the reasons for the 16-kilobit version is he wanted a small file to send to his transcriptionist, the wonderful Elaine Farris, the farrier. She, uh, she lives out on a ranch and doesn't have a lot of bandwidth, so— but this way she can download it. She's a court reporter, she's very good at transcribing, makes beautiful transcripts entirely by hand.

Leo Laporte [02:33:38]:
And those are available usually a few days after the show. And all this is free, by the way, except for Spinrite and DNS Benchmark Pro. Everything else is free. He also has— that's it. Well, he has lots of other stuff. He's got Shields Up. He's got a bunch of free programs. Never 10.

Leo Laporte [02:33:57]:
You're going to have to write a Never 11 if you're going to stay on 10. You're going to have to write it. Actually, did you write a Never 11? Did you write a program to keep—.

Steve Gibson [02:34:08]:
No, I switched it to In Control.

Leo Laporte [02:34:10]:
In Control. That's right. That's also free.

Steve Gibson [02:34:13]:
So now it's generic. That way it can do Never 12 also.

Leo Laporte [02:34:17]:
Never, never, never. All this stuff handwritten in assembler for maximum performance and minimum size. Uh, let's see what else. Well, you can get the show at our site, twit.tv/sn. We have 128-kilobit audio, which sounds, I'll be frank, not one whit better than the 64-kilobit audio Steve has, despite its being twice the size. Ask Nyquist why. I don't know why. Nyquist knows.

Leo Laporte [02:34:45]:
The reason we make that big one is Apple. Don't ask. They want larger file sizes. We also have video. No one else but us. We have the video. If you want to see Steve's mustache at work, that's all at twit.tv/sn. There's also a YouTube channel dedicated to the video, actually.

Leo Laporte [02:35:02]:
You can clip it and share it with friends and family. That's the best way to do that. Subscribe in your favorite podcast player. That's probably the best way to get the show automatically. You don't have to think about it. You'll get it as soon as we're done. There's audio and video there too. You can get either or both.

Leo Laporte [02:35:17]:
Leave us a nice review if you would, if your podcast client lets you do that. If you want to watch us live, we do the show right after MacBreak Weekly of a Tuesday, typically 13:30 Pacific, 16:30 East Coast time, uh, that would be 21:30 UTC. The streams are live on the Discord for the club members, YouTube, Twitch, X.com, Facebook, LinkedIn, and Kick. All of those places you can watch us live, chat with us live. I'm watching the chat. Uh, I guess that's everything that needs to be said. Steve, thank you for being here. We really appreciate it, and we'll See you next week on Security Now.

Steve Gibson [02:35:58]:
Right-o. See you on the 17th.

Leo Laporte [02:36:01]:
Hey everybody, it's Leo Laporte. You know about MacBreak Weekly, right? You don't? Oh, if you're a Macintosh fan or you just want to keep up with what's going on with Apple, this is the show for you. Every Tuesday, Andy Ihnatko, Alex Lindsay, Jason Snell, and I get together and talk about the week's Apple news. It's an easy subscription. Just go to your favorite podcast client and search for MacBreak Weekly or visit our website, twit.tv/MBW. You don't want to miss a week of MacBreak Weekly. Security Now.
