Transcripts

Security Now 971 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here, with the latest chapter in the Voyager 1 drama coming up. We'll talk about the graybeard at Gentoo who says no AI in Linux, about the Hyundai owner whose car really is tracking him, and then what the EU plans to do with end-to-end encryption. I can give you a little tip: it's not good news. All that coming up next on Security Now. Podcasts you love.

From people you trust. This is TWiT. This is Security Now with Steve Gibson, episode 971, recorded Tuesday, April 23rd, 2024: Chat Out of Control. It's time for Security Now, the show where we cover the latest security, privacy and internet updates, even occasionally some good books and movies, with this guy right here, Steve Gibson, the security guy in chief.

0:01:05 - Steve Gibson
Hello, security guy. Yeah, and you know, where are the good movies, Leo? I mean, we used to have a lot of fun. I did see that you're doing the Bobiverse with Stacey's Book Club, yes, Thursday. She was chagrined when you reminded her. She said, oh, I forgot to read that. Of course it'll only take her an hour; it's pretty quick, yeah.

And my wife apparently just glances at pages, like with e-books. I see her doing it, and I say, can I test you on the content of this after you're through? And she says, oh, I'm getting most of it.

0:01:40 - Leo Laporte
Did she go to Evelyn Wood's Speed Reading Academy?

0:01:49 - Steve Gibson
Apparently, or she's in some sort of a time warp, I don't know. Because, I mean, I'm an engineer, I read every word, and in fact that's why Michael McCollum was sending me his books before publication. It turns out I'm a pretty good proofreading editor, because I spot every mistake. Of course not my own, just other people's. Much easier to see.

0:02:08 - Leo Laporte
Yeah, it's really true. It's the same with code, right? You can even know there's a bug in there, you can stare at it until the cows come home, and you just don't see it.

0:02:16 - Steve Gibson
Absolutely. One of the neatest things that the GRC group has done for me is our SpinRite newsgroup. Basically, we're not having any problems. Thousands of people are coming on board with 6.1 and it's done.

0:02:32 - Leo Laporte
You're not chasing bugs around. Oh, that's good. So you've got a great bunch of beta testers. That part is in control.

0:02:41 - Steve Gibson
We're going to talk about what is out of control. Today's title is Chat Out of Control, for Security Now episode 971, the second-to-last episode of the month of April, which becomes important because of what's going to happen in June. But anyway, I'm getting all tangled up here. We're going to talk about a lot of fun things, like: what would you call Stuxnet on steroids?

What's the latest on the Voyager 1 drama? We've got even more good news than we had last week. What new features are coming to Android, probably in 15? We're not sure, but probably, and also to Thunderbird this summer. What's China gone and done now?

What did Gentoo Linux say? I'm sorry, why did Gentoo Linux say no to AI, and what's that all about? And after sharing and discussing a bunch of feedback, because there wasn't a huge amount of really gripping news this week but we had a lot of feedback from our listeners that we're going to have fun with, and a brief little update on SpinRite, we're going to examine the latest update to the European Union's quite worrisome chat control legislation, which is reportedly just over a month away from becoming law. Is the EU about to force the end of end-to-end encryption in order to enable and require the scanning of all encrypted communications, for the children? It appears ready to do just that. This latest update came onto my radar because somebody said that the legislators had excluded themselves from the legislation.

0:04:32 - Leo Laporte
Of course.

0:04:34 - Steve Gibson
Well, so I got this 203-page tome, and Section 16A was in bold because it had just been added. Anyway, we'll talk about it. I think the person saying that caught my attention, and I'm glad he did, but he was overstating the case in order to make a point. The case we have doesn't need overstating, because it looks really bad. There's no sign of the exclusion, like the EU gave us in their legislation in September, which said "where technically feasible." That's completely missing from this. So anyway, I think we have a lot of fun things to talk about.

I did make sure that the pictures showed up this week on Apple devices. What's interesting is I have an older iPhone, I think it's a 6, or maybe it's a 7 or 8, I don't know. Anyway, the pictures all work there. Even last week's pictures work there, but not on my iPhone 10. So Apple did in fact change the rendering of PDFs, which caused some problems, some incompatibility. I don't know why it was last week but not this week, but we're all good to go this week. So even Mac people can see our Picture of the Week, which is kind of fun. So lots of good stuff. And you can see it, so that verifies it's working.

0:06:04 - Leo Laporte
That's good news. May I take this moment to talk about a longtime sponsor of the show, a company we really like quite a bit? You and I started talking about this way back when, 19 years ago.

0:06:16 - Steve Gibson
The Honey Monkey. Honey, yeah.

0:06:21 - Leo Laporte
And when we did our LastPass event in Boston, about four or five years ago now...

I can't believe it, pre-COVID, we had Bill Cheswick on, who created one of the very first honeypots. Back then it was hard to do. Today it's trivially easy, thanks to the Thinkst Canary. Thinkst Canaries are honeypots. They're well named because, like the canary in the coal mine, they're honeypots that can be deployed in minutes, and they're there to let you know one thing: somebody is inside your network. Whether it's a bad guy from the outside or a malicious insider, somebody's snooping around your network. It can be a fake SSH server. In my case it's a Synology NAS. This Thinkst Canary looks like a Synology NAS in every respect, even down to the MAC address. It's got the login page and everything. Or an IIS server, or Apache, or a Linux box with a Christmas tree of services opened up, or just a few carefully chosen, tasty-morsel services that any bad guy will look at and say, oh, I've got to try that. But the minute they touch that device, it's not really a server.

It's not really SSH, it's a Thinkst Canary, and you're going to get the alerts, just the alerts that matter. No false positives, but real alerts saying somebody's snooping around that Thinkst Canary. You can also make Canarytokens, files that you can sprinkle around, so really you can have unlimited little tripwires all over your network. I have files, XLS spreadsheet files, that say things like "employee information." What hacker is not going to look at that, right? But as soon as they open it (it's not really an Excel file) it lets me know there's somebody snooping around our network. You choose a profile, and there are hundreds to choose from. It could be a SCADA device, it could be almost anything you choose, and it's easy to change too. I mean, you could change it every day. You could have a different device, which is nice if you're tracking the wily hacker, as Bill might say. You register with the hosted console for monitoring and notifications. It supports webhooks, they've got an API, Slack, email, text. Any way you want to be notified, they can notify you. They can even be a green bubble if you want, or a blue bubble, whatever color bubble you want. Then you wait. Attackers who breached your network, malicious insiders, other adversaries: this is the problem, companies normally don't know these people are inside. They are looking around. They're not just sitting there; they're actively exploring devices and files, and they will trigger the Thinkst Canaries and you will be alerted. Thinkst Canary is so smart.

Visit canary.tools. For just $7,500 a year, as an example, you can get five Thinkst Canaries. A big bank might have hundreds; a small operation like ours, just a handful. Five Thinkst Canaries, $7,500 a year, your own hosted console. You get the upgrades, the support, the maintenance.

And here's a deal: if you use the offer code TWIT in the "How did you hear about us?" box, you're going to get 10% off the price for life. There's no risk in this. You can always return your Thinkst Canary for a full refund, a two-month money-back guarantee, 60 days. I have to point out that in the decade now that we've been telling you about Thinkst Canaries, no one has ever asked for a refund. People love these things. Once you have it, you will just say every network needs one or two or three or four or five. Visit canary.tools/twit and enter the code TWIT in the "How did you hear about us?" box. Thinkst Canary, a very important piece of your overall security strategy. Now I believe it's time for a Picture of the Week.

0:10:16 - Steve Gibson
Yeah, so this just caught my attention because lately I've been seeing, as I'm sure our listeners have, so much of this you know, AI, everything.

0:10:29 - Leo Laporte
AI, AI, AI.

0:10:30 - Steve Gibson
Yep, ai everywhere. So the picture shows a couple of young upstarts in a startup venture who are they've got some ideas for some product that they want to create. And one of the one of the things that happens when you're going out to seek financing and funding, you're typically going and giving presentations to like venture capital firms and you know explaining what you're going to do and how you're going to do it, and so PowerPoint presentations are put together and they're called pitch decks because you're making a pitch to whomever you're explaining your ideas to. So we see in this picture two guys facing each other, each behind their own display, one of them saying to the other can you go through all the old pitch decks and replace the word crypto with AI? And, of course, the point being that you know we were just what was it like?

0:11:40 - Leo Laporte
a year ago, Leo. It's just the new catchphrase, yeah.

0:11:43 - Steve Gibson
Yeah, exactly. I mean, time must be accelerating, because it was just so recently that everything was blockchain this and blockchain that, cryptocurrency, crypto this and that. And now that's all, you know, yesterday. What do they say, so last minute, or something? Anyway, now it's AI. And we have a couple of things during this podcast that touch on this too. So, anyway, not a fantastic picture, but I thought it was just so indicative of where we are today. I've been dealing with Bing. I don't know why I've been launching it, but it's been launched a few times in the last week.

0:12:27 - Leo Laporte
Because you use Windows. That's why.

0:12:30 - Steve Gibson
They do everything they can to get.

0:12:31 - Leo Laporte
Bing in your face.

0:12:33 - Steve Gibson
Oh my God, yes. And so it's like, no, I don't want this. And also for me, since I'm not normally using Edge or Bing, it's like, okay, how do I close this? It looks like it takes over the whole UI. It's very much like that old thing when people were being forced to upgrade to Windows 10 against their will, where for a while it said "No, thank you," and then it changed to "Later tonight." So it's like, wait a minute, what happened to "Not at all, never, ever"? It's like, do you want to do it now or do you want to do it in an hour?

Wait, those are my only two options? Anyway, okay. So, as we know, Security Now is primarily an audio podcast, but even those watching (though it remains unclear to me why anyone would) don't have the advantage of looking at my show notes. If anyone were to be reading the notes, they would see that the spelling of the name of this new attack is far more, shall we say, acceptable in polite company than the attack's verbal pronunciation. But this is an audio podcast, and the story of this attack that I very much want to share refers to the attack by name, and that name, which rhymes with Stuxnet, is spelled F-U-X-N-E-T. There's really no other way to pronounce it than just to spit it out, but I'm just going to say F-net for the sake of the children.

0:14:11 - Leo Laporte
Thank you, Steve. Thank you.

0:14:18 - Steve Gibson
Yes. So it's not really an F-bomb, but it's audibly identical, and there's no point in saying it; everybody understands how you would pronounce F-U-X-N-E-T, which is what the Ukrainians named the weapon which they reportedly, and this was confirmed by an independent security company, successfully launched into the heart of Russia. So, with that preamble and explanation, let's look at the very interesting attack that was reported last week by SecurityWeek. Their headline, which also did not shy away from using the attack's name, said "Destructive ICS Malware Fuxnet Used by Ukraine Against Russian Infrastructure." So here's what we learned from what they wrote.

They said that in recent months, a hacker group named Blackjack, which is believed to be affiliated with Ukraine's security services, so you know, as in state-sponsored, has claimed to have launched attacks against several key Russian organizations. The hackers targeted ISPs, utilities, data centers and Russia's military, and allegedly caused significant damage and exfiltrated sensitive information. Last week, Blackjack disclosed the details of an alleged attack aimed at Moscollector, M-O-S-C-O-L-L-E-C-T-O-R, a Moscow-based company responsible for underground infrastructure, meaning things like water, sewage and communication systems. So, quoting the hackers: "Russia's industrial sensor and monitoring infrastructure has been disabled. It includes Russia's network operations center that monitors and controls gas, water, fire alarms and many others, including a vast network of remote sensors and IoT controllers." The hackers claimed to have wiped database, email, internal monitoring and data storage servers. In addition, they claimed to have disabled some 87,000 sensors, including ones associated with airports, subway systems and gas pipelines.

To achieve this, they claim to have used F-net, a malware they described as Stuxnet on steroids, which enabled them to physically destroy sensor equipment. Our longtime listeners, and anybody who's been around IT, will recall that Stuxnet was a previous, also physically destructive malware. I guess we have to call it malware, even though the US apparently participated, or US intelligence services were involved, in its creation. It caused the centrifuges being used in Iran to enrich uranium to overspin and essentially self-destruct. So that's why they're calling this thing Stuxnet on steroids: they worked to cause actual physical damage to hardware, as we'll see in a second. There's a big difference between destroying centrifuges, which have one purpose, which is enriching uranium, and destroying sensors which can prevent gas leaks.

0:18:14 - Leo Laporte
I mean, this is a civilian attack.

0:18:17 - Steve Gibson
Finish this story, but...

0:18:18 - Leo Laporte
I would love to talk at the end of it about how you feel about this.

0:18:22 - Steve Gibson
Good, and I agree with you. So they wrote: "F-net has now started to flood the RS-485/Meter-Bus and is sending random commands to 87,000 embedded control and sensor systems," and they did say, "while carefully excluding hospitals, airports and other civilian targets." Now, they said that, so they share some of our sensitivity to it, but I do question, given that they're also claiming 87,000-some sensors, how they can be that careful about what they've attacked and what they haven't. Anyway, the report goes on to say the hackers' claims are difficult to verify, but the industrial and enterprise IoT cybersecurity firm Claroty was able to conduct an analysis of the F-net malware based on information and code made available by Blackjack.

Claroty pointed out that the actual sensors deployed by Moscollector, which are used to collect physical data such as temperature, were likely not themselves damaged by F-net. Instead, the malware likely targeted roughly 500 sensor gateways. So the idea is that the gateway is a device located remotely somewhere, and it has RS-485 lines running out to a ton of individual sensors. It's the sensor data collector and forwarding device. The malware targeted around 500 of these sensor gateways, which communicate with the sensors over a serial bus such as RS-485 or the Meter-Bus that was mentioned by Blackjack. These gateways are also connected to the internet to be able to transmit data to the company's global monitoring system, so that was probably the means by which the F-net malware got into the sensor gateways. Claroty notes, quote: "If the gateways were indeed damaged, the repairs could be extensive, given that these devices are spread out geographically across Moscow and its suburbs and must be either replaced or their firmware must be individually reflashed."

Claroty's analysis of F-net showed that the malware was likely deployed remotely. Then, once on a device, it would start deleting important files and directories, shutting down remote access services to prevent remote restoration, and deleting routing table information to prevent communication with other devices. F-net would then delete the file system and rewrite the device's flash memory. Once it has corrupted the file system and blocked access to the device, the malware attempts to physically destroy the NAND memory chip and then rewrites the UBI volume to prevent rebooting. In addition, the malware attempts to disrupt the sensors connected to the gateway by flooding their serial communications channels with random data in an effort to overload the serial bus and sensors, essentially performing an internal DoS attack on all the devices the gateway is connected to. And I'll argue that if these are not sensors but actuators, as you said, Leo, this could be causing some true damage. I mean, true infrastructure damage.

0:22:04 - Leo Laporte
Well, they said subway systems, airports, gas pipelines. Yeah, yeah.

0:22:10 - Steve Gibson
Yeah. Claroty explained, quote: "During the malware operation, it will repeatedly write arbitrary data over the Meter-Bus channel. This will prevent the sensors and the sensor gateway from sending and receiving data, rendering the sensor data acquisition useless. Therefore, despite the attackers' claim of physically destroying 87,000 devices," wrote Claroty, "it seems that they actually managed to infect the sensor gateways and were causing widespread disruption by flooding the Meter-Bus channel connecting the sensors to the gateway, similar to network fuzzing the different connected sensor equipment. As a result, it appears only the sensor gateways were bricked, and not the end sensors themselves." So, okay, I particularly appreciated the part about attempting to physically destroy the gateway's NAND memory chip, because it could happen.

As we know, NAND memory is fatigued by writing, because writing and erasing, which needs to be part of writing, is performed by forcing electrons to tunnel through insulation, thus weakening its dielectric properties over time. So the attacking malware is likely writing and erasing the NAND memory over and over, as rapidly as it can. And since such memory is likely embedded into the controller and is probably not field-replaceable, that would necessitate replacing the gateway device, and perhaps all 500 of them spread across Moscow and its suburbs. And even if the NAND memory was not rendered unusable, the level of destruction appears to be quite severe. Wiping stored data and directories and killing the system's boot volume means that those devices probably cannot be remotely repaired. Overall, I'd have to say that this extremely destructive malware was well named, and we live in an extremely and increasingly cyber-dependent world. Everyone listening to this podcast knows how rickety the world's cybersecurity truly is, so I shudder at the idea of any sort of all-out confrontation between superpowers. I don't want to see that.
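To put rough numbers on the wear mechanism just described, here's a back-of-the-envelope sketch in C. The endurance, timing and block-count figures are assumptions chosen purely for illustration, not data from the Claroty report, but they show why a tight write-and-erase loop can plausibly finish off an embedded flash chip in hours rather than years.

```c
#include <stdio.h>

/* Back-of-the-envelope estimate of how quickly a hostile write/erase loop
 * could exhaust NAND flash endurance. Every figure here is an assumption
 * chosen for illustration; none comes from the Claroty/Fuxnet analysis. */
int main(void)
{
    const double pe_endurance   = 10000.0; /* assumed program/erase cycles a block survives */
    const double erase_ms       = 3.0;     /* assumed block erase time, milliseconds */
    const double program_ms     = 1.0;     /* assumed block program time, milliseconds */
    const double blocks_on_chip = 4096.0;  /* assumed number of erase blocks on the chip */

    double cycle_s         = (erase_ms + program_ms) / 1000.0;   /* one program/erase cycle */
    double block_wear_s    = pe_endurance * cycle_s;             /* hammering a single block */
    double chip_wear_hours = blocks_on_chip * block_wear_s / 3600.0;

    printf("one P/E cycle           : %.3f seconds\n", cycle_s);
    printf("wear out a single block : about %.0f seconds\n", block_wear_s);
    printf("wear out the whole chip : about %.0f hours\n", chip_wear_hours);
    return 0;
}
```

With those assumed numbers a single block can be exhausted in well under a minute, and even cycling through every block on the chip takes only a couple of days of continuous abuse, which is why "physically destroying the NAND" is not an exaggeration.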

0:24:38 - Leo Laporte
Do you think there should be a, I don't know, Geneva Convention-style accord between nations about cyber warfare? The problem is, you can do it, but then you're just going to escalate; it's going to go back and forth. Which is why we decided, for instance, not to allow bioweapons. Now, they still get used, but again, the civilized world agrees not to use biological weapons in war.

0:25:08 - Steve Gibson
Well, and the feeling is, of course, that COVID was a lab escape, right?

0:25:14 - Leo Laporte
I mean, there's some evidence, but not a lot; there's no real evidence. That's still a question. Yeah, and it wasn't a very effective warlike attempt, since it killed far more people in China than it did elsewhere. But anyway, clearly a mistake. Yeah, it wasn't intentional.

0:25:33 - Steve Gibson
So what do you think? So, I agree with you. The problem is, it's tempting because it doesn't directly hurt people, right? I mean, right now we're in a cold war. We're constantly on this podcast talking about state-sponsored attacks.

0:25:57 - Leo Laporte
Well, those are attacks, especially infrastructure attacks.

0:26:00 - Steve Gibson
Yes, yes. I mean the whole Colonial Pipeline thing really damaged the US, and it was a true attack. And we just talked about how China told their commercial sector: you need to stop using Windows, you need to stop using this Western computer technology, because the West is able to get into it. So that was the first indication we really had that, as I put it at the time, we're giving as well as we're getting.

Unfortunately, this is all happening. I mean, I wish none of it was happening. But the problem is, security is porous. A nuclear weapon and a bioweapon are unconscionable because they are so tissue-damaging, for lack of a better word. I mean, they're really going to kill people, whereas a network got breached, whoops, you know; it doesn't have the same sort of visceral grip. And unfortunately, here's an example, and I'm glad you brought it up, Leo: Ukraine, sympathetic as we can be for their situation, this was a blunt-edged attack. I mean, this was sewage and water and gas and airports, and they couldn't have controlled what damage was caused. And you mess up water and sewage and you're really hurting actual people who are innocent.

0:28:11 - Leo Laporte
Or subways, or airports, or gas pipelines. I don't know what the answer is. I mean, I'm no fan of Putin, he brought the war upon himself, but hurting civilians, I don't know. This is not a good situation.

0:28:30 - Steve Gibson
It is the world we're in. It's the world we're in.

Yeah, and it is technology we created. I mean, you know, oh, let's have the password be admin admin, because we don't want people calling us and asking what the password is. It's like we've made so many bad decisions, and while we're now making them better, today we have seen how long the tail of inertia is. You could argue it's infinite. We still have Code Red and Nimda out there, sending packets out. Somewhere there's an NT machine just hoping to find something that it can infect.

When is it going to die? I don't know. We have another update on Voyager 1. Apparently, if Voyager is not going to give up on us, we're not going to give up on it. It's powered by radioisotope thermoelectric generators, and those isotopes are continuing to put out less and less heat, and thus Voyager has less and less energy available to it. So it can't go on forever, but it amazes everybody that it has gone as long as it has and it is still going. What equally amazes me is the intrepid group of well-past-retirement engineers who are now endeavoring to patch the code of this ancient machine that's 22 and a half light hours away.

0:30:28 - Leo Laporte
Oh, my God, it's amazing.

0:30:30 - Steve Gibson
It boggles the mind.

0:30:32 - Leo Laporte
It's so amazing.

0:30:34 - Steve Gibson
Just yesterday, on April 22nd, JPL, NASA's Jet Propulsion Laboratory, posted the news under the headline "NASA's Voyager 1 Resumes Sending Engineering Updates to Earth." They wrote: After some inventive sleuthing, the mission team can, for the first time in five months, check the health and status of the most distant human-made object in existence. For the first time since November, NASA's Voyager 1 spacecraft is returning usable data about the health and status of its onboard engineering systems. The next step is to enable the spacecraft to begin returning science data again. The probe and its twin, Voyager 2, are the only spacecraft to ever fly in interstellar space, the space between the stars.

Voyager 1 stopped sending readable science and engineering data back to Earth on November 14th, 2023, even though mission controllers could tell the spacecraft was still receiving their commands and otherwise operating normally. In March, so last month, the Voyager engineering team at NASA's Jet Propulsion Laboratory in Southern California confirmed that the issue was tied to one of the spacecraft's three onboard computers, called the Flight Data Subsystem, or FDS. The FDS is responsible for packaging the science and engineering data before it's sent to Earth. The team discovered that a single chip responsible for storing a portion of the FDS's memory, including some of the FDS computer's software code, is no longer working. The loss of that code rendered the science and engineering data unusable. Unable to repair the chip, you know, 22 and a half light hours away, the team decided to place the affected code elsewhere. They're relocating the code, Leo, at this distance, I know, on a probe built in the early '70s and launched in '77.

Cool. It's insane. But they said no single location is large enough to hold the section of code in its entirety, so they're having to fragment it. They devised a plan to divide the affected code into sections and store those sections in different places in the FDS. To make this plan work, they also needed to adjust those code sections to ensure, for example, that they all still function as a whole. Any references to the location of that code in other parts of the FDS memory need to be updated as well. So they're relocating, and then patching to relink the now fragmented code sections so that they jump to each other. It's dynamic linking, in a way that was never designed or intended. They wrote: The team started by singling out the code responsible for packaging the spacecraft's engineering data. They sent it to its new location in the FDS memory on April 18th.

A radio signal takes about 22 and a half hours to reach Voyager 1, which is now over 15 billion, with a B, miles from Earth, and another 22 and a half hours, hours, not days, for a signal to come back to Earth. When the mission flight team heard back from the spacecraft on April 20th, they saw that the modification worked. For the first time in five months, they have been able to check the health and status of the spacecraft. During the coming weeks, the team will relocate and adjust the other affected portions of the FDS software. These include the portions that will start returning science data, returning the spacecraft to doing what it was designed to do, which is using its various sensor suites and sending back what it's seeing and finding out in interstellar space, which, as I mentioned previously, has surprised the cosmologists because their models were wrong. So Voyager 1 is saying, ah, not so fast there. Nice theory you've got, but it's not matching the facts.
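To make the relocation-and-relink idea concrete, here's a toy sketch in C of the bookkeeping involved: fragments of code that used to live in one contiguous region are assigned new homes, and every stored reference to an old address gets rewritten to point at the corresponding new one. This is purely illustrative; the addresses, fragment sizes and data structures are invented and have nothing to do with the actual FDS instruction set or memory map.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy illustration of relocating fragments of code and patching the
 * addresses that refer to them. Purely conceptual; real FDS code,
 * addresses and formats are nothing like this. */

typedef struct {
    uint16_t old_start;   /* where the fragment used to live */
    uint16_t new_start;   /* where it has been moved to */
    uint16_t length;      /* words in the fragment */
} Fragment;

/* Map an address in the old layout to its address in the new layout. */
static uint16_t relocate(uint16_t addr, const Fragment *frags, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (addr >= frags[i].old_start &&
            addr <  frags[i].old_start + frags[i].length) {
            return frags[i].new_start + (addr - frags[i].old_start);
        }
    }
    return addr;  /* address was not part of the moved code */
}

int main(void) {
    /* Suppose the failed region once held one 256-word block starting at
     * 0x2000, now split into two fragments stuffed into spare corners. */
    Fragment frags[] = {
        { 0x2000, 0x0A40, 128 },
        { 0x2080, 0x1F00, 128 },
    };

    /* Jump targets recorded elsewhere in memory that still point into
     * the old region and must be patched. */
    uint16_t stored_refs[] = { 0x2004, 0x20A2, 0x1234 };

    for (size_t i = 0; i < sizeof stored_refs / sizeof stored_refs[0]; i++) {
        uint16_t patched = relocate(stored_refs[i], frags, 2);
        printf("reference 0x%04X -> 0x%04X\n", stored_refs[i], patched);
    }
    return 0;
}
```

The real job is far harder, since the engineers also have to make sure each fragment ends by jumping to the start of the next one, but this address-translation step is the heart of what "relocating and relinking" means.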

0:35:42 - Leo Laporte
Wow. Yay, V'Ger! Yeah, and yay those brilliant scientists who are keeping her alive.

0:35:54 - Steve Gibson
Oh, and Leo, Lorrie and I did watch It's Quieter in the Twilight. And what was interesting was that this announcement, which was picked up in a few other outlets, showed a photo of the event where the team was gathered around their conference table. I recognized them from the documentary.

0:36:13 - Leo Laporte
It's the same people. Yeah, exactly. Since 1974. They're all still there.

0:36:18 - Steve Gibson
In fact, some of them don't look like they've changed their clothes, but that's what you get with old JPL engineers.

0:36:25 - Leo Laporte
I just love it. It's such a great, wonderful story.

0:36:27 - Steve Gibson
It really is. Let's take a break. I'm going to catch my breath, and then we're going to talk about changes coming to Android 15 and Thunderbird. All right, as we continue with the best show in the podcast universe, 22 light hours ahead of everyone else.

0:36:48 - Leo Laporte
Our show today, brought to you by Lookout. We love Lookout. Every company today is a data company. I mean, it's all about data, isn't it? And that means every company is at risk. That's the bad news.

Cybercriminals, breaches, leaks: these are the new norm, and cybercriminals are growing more sophisticated by the minute. At a time when boundaries no longer exist, what it means for your data to be secure has fundamentally changed. Enter Lookout. From the first phishing text to the final data grab, Lookout stops modern breaches as swiftly as they unfold, whether on a device, in the cloud, across networks or working remotely at the local coffee shop. Lookout gives you clear visibility into all your data, at rest and in motion. You'll monitor, assess and protect without sacrificing productivity for security. With a single unified cloud platform, Lookout simplifies and strengthens, reimagining security for the world we're in today. Visit lookout.com today to learn how to safeguard your data, secure hybrid work and reduce IT complexity. That's lookout.com. Thank you, Lookout, for supporting the great work Steve's doing here at Security Now.

0:38:10 - Steve Gibson
Okay, so there's not a lot of clear information about this yet, but Google is working on a new feature for Android which is interesting. They're going to start watching apps' behavior. Android will place under quarantine any applications that might sneak past the Play Store's screening, only to then begin exhibiting signs of behavior that it deems to be malicious. The apps will reportedly have all their activities stopped, all of their windows hidden, and notifications from the quarantined apps will no longer be shown. They also won't be able to offer any API-level services to other apps. The reports are that Google began working on this feature during Android 14's development last year and that the feature is expected to finally appear in the forthcoming Android 15, but we don't have that confirmed for sure. I wasn't able to find any dialogue or conjecture about why the apps aren't just removed, and they do still appear as an app installed on the phone. They're not hiding it from the user. They're just saying, no, you bad app, we don't like what you've been doing. Maybe it reports back to the Play Store, and then Google takes a closer look at the app, which is in the Play Store, which, of course, is how the user got it, and then says, oh yeah, we did miss this one. And at that point it gets yanked from the Play Store and yanked from all the Android devices. So it could essentially be functioning as a remote sensor package. Anyway, I'm sure we'll learn more once it becomes official, hopefully in this next Android 15.

Also, this summer Thunderbird will be acquiring support for Microsoft Exchange email for the first time ever. It'll only be email at first; the other Exchange features of calendar and contacts are expected to follow at some later date, although Mozilla is not saying for sure. Now, I happen to be a Thunderbird user. I was finally forced to relinquish the use of my beloved Eudora email client once I began receiving email containing extended non-ASCII character sets that Eudora was unable to manage. I got these weird capital-A-with-little-circles-above-them characters in my email instead of line separators, which was annoying. At the same time, I have zero interest in Exchange. GRC runs a simple and straightforward instance of a mail server called hMailServer, which handles traditional POP, IMAP and SMTP and does it effortlessly with ample features. But I know that Exchange is a big deal, and obviously Mozilla feels that for Thunderbird to stay relevant it probably needs to add support for Exchange. In any event, this is a rather massive coding effort.

In Mozilla's reporting of this, they mentioned that it had been 20 years, because, you know, email is kind of done, 20 years since any code in Thunderbird dealing with email had been touched. They've just been, you know, screwing around with the user interface, and during those 20 years a lot of, as they put it, institutional knowledge about that code had drained away. So they've decided that they're going to recode in Rust. Rust is their chosen implementation language, and they chose it for all the usual reasons. They cited memory safety: they said Thunderbird takes input from anyone who sends an email, so we need to be diligent about keeping security bugs away. Performance: Rust runs as native code with all the associated performance benefits. And modularity and ecosystem: they said the built-in modularity of Rust gives us access to a large ecosystem where there are already a lot of people doing things related to email which we could benefit from.

So anyway, for what it's worth, Thunderbird is, you know, from Mozilla. Is it multi-platform, Leo, do you know? Is Thunderbird Windows-only, or Mac and Linux too? I don't know either way. Anyway, China: the Chinese government has ordered Apple to remove four Western apps from the Chinese version of the Apple App Store. Those are Meta's new social network Threads, which is now gone, Signal, Telegram and WhatsApp, all removed from the Chinese App Store. China stated that they have national security concerns about those four. And, as we've seen, and as I fear we'll be seeing shortly within the EU, what countries request, countries receive; technology is ultimately unable to stand up to legislation, and this is going to cause a lot of trouble, as I mentioned, in the EU. We'll be talking about that here at the end of the podcast.

0:44:09 - Leo Laporte
Yeah, and I think the Chinese government's removing it for the same reason the EU wants to remove it.

0:44:13 - Steve Gibson
They don't like end-to-end encryption. Yes, exactly.

0:44:17 - Leo Laporte
Threads is something else, but Signal and Telegram and WhatsApp, that's all end-to-end encryption. By the way, to answer your question (I wasn't here, I was down the hall): Thunderbird is Mac, Windows, Linux. It's completely open source and everywhere.

0:44:30 - Steve Gibson
Yeah, very nice program. In that case, it will have access to Exchange Server, which may allow it to move into a corporate environment, which is probably what they're seeking.

0:44:40 - Leo Laporte
Yes, yeah, that would be great. Yeah, well, we'll see if Microsoft does it. Oh, no, they're going to do it, oh cool.

0:44:48 - Steve Gibson
Oh yeah, I mean Mozilla.

0:44:52 - Leo Laporte
I just wish they'd kill Exchange server, but okay.

0:44:55 - Steve Gibson
And I mean, just get out of the Exchange Server business.

0:44:58 - Leo Laporte
I wish Microsoft would kill Exchange. That's been a problem since forever.

0:45:04 - Steve Gibson
Since it was created, exactly. And do you think that China's move is in response to TikTok and what's happening here in the US with that?

0:45:14 - Leo Laporte
Well, we had a discussion during the break about that. The Times says it's because nasty things were said on those platforms about Xi Jinping, which is possible; there's no corroboration of that, and Apple says no. With Threads, maybe it's because Threads doesn't have any encryption, it's just a social network. I think Threads is being killed probably because of TikTok. Andy, I think, pointed out that it happened immediately after the TikTok ban was approved in the House.

It's likely, by the way, that this time it will be approved in the Senate, because it's part of a foreign aid package. So get ready to say goodbye to TikTok.

0:46:03 - Steve Gibson
Wow, Leo, that'll be an event, won't it?

0:46:06 - Leo Laporte
I think the courts will block it. I hope they will, but I don't know. It's a very weird thing.

0:46:11 - Steve Gibson
They have a year and a half to do it. Well, and I mean, here we were talking about a cold war, and this is, you know, an economic cold war. Absolutely. Yeah, right.

And China, understandably, is uncomfortable about Western-based apps using encryption that they're unable to compromise, right? So I mean, I get it. And so it's sort of like we lived through this brief period where there was global encryption and privacy and everybody had apps that everybody could use, and then barriers began getting erected. So sorry, if you're Chinese, you've got to use, you know, China chat.

0:47:04 - Leo Laporte
Well, and the numbers for these particular apps in China are pretty low. We're talking hundreds of thousands of users, not millions or billions.

0:47:09 - Steve Gibson
Okay, so not a huge actual impact.

0:47:11 - Leo Laporte
I think it's an easy thing for them to do, yeah.

0:47:14 - Steve Gibson
Okay, so this was interesting. I'll just jump right in by sharing the posting to the Gentoo mailing list. This was posted by a longstanding, since 2010, so 14 years of involvement, and very active Gentoo developer and contributor. He wrote, "given the recent spread of the AI bubble," and he has AI in quotes, like, everywhere, so he's obviously immediately exposed himself as not being a fan. Gentoo, you should understand, is the ultimate graybeard Linux.

0:47:48 - Leo Laporte
That's all you need to know. That totally explains it.

0:47:51 - Steve Gibson
Yes. "Given the recent spread of the AI bubble, I think we really need to look into formally addressing the related concerns. In my opinion, at this point, the only reasonable course of action would be to safely ban AI-backed contribution entirely. In other words, explicitly forbid people from using ChatGPT, Bard, GitHub Copilot and so on to create ebuilds, code, documentation, messages, bug reports and so on for use in Gentoo. Just to be clear, I'm talking about our original content. We can't do much about upstream projects using it." Then he says here's the rationale.

One, copyright concerns. "At this point, the copyright situation around generated content is still unclear. What's pretty clear is that pretty much all LLMs," you know, large language models, "are trained on huge corpora of copyrighted material, and the fancy AI companies don't care about copyright violations. What this means is that there's a good risk that these tools would yield stuff we cannot legally use." Number two, quality concerns. "LLMs are really great at generating plausible-looking BS," and he didn't actually say BS, but I changed it for the podcast. "I suppose they can provide good assistance if you are careful enough, but we can't really rely on all our contributors being aware of the risks." Then there's ethical concerns.

Number three: "As pointed out above, the 'AI' corporations care about neither copyright nor people. The AI bubble is causing huge energy waste. It's giving a great excuse for layoffs and increasing exploitation of IT workers. It is driving the further," and here I felt I had to use the word, because it has now become a common word, I think I've heard it on the TWiT network, so we allow this, "the enshittification of the internet. It is empowering all kinds of spam and scam." And that is the case. "Gentoo has always stood," he concludes, "as something different, something that worked for people for whom mainstream distros were lacking. I think adding made-by-real-people to the list of our advantages would be a good thing, but we need to have policies in place to make sure that AI-generated crap," and again, not the word he chose, "doesn't flow in."

0:51:12 - Leo Laporte
I like this guy. He's right. I think that's fair.

0:51:16 - Steve Gibson
Did you?

0:51:16 - Leo Laporte
see the study from the University of Illinois at Urbana-Champaign? They used GPT-4, the latest version of OpenAI's model. They gave it the CVE database, that's it, nothing more than the description in the CVE, and it was able to successfully attack 87% of those vulnerabilities. It was able to craft an effective attack based merely on the CVE description. Wow. Yeah, I mean, I think he's probably right, but I don't know how you enforce this, and that's exactly the problem.

0:51:58 - Steve Gibson
Yeah, in his posting he had a link. He referred to, we'll call it a crap storm, over on GitHub, and I followed the link because I was curious. There is a problem underway where AI-generated content, which looks really reasonable but doesn't actually manage to get around to saying anything, is becoming a problem over on GitHub.

0:52:37 - Leo Laporte
Yeah.

0:52:38 - Steve Gibson
So, anyway, in order to share that, as we saw, I had to clean up the language in his posting, since he clearly doesn't think much of AI-generated code. And, as I said, there have been some signs over on GitHub, which he referred to, of descriptions appearing to be purely AI-generated. They're not high quality. And I suppose we should not be surprised, Leo, that there are people, maybe we'll call them script kiddies, who are probably incapable of coding from scratch for themselves. So why wouldn't they jump onto large language model systems which would allow them to feel as though they're contributing? But are they really contributing?

0:53:29 - Leo Laporte
Now look, let's face it, humans are just as capable of introducing bugs into code as AIs are, and more often maliciously; AIs aren't natively malicious. The other thing I would say is there's a lot of AI-generated prose on GitHub because English is often not the first language of the people doing the coding. A lot of GitHub contributions are by non-English speakers, and I think that's more likely the reason you'll see kind of AI-like prose on there, because they don't speak English that well, or don't write it at all, and so they're using, you know, ChatGPT, for instance, to generate the text. Honestly, I've used Copilot, I have my own custom GPT for Lisp, and the code it generates is indistinguishable from human code, probably because it is, at some point, from human code, right?

0:54:25 - Steve Gibson
I don't know how you're going to stop it.

0:54:28 - Leo Laporte
It doesn't have a big red flag that says an AI generated this.

0:54:32 - Steve Gibson
Right, and, as we've noted, the genie is out of the bottle already. So, yeah, we're definitely in for some interesting times. Okay, we've got a bunch of feedback that I found interesting, and that I thought our listeners would too. Let's take another break, and then we will get into, what was this one? Oh, we have a listener whose auto was spying on him, and he's absolutely sure it never had permission, and we have a picture of the report that it generated.

0:55:09 - Leo Laporte
We here at Nissan see that you've been using your vehicle for lovemaking and we want you to knock it off. Well, we'll find out more about that in just a second, but first a word. Apparently, you're doing it wrong.

You're doing it wrong. We have some tips we'd like to share. First, a word from Kolide. We love Kolide. You've probably heard us talk about Kolide many times on the show. I personally think Kolide's model of enlisting your users as part of your security team is the only way to travel. Maybe you just heard the news that Kolide was acquired by 1Password. That is good news, I really think so. Both companies are leading the way in creating security solutions with a user-first focus.

For over a year, Kolide Device Trust has helped companies that use Okta to ensure that only known and secure devices can access their data. You know, Okta verifies the human, authenticates the human, but what's authenticating the hardware they're bringing into your network with them? Well, Kolide does. It works hand in hand with Okta, and they're still doing that. They're just doing it now with the help and resources of 1Password. I think it's a match made in heaven.

If you've got Okta and you've been meaning to check out Kolide, this is the time. Oh, and by the way, Kolide's easy to get started with because it comes with a library of pre-built device posture checks, so you can get started right away. But it's also easy to write your own custom checks for just about anything you can think of. So you start with a high level of security for a broad variety of software and hardware, and then you can add some custom stuff that's specific to your environment. Plus, and I love this, you can use Kolide on devices without MDM. That means your whole Linux fleet, that means contractor devices, and, yes, every BYOD phone and laptop in your company. So now that Kolide is part of 1Password, it's just going to get better. I want you to check it out. Go to kolide.com/securitynow. There's a great demo; you can learn more and watch the demo today. That's K-O-L-I-D-E, kolide.com/securitynow. Congratulations, Kolide, that's a great partnership, and I think you're family now. Steve?

0:57:32 - Steve Gibson
Let's close the loop. So we have a note from a guy who is no slouch. He's a self-described user of ShieldsUP! and SpinRite and an avid listener of Security Now. He's also an information security practitioner and, I think he said, a computer geek. Yeah, he does. So he said: Hi, Steve, I apologize for sending to this email, it probably came through Sue or Greg, he says, as I couldn't find a different email for contact information. And yes, that's by design. But okay, he said: Anyway, longtime follower of ShieldsUP! and SpinRite and an avid listener of Security Now.

My full-time gig is as an info security practitioner and computer geek. We have a couple of Hyundais in the family, and I purchased one last fall. I use the Hyundai Blue Link app on my phone so I can make sure I locked my doors and get maintenance reminders. I made a point not to opt in for the, quote, he has it in quotes, "driver discount," and, as a privacy-cautious person, I declined sharing data wherever possible. But after the story in the New York Times regarding carmakers sharing data, I contacted Verisk and LexisNexis to see what they had on me. LexisNexis had nothing other than the vehicles I have owned in the past, but Verisk had a lot.

I have attached a page of the report. It includes driving dates, minutes, day and night, acceleration events and braking events. The only thing missing is the actual speeds I was going, or if I was ever speeding. What bothers me most about this is that I have no way to challenge the accuracy. For events that are not illegal, I can still be penalized. Braking hard and accelerating fast should not be safety concerns without context, and today's smarter cars are still imperfect. My adaptive cruise control, he has in parens, radar, will still brake hard at times when it shouldn't, and I will get penalized by that data. My car is also a turbo, and if I accelerate for fun or safety, that too can be a penalty. And if I happen to drive in Texas, where there are highways with an 85-mile-per-hour speed limit, I would be downrated for that legal behavior. My family tried the safe-driving BT dongles from another insurer years ago, but the app had so many false positives, for driving over the speed limit, he says the posted speed limit doesn't agree with the app, and for hard braking and accelerating, that we decided it wasn't worth our time or the privacy concerns. My wife and I are close to Leo's age, and she drives like a grandmother, but her scores were no better than mine.

I have attached a picture of the document I got from Verisk, he says, name and VIN removed, to give you an idea of what is reported from my car without my consent. I've contacted Hyundai and told them I do not and did not consent to them sharing my data with Verisk. After a few back-and-forths, I got this reply on April 12th. Quote: Thank you for contacting Hyundai Customer Care about your concerns. As a confirmation, we've been notified today that the driver's score feature and all data-collecting software has been permanently disabled. We do care, as always. If you ever need additional assistance, you can reach us either by email or phone. Case number dot dot dot. So, he said, I will request another report from Verisk in the future to validate this reply from Hyundai. Keep up the good work. I thought you would like to see the data and hear from someone who is 100% certain they never opted in. All the best, Andrew.

And sure enough, in the show notes we've got the page with a report covering the period September 26th of last year through March 25th of this year, so through late last month, showing things like the number of trips, vehicle ignition on to ignition off, which was 242 instances.

Speeding events, where the vehicle speed is greater than 80 miles per hour, shows N/A.

Hard braking events, where they say the change in speed is less than, because it's braking, negative 9.5 KPH per second, is 24. So during that period of time, what the car regarded as a hard braking event occurred 24 times. Rapid acceleration events, change in speed greater than 9.5 KPH per second, is 26. Daytime driving minutes, between the hours of 5 am and 11 pm, 6,223. Nighttime minutes, actually very few, between 11 pm and 5 am, just 25 minutes. Miles driven, 5,167.6 miles during this period. And then an itemized daily driving log showing the date, the number of trips taken that day, the number of speeding events, the number of hard braking events, rapid acceleration events, and driving minutes, both daytime and nighttime. So yes, just to close the loop on this, as we first learned from the New York Times reporting, both Verisk and LexisNexis were selling data to insurers, and, as a consequence, those insurers were relying on that data to set insurance premium rates.
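Just to make those column definitions concrete, here's a small sketch in C that counts events the way the report's headings describe them: a hard-braking event when speed drops by more than 9.5 km/h in one second, and a rapid-acceleration event when it rises by more than that. The sample speeds and the exact counting rules are assumptions for illustration; Verisk's actual methodology isn't published on the report page.

```c
#include <stdio.h>
#include <stddef.h>

/* Classify driving events from one-second speed samples (km/h), using the
 * thresholds shown on the Verisk report page: a change of more than
 * 9.5 km/h per second. Sample data and counting rules are illustrative
 * assumptions, not Verisk's actual algorithm. */
int main(void) {
    double speed_kph[] = { 0, 12, 25, 36, 45, 44, 30, 28, 27, 15, 4, 0 };
    size_t n = sizeof speed_kph / sizeof speed_kph[0];

    int hard_braking = 0, rapid_accel = 0;
    for (size_t i = 1; i < n; i++) {
        double delta = speed_kph[i] - speed_kph[i - 1];  /* km/h change over 1 second */
        if (delta < -9.5) hard_braking++;
        else if (delta > 9.5) rapid_accel++;
    }

    printf("hard braking events:       %d\n", hard_braking);
    printf("rapid acceleration events: %d\n", rapid_accel);
    return 0;
}
```

Which also illustrates Andrew's complaint: a single radar-triggered panic stop and a spirited merge onto a highway each add one tick to those counters, with no context at all about why it happened.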

And look what it says at the bottom; that's all happening.

1:04:23 - Leo Laporte
"This report may display driving data associated with other individuals that operated the insured's vehicle." So my guess is this is a report for an insurance company, right?

1:04:36 - Steve Gibson
Right.

1:04:37 - Leo Laporte
Right, whether he agreed to it or not. It may be that he can turn off some things. I noticed the speeding events column is N/A all the way through. Either he's a really careful driver, or they're not recording that, which may well be something he didn't agree to, right?

1:04:53 - Steve Gibson
Yeah.

1:04:54 - Leo Laporte
So anyway, I know my BMW records that, because I have it in my app, and my Mustang used to give me a report card after every trip. And I mean, compared to the way I used to drive when I was younger...

1:05:09 - Steve Gibson
Yeah, I would be happy to have my insurance company privy to the fact that I drive about three miles a day at 60 miles an hour, surrounded by other traffic. I mean, it's just, you know.

1:05:23 - Leo Laporte
Here's my driving performance for the month of March. And in fact, Lori added me to her car insurance and her rate went down. Yes, exactly, because you're safe. Right. Yeah, this is more because it's an EV; you want to know a little bit about hard braking and hard acceleration and stuff, because it relates to how you're treating your battery, right. Yeah, right. So I think that's great, you know.

1:05:50 - Steve Gibson
But yeah, I understand why he doesn't want Hyundai to record it. Well, and I would argue that a consumer who says, no, I don't want to be watched and spied on and reported on, that ought to be a privacy right that is available.

1:06:03 - Leo Laporte
Yeah, I'd like to see the fine print in the rest of the contract. Those are long, those contracts, you know.

1:06:08 - Steve Gibson
They go on and on, yeah.

1:06:10 - Leo Laporte
On and on.

1:06:11 - Steve Gibson
Yeah. So Lon Seidman said: I'm listening to the latest Security Now episode. Definitely agree that freezing one's credit needs to be the default position these days. One question, though: most of these credit agencies rely for authentication on the types of personal information that typically get stolen in a data breach. Certainly, a bad actor will go for the lowest-hanging fruit and perhaps move on from a frozen account, but if there's a big whale out there, they may go through the process of unlocking that person's credit and then stealing their money. What kind of authentication changes do you think are needed? Okay, well, that's an interesting question.

Since I froze my credit reporting, I've only had one occasion to temporarily unfreeze it, which was when I decided to switch to using an Amazon credit card for the additional purchase benefits it brought, since I'm a heavy Amazon user. And that's when I discovered, to my delight, that it was also possible to specify an automatic refreeze on a timer, to prevent the thaw from being inadvertently permanent. Since I had very carefully recorded and stored the authentication from when I previously froze my credit, I didn't need to take any account recovery measures. So I can't speak from experience, but one thing that does occur to me is that strong measures are available. The reporting agencies, for example, will have our current home address, so they could use the postal system to send an authentication code via old-school paper mail. That would be quite difficult, if not effectively impossible, for a criminal located in some hostile foreign country to obtain. So there certainly are strong authentication measures that could be employed if needed.

Again, I don't have any experience with saying, whoops, I forgot what you told me not to forget when I froze my credit, so, but it's me, hi, it's really me, unfreeze me, please. But Lon's right that so much information is in the report, or in the data which is being leaked these days, for example in a massive AT&T leak, that something over and above that needs to be used.

1:08:41 - Leo Laporte
They gave me a long PIN. I mean like a really long PIN.

1:08:45 - Steve Gibson
Yeah, I had that too. I'm not, you know, I wrote it down and saved it. But then what happens if you say, oh, it's me? Yeah.

1:08:54 - Leo Laporte
Yeah, they say if you, you know, can't log in, if you forget it, call us. Which means you could easily socially engineer a customer service rep. Because, let's face it, the credit reporting agencies don't want you to have a credit freeze. Correct. That's how they make their money, selling your information. So I suspect it's pretty easy to get it turned off, I would guess, by a third party.

1:09:19 - Steve Gibson
Um, Eric Berry. He tweeted: what was that credit link from the podcast? I tried the address you gave out and got page not found. And I just tried the link and it works. So grc.sc/credit. That bounces you over to the Investopedia site, and I've just verified, as I said, that it is still working. And for what it's worth, on page nine of the show notes is the Investopedia link all the way spelled out. So if something about your computer doesn't follow, you know, HTTP 301 redirects, then the link is there, and at Investopedia it's how to freeze and unfreeze your credit.

1:10:08 - Leo Laporte
So you could probably also just Google that. And I should also point out the FTC has a really good page about credit freezes, fraud alerts, what they are, how they work and so forth. So you could also just Google FTC and credit freeze and they have a lot of information on there.

1:10:24 - Steve Gibson
Does that provide links to the actual freezing pages at the bureaus? Yes. Because that's why I chose it. Oh good. Yep, absolutely. Good, good, good.

1:10:34 - Leo Laporte
Okay, they point you to a website run by the FTC called identitytheft.gov, and they're going to give you those three. Now, I should point out there's more than three. These are the three big ones, but when I did a credit freeze there, I think I did five or six of them. There are others, and it's probably not a bad idea to seek them all out, but obviously these are the three the FTC mentions, as well as Investopedia.

1:10:56 - Steve Gibson
Yeah, right. So someone who tweets from the handle, or the moniker, The Monster, he said: @SGgrc, the race condition isn't solved solely with the exchange counter ownership protocol, unless the owner immediately rereads the owned memory region to be sure it wasn't altered before it got ownership. Okay, now, I don't think that's correct. There are aspects of computer science that are absolutely abstract and purely conceptual, and I suppose that's one of the reasons I'm so drawn to it.

1:11:38 - Leo Laporte
Yeah, me too. I think you are too. Yes, exactly.

1:11:42 - Steve Gibson
One of the time-honored masters of this craft is Donald Knuth, and the title of his masterwork is a multi-volume...

1:11:55 - Leo Laporte
I have all three, although there supposedly are five.

1:11:59 - Steve Gibson
He's working on them. He calls the other ones fascicles, and I have those as well. Oh yeah.

1:12:08 - Leo Laporte
How do you get those? Oh yeah, they're available. I wanted the full bookshelf, but I could only find three. Not that I've read them.

1:12:15 - Steve Gibson
Three are in that original nice classic binding. Beautiful. And then he has a set of what he calls fascicles, which are the other two. Anyway, the masterwork is titled The Art of Computer Programming, and saying that is not hyperbole. There are aspects of computer programming that can be true art, and his work is full of lovely constructions, similar to the use of a single exchange instruction being used to manage inter-thread synchronization. In this case, as I tried to carefully explain last week, the whole point of using a single exchange instruction is that it is not necessary to reread anything, because the act of attempting to acquire the ownership variable acquires it only if it wasn't previously owned by someone else, while simultaneously (and here simultaneity is the point and the requirement) also returning information about whether the variable was or was not previously owned by any other thread. So if anyone wishes to give their brain a bit more exercise, think about the fact that in an environment where individual threads of execution may be preempted at any instant, nothing conclusive can ever be determined by reading the present state of the object ownership variable, since the reading thread might be preempted immediately following that reading, and during its preemption the owner variable might change. The only thing that anyone simply reading that variable might learn is that the object being managed was or was not owned at the instant in time of that reading. While that might be of some interest, it's not interesting to anyone who wishes to obtain ownership, since that information was already obsolete the instant it was obtained. So that's what is so uniquely cool about this use of an exchange instruction, which both acquires ownership only if it isn't owned and returns the previous state, meaning if it wasn't previously owned, now the thread that asked owns it. And it's as simple as a single instruction, which is just, you know, conceptually so cool.
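
For anyone who wants to see that idea on the page, here's a minimal sketch in C using C11's stdatomic.h. The names (object_owner, try_acquire) are invented purely for illustration; on x86, an atomic exchange like this typically compiles down to the single XCHG instruction being described, though that mapping is the compiler's business, not something the sketch can guarantee.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* 0 = unowned, 1 = owned.  The exchange writes 1 and hands back what
 * was there before, as one indivisible operation.                    */
static atomic_int object_owner = 0;

/* Returns true only if WE were the thread that flipped it 0 -> 1.
 * No separate read is needed, or even meaningful: the prior state
 * comes back as part of the same indivisible exchange.               */
static bool try_acquire(void)
{
    return atomic_exchange(&object_owner, 1) == 0;
}

static void release(void)
{
    atomic_store(&object_owner, 0);   /* hand ownership back */
}

int main(void)
{
    if (try_acquire()) {
        puts("acquired: it was previously unowned");
        release();
    } else {
        puts("someone else already owned it");
    }
    return 0;
}
```

The point of the sketch is the return value: acquisition and the answer to "was it already owned?" arrive together, so there's no window in which a preempting thread can invalidate what was just learned.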

Um, Java Mountess said, regarding episodes 970 and 969 with push-button hardware config options: My first thought is of the 2017 Saudi chemical plant attacked with the Triton malware. The admins working on the ICS controllers deliberately left an admin permission key in the controllers instead of walking the 10 minutes required to insert the key every time a configuration needed changing. I don't blame them. That's 10 minutes, man. As a result, the attackers were able to access the IT systems and then the OT systems, because the key was always left in and in admin mode. He says lazy people will always work around inconvenient, very secure systems. Like me. And he finishes with: to 999 and beyond, like Voyager. Yes, this podcast is going into interstellar space. 999 and beyond, to boldly go where no podcast has gone before.

That's not true, though, because I keep hearing TWiT talking about... oh yeah. Anyway, I thought he made a good point. For example, the push-button dangerous config change enabler should work on a change from not pushed to pushed, rather than on whether the button is depressed. The electrical engineers among us will be familiar with the concept of edge triggered versus level triggered. If it's not done that way, people will simply depress the button once, then do something like wedge a toothpick into the button in order to keep it depressed.
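
To make the edge-versus-level distinction concrete, here's a small hedged sketch in C. The button read is simulated with a hard-coded sample sequence, since any real controller's GPIO or register interface would obviously differ; the point is only that permission is granted on the not-pressed to pressed transition, so the toothpick trick buys nothing after that first instant.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simulated button samples standing in for a real hardware read:
 * released, then pressed and held (the toothpick), then released,
 * then pressed again.                                               */
static const bool samples[] = { false, true, true, true, false, true };
static int sample_idx = 0;

static bool read_config_button(void)   /* hypothetical hardware read */
{
    return samples[sample_idx];
}

/* Edge-triggered: permission is granted only on the not-pressed ->
 * pressed transition, so holding or wedging the button down does
 * nothing after that first instant.                                  */
static bool config_change_permitted(void)
{
    static bool was_pressed = false;
    bool is_pressed  = read_config_button();
    bool rising_edge = is_pressed && !was_pressed;
    was_pressed = is_pressed;
    return rising_edge;
}

int main(void)
{
    for (sample_idx = 0; sample_idx < 6; sample_idx++)
        printf("sample %d: %s\n", sample_idx,
               config_change_permitted() ? "config change allowed"
                                         : "denied");
    return 0;
}
```

With these made-up samples, the change window opens only at samples 1 and 5, even though the button stays physically depressed through samples 2 and 3.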

My feeling is that the ability to bypass well-designed and well-intentioned security does not matter at all. There's a huge gulf separating secure by design and insecure by design, and it's absolutely worth making things secure by design, even if those features can be bypassed. The issue is not whether they can be bypassed, but whether they are there in the first place to perhaps be bypassed. If someone goes to the effort to bypass a deliberately designed security measure, then the consequences of doing that are 100% on them.

It's a matter of transferring responsibility. If something is insecure by design, then it's the designers who are at fault for designing the system insecurely. They may have assumed that someone would come along and make their insecure system secure, but we've witnessed far too many instances where that never happened. So the entire world's overall net security will be increased if systems just start out being secure and are then later, in some instances, forced against their will to operate insecurely. And if someone's manager learns that the reason the enterprise's entire network was taken over, all their crown jewels stolen and sent to a hostile foreign power and then all their servers encrypted, is because someone in IT wedged a toothpick into a button to keep it held down for their own personal convenience (of course they did), well...

You won't be asking that manager for a recommendation on the resume that will soon need updating, right? Wow. It'll be your fault and no one else's. David Sostian tweeted: Hi, Mr. Gibson, long-time listener (very formal) and Spinrite owner. Actually, Ant used to call me Mr. Gibson, but nobody else does. He said: I was listening to podcast 955, and I meant to message you about the Italian company Actalis, A-C-T-A-L-I-S, but life has a tendency to get in the way. They happen to be one of the few remaining companies that issue free S/MIME certificates. I've been using them for years to secure all my email. All the best, David. So I just wanted to pass that on. David, thank you.

1:20:26 - Leo Laporte
Who is that again? The Italian company Actalis, A-C-T-A-L-I-S? All right. Because I've been paying for my S/MIMEs.

1:20:35 - Steve Gibson
Actalis, A-C-T-A-L-I-S, they're issuing free S/MIME certs.

1:20:39 - Leo Laporte
I mean, I use PGP most of the time, which is free, but that's cool, right? S/MIME is a lot easier for some people, so that's cool.

1:20:46 - Steve Gibson
Meanwhile, The Felonious Waffle has tweeted: Hi Steve, I created an account on this platform to message you. Oh, thus, Felonious Waffle. He says: I cannot wait for your email to be up and running. Neither can I. I was just listening to episode 968 on my commute and believe the outrage over AT&T's encryption practices to be undersold.

Oh, he says: you mentioned that if someone is able to decrypt one string to get the four-digit code, then they have everyone's code who shares the same string. I believe it to be far worse than that. Am I wrong in thinking that if they crack one, then they have all 10,000? I'm making the assumption that there are only two ways that 10,000 unique codes produce exactly 10,000 unique encrypted strings. The first, and this is what I'm assuming AT&T did, is to use the same key to encrypt every single code. That's right. The second would be to have a unique key for each code, so code 1234 would have to have a different key than 5678. That seems far-fetched to me. Is there an error in my thinking? Thanks for the podcast and everything you do. Glad you're sticking around beyond 999. Daryl. Okay, so I see what Daryl is thinking. He's assuming that what was done was that if the encrypted string was decrypted to obtain the user's four-digit passcode, then the other 9,999 strings could similarly be decrypted to obtain the other four-digit passcodes. And he's probably correct in assuming that if one string had been decrypted, then all the others could be too. But that isn't what happened. No encrypted strings were ever decrypted, and the encryption key was never learned; but due to the static nature of the passcode's encryption, that wasn't necessary. I wanted to share Daryl's note because it reveals an important facet of cryptography, which is that it's not always necessary to reverse a cryptographic operation, as in decryption in this case. But it's also true of hashing, where we've talked about, through the years, many instances where we don't need to unhash something. You know, going only in the forward direction is often still useful. If the results of going in the forward direction can be compared across other instances, then a great deal can still be learned.

In this case, since people tended to use highly non-random passcodes, and since the relationship between a plaintext passcode and its encryption never changed, meaning the key never changed, examining the details of all the records having a common encrypted passcode is enough. Imagine that, from this big, massive database, you pull together all the records with the same encrypted passcode and you look at them. Just that observation would very quickly reveal what single passcode most of those otherwise unrelated records shared, and thus all of them used. For example, one household lived at 1302 Willowbrook, whereas the birthday of someone else was February 13th, and someone else's phone number ended in 1302. So by seeing what digits were common among a large group of records all sharing only the same encrypted passcode, it would quickly become clear what identical passcode they all chose, no decryption necessary. So that's one of the cool things that we've seen about the nature of crypto in the field: there actually are some interesting ways around it when you have the right data, even if you don't have the keys.
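
Here's a minimal sketch of that statistical trick in C. The record strings and field layout are entirely made up for illustration, they are not AT&T's data format, and a real analysis would run over millions of records rather than three. The sketch just tallies every four-digit run found in the personal data of records that supposedly share one identical encrypted passcode blob; the most frequent run falls out as the likely shared passcode, with no decryption attempted anywhere.

```c
#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define CODES 10000   /* every possible 4-digit passcode */

/* Count every 4-digit run appearing anywhere in one text field. */
static void tally(const char *text, unsigned counts[CODES])
{
    size_t len = strlen(text);
    for (size_t i = 0; i + 4 <= len; i++) {
        int ok = 1, code = 0;
        for (size_t j = 0; j < 4; j++) {
            if (!isdigit((unsigned char)text[i + j])) { ok = 0; break; }
            code = code * 10 + (text[i + j] - '0');
        }
        if (ok) counts[code]++;
    }
}

int main(void)
{
    /* Invented personal data for records that all carry the SAME
     * encrypted passcode blob.  No decryption is ever attempted.     */
    const char *records[] = {
        "1302 Willowbrook Ln, phone 555-867-5309",
        "DOB 1965-02-13, phone 555-301-1302",
        "77 Oak St Apt 1302, DOB 1988-07-04",
    };
    unsigned counts[CODES] = { 0 };

    for (size_t r = 0; r < sizeof records / sizeof records[0]; r++)
        tally(records[r], counts);

    int best = 0;
    for (int c = 1; c < CODES; c++)
        if (counts[c] > counts[best]) best = c;

    printf("most common 4-digit run: %04d (seen %u times)\n",
           best, counts[best]);
    return 0;
}
```

Run against these invented records, the tally reports 1302 as the most common run, which is exactly the kind of correlation being described: the shared secret leaks out of the surrounding plaintext, not out of the ciphertext.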

Skynet tweeted: Hi Steve, would having DRAM catch up and be fast enough eliminate the GhostRace issue? And I thought that was a very interesting question. You know, we've talked about how caching is there to decouple slow DRAM from the processor's much more hungry need for data in a short time. So the question could be reframed a bit to further clarify what we're really asking. So let's ask: if all of the system's memory were located in the processor's most local, instant-access L1 cache, that is, if its L1 cache were 16 gigabytes in size, so that no read from or write to main memory took any time at all, would speculative execution still present problems? And I believe the answer is yes. Even in an environment where access to memory is not an overwhelming factor, the work of the processor itself can still be accelerated by allowing it to be more clever about how it spends its time.

We think of processors as executing instructions one at a time, and in fact processors have not actually been executing one instruction at a time for quite a while. The concept of out-of-order instruction execution dates way back to the early CDC (Control Data Corporation) 6600 mainframe, which was the first commercial computer system, a mainframe, to implement out-of-order instruction execution, and that was in 1964, when the CDC 6600 appeared. It sucked in instructions ahead of them being needed, and when it encountered an instruction whose inputs and outputs were independent of any earlier instructions that were still being worked on, it would execute that later instruction in parallel with other ongoing work, because the instruction didn't need to wait for the results of previous instructions, nor would its effect change the results of previous instructions. The same sort of instruction pipelining goes on today, and we would still like our processors to be faster. If a processor had perfect knowledge of the future, knowing which direction it was going to take at any branch, or where a computed indirect jump was going to land it, it would be able to reach its theoretical maximum performance at any given clock rate. But since a processor's ability to predict the future is limited to what lies immediately in front of it, it must rely upon looking back at the past and using that to direct its guesses about the future or, as we say, its speculation about its own immediate future. Here's something to think about.

The historical problem with third-party cookies has been that browsers maintained in the past a single large, shared cookie jar, as we've discussed before, in fact just recently. So an advertiser could set its cookie while the user was at site A and read it back when the same user had moved to site B. This was never the way cookies were meant to be used. They were meant to be used in a first-party context to allow sites to maintain state with their visitors. The problem is that until very recently there has been no cookie compartmentalization. We have the same problem with microprocessor speculation that we have had with third-party cookies.

Lack of compartmentalization: the behavior of malware code is affected by the history of the execution of the trusted code that ran just before it.

Malware is able to detect the behavior of its own code, which gives it clues into the operation of previous code that was running on the same set of processors. In other words, a lack of compartmentalization: malicious code is sharing the same micro-architectural state as non-malicious code, because today there's only one set of state. That's what needs to change, and I would be surprised if Intel wasn't already well on their way to implementing exactly this sort of change. I have no idea how large a modern microprocessor's speculation state is today, but the only way I can see to maintain the performance we want today, in an environment where our processors might be unwittingly hosting malicious code, is to arrange to save and restore the microprocessor's speculation state whenever the operating system switches process contexts. It would make our systems even more complicated than they already are, but it would mean that malicious code could no longer obtain any hints about the operation of any other code that was previously using the same system it is.

I'll omit this listener's full name, since it's not important. We'll call him John. He says: I got nailed in a phishing email for AT&T. See the attached. Yeah.

1:32:30 - Leo Laporte
Oh.

1:32:34 - Steve Gibson
See the attached picture. He said: no excuse, but at least I realized it immediately and changed my password, he said, which is not one that has been used anywhere else, of course. And he ended saying: feel stupid, dot dot dot. No, because we've been talking about this AT&T breach.

1:32:53 - Leo Laporte
He was expecting this email from AT&T. Yep, exactly.

1:33:06 - Steve Gibson
The email says: Dear Customer, at AT&T, we prioritize the security of our customers' information and are committed to maintaining transparency in all matters related to your privacy and data protection. We are writing to inform you of a recent security incident involving a third-party vendor. Despite our rigorous security measures, unauthorized access was granted to some of our customer data stored by this vendor. This incident might have involved your names, addresses, email addresses, social security numbers and dates of birth. We want to assure you that your account passwords were not exposed in this breach, but they're about to be. We have notified federal law enforcement about the unauthorized access. Please accept our apology for this incident. To determine if your personal information was affected, we encourage you to follow the link below to log into your account. Oh boy. And then there's a little highlight that says sign in. And finally, thanks for choosing us, AT&T.

1:34:07 - Leo Laporte
I'm willing to bet this is a copy of the actual email, because it's too corporate, like "rigorous security measures," but they did gain all your data. It's very much what AT&T said. So I bet the bad guy just copied the original AT&T email and just changed this little link here.

1:34:27 - Steve Gibson
Exactly. Well, I would imagine that there was probably no sign-in link in the original, right, because, you know, that's really what changes it into a phishing attack. And so, anyway, I just wanted to say this is how bad it is out there. I mean, as you said, Leo, you saw it immediately. We've been talking about it. This is a listener of ours. He knew about it before it came. So again, absolutely authentic looking.

1:34:56 - Leo Laporte
They're so smart, they're so evil. You know, we really, we absolutely need to always be vigilant and never click links in email. No, no.

1:35:07 - Steve Gibson
Even from mom.

1:35:09 - Leo Laporte
Especially from mom.

1:35:14 - Steve Gibson
Tom Minnick said: with these atomic operations to mitigate race conditions, how does that work with multi-core processors when multiple threads are running in parallel? Couldn't a race condition still occur, depending upon how multi-core processors handle threads? So Tom's question actually is a terrific one, and it occurred to many of our listeners who wrote. He and everyone were right to wonder.

The atomicity of an instruction only applies to the threads running on a single core, since it, the core, can only be doing one thing at a time, by definition. Threads, as I said, are an abstraction for a single core. They are not an abstraction if multiple cores are sharing memory. So what about multiple cores or multiprocessor systems? The issue is important enough that all systems today provide some solution for this.

In the case of the Intel architecture, there's a special instruction prefix called LOCK, which, when it immediately precedes any of the handful of instructions that might find it useful, forces the instruction that follows to also be atomic in the sense of multiple cores or multiple memory-sharing processors. Only one processor at a time is able to access the targeted memory location, and after all, it's just an instant, right? Essentially, there is a lock signal that comes out of the chip that all the chips are participating with. So the processor, when it's executing a locked instruction, drops that signal, performs the instruction and immediately raises it. So it's as infinitesimally brief a lockout as could be. So it doesn't hurt performance, but it prevents any other processor from accessing the same memory location at the same time. And there's one other little tidbit: that simple exchange instruction is so universally useful for performing thread synchronization that the lock prefix functionality is built into that one instruction. All the other instructions that can be used require an explicit lock prefix; not the exchange instruction. It automatically is not only thread safe, but multi-core and multi-processor safe, which I think is very cool.
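
As a hedged illustration of why that matters across cores, here's my own small C sketch using two real threads and C11 atomics. The claim that the atomic add becomes a lock-prefixed read-modify-write on x86 is how mainstream compilers typically handle it, not something this sketch can enforce; the observable point is simply that the atomic counter never loses increments, while the deliberately racy plain counter usually does. Build with something like gcc -O2 -pthread.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long plain_counter = 0;            /* deliberately unsynchronized  */
static atomic_long atomic_counter = 0;    /* lock-prefixed RMW on x86     */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        plain_counter++;                        /* data race: may lose updates */
        atomic_fetch_add(&atomic_counter, 1);   /* atomic: never loses updates */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("plain : %ld (expected %d)\n", plain_counter, 2 * ITERATIONS);
    printf("atomic: %ld (expected %d)\n",
           (long)atomic_load(&atomic_counter), 2 * ITERATIONS);
    return 0;
}
```

On a multi-core machine the plain counter typically comes up short of 2,000,000, which is the lost-update race in action; the atomic one always hits it.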

Finally, Michael Hagberg said, regarding credit freezes: rather than unlock your entire account, it should work this way. I'm buying a car. The dealer tells me which credit service they use and the dealership's ID number. I go to the credit service website, provide my social security number, the PIN assigned by the site when I froze it, and the car dealer's ID number. My account will then allow that car dealer only to access my account for 24 hours. And, Michael, I agree 100%, and this just shows us that the child in you has not yet been beaten into submission and that you are still able to dream big. More power to you. Wouldn't it be nice if the world was so well designed?

1:39:34 - Leo Laporte
I actually do everything but that last piece where you give the car dealer's ID to the credit bureau. But I do ask them which credit bureau are you going to use?

1:39:43 - Steve Gibson
Good.

1:39:44 - Leo Laporte
And then that's the one I unfreeze, and I tell them, you've got, whatever, three days to do this, and it's going to automatically lock up again. And nowadays enough people use freezes that when they encounter one, they kind of know what happened, and they'll call you and say, hey, your credit's frozen.

1:40:01 - Steve Gibson
Yeah, right, it's not unusual to encounter a freeze. And in fact I did some Googling around before I got my card with Amazon to find out which of the services they used. Yeah, exactly. And then that's the one I unlocked. Then you can be more judicious, yeah.

1:40:16 - Leo Laporte
I love the idea, though, of saying, hey, credit bureau, this guy's going to ask. Don't tell anybody else.

1:40:22 - Steve Gibson
Wouldn't that be nice? Yeah. But Leo, you know all the junk mail we receive as elders.

1:40:29 - Leo Laporte
All those credit card offers.

1:40:31 - Steve Gibson
Yes, it's because everybody's pulling our credit.

1:40:39 - Leo Laporte
By the way, when I froze all my accounts.

1:40:40 - Steve Gibson
I stopped getting those. Yeah.

1:40:41 - Leo Laporte
I haven't had any for years. The only ones I get are from existing cards, saying, you know, hey, you've got a blue card, would you like a green one? That's it, because no new card companies can get my information. So it works. It works, right. Okay.

1:40:57 - Steve Gibson
Uh, just two little bits regarding Spinrite. Mike Shales said: recently I've run into some issues with my old iMac, a mid-2017 model. He said: I've wanted to support your valuable Security Now efforts for some time, but investing the time to see if I could even run Spinrite on my Macs, when they were all running without problems, discouraged me. But then you mentioned on your April 9th podcast, quote, I wanted to remind any would-be Mac purchasers, oh, this is me speaking, he's quoting me, I wanted to remind any would-be Mac purchasers that this is the reason I created GRC's freeware named Bootable, okay, the name, in favor of DOS Boot. If you can get Bootable to congratulate you on your success in booting it, then exactly the same path can be taken with Spinrite, unquote. Right? So, he wrote: but Bootable is a Windows .exe file and needs a Windows machine to create a bootable USB flash drive. Right. Lacking a Windows machine, I made a bootable DOS drive from your ReadSpeed image download. Wow, good going there. That's ambitious. Yeah. Following instructions from ChatGPT, I used dd to write the ReadSpeed image to a four-gig flash drive. Then, following instructions in the GRC forum post, quote, boot a Mac into FreeDOS for Spinrite or ReadSpeed, unquote, he said, I succeeded in booting my iMac into DOS and running ReadSpeed.

So far, so good. But I believe the current Spinrite 6.1 includes the capability to recognize more drives than previously, and might rely on features not provided in the version of DOS that I now have installed on my flash drive. If so, perhaps downloading the Spinrite 6.1 .exe file and copying it to my flash drive might not be ideal. Is this an issue? Thanks for the help, Mike. Well, okay. Mike very cleverly arranged to use various tools at GRC and, amazingly enough, ChatGPT to create a bootable USB drive which successfully booted his mid-2017 iMac.

So first, responding to Mike directly: Mike, everything you did was 100% correct, and if you place your copy of the Spinrite .exe onto that USB stick and boot it, everything will work perfectly. And if you run it at level 3 on any older Macs with solid-state storage, you can expect to witness a notable and perhaps even very significant, subsequent and long-lasting improvement in the system's performance. And while it won't be obvious, there's also a very good reason to believe that in the process you'll have significantly improved the system's reliability. The reason the SSD will now be much faster is that, as I mentioned before, after running Spinrite it needs to struggle much less to return the requested information. And we will be learning far more about this during the work on Spinrite 7, and although 6.1 is a bit of a blunt instrument in this regard, it works and it's here today.

To Mike's question: the specific version of FreeDOS does not matter at all, since DOS is only used to load Spinrite and to write its log files. Otherwise, Spinrite ignores DOS and interacts directly with the system's hardware. So yes, you can run it from your ReadSpeed drive. I wanted to share Mike's question because I just finished making some relevant improvements. He mentioned correctly that Bootable is Windows-only freeware. But over the weekend the Bootable download was changed from an .exe to a zip archive, and the zip archive now also contains a small bootable file system image which can be used by any Mac or Linux user to directly create a boot-testing USB drive.

1:45:06 - Leo Laporte
Any Intel Mac or Linux user. Yes, Intel. We should really emphasize that, because most Mac users now are no longer on Intel. Yep.

1:45:16 - Steve Gibson
Yep. One of the guys in GRC's web forums put me onto a perfect and easy-to-use tool. I should mention, Leo, we've solved the Intel problem, but that's for another time.

1:45:27 - Leo Laporte
Oh cool, tease me, why don't you?

1:45:30 - Steve Gibson
We have. Some guys have figured out how to boot on UEFI-only systems and on ARM-based silicon using some concoction of virtual machines, and I haven't followed what they're doing because I'm just focused on getting all of this done. Anyway, there's something known as Etcher, by a company called Balena. It is a perfect, easy-to-use, for an Intel Mac person, means of moving the bootable image onto a USB without the dd command. dd makes me nervous because you need to know what you're doing. I mean, it's a very powerful, you know, direct drive copying tool. Linux people are probably more comfortable with dd. I'm glad that this Mac user, Mike, was able to get ChatGPT to help him.

1:46:36 - Leo Laporte
And I'm glad that ChatGPT just didn't stumble over a hallucination at that particular moment.

1:46:39 - Steve Gibson
You can erase everything with dd very easily. Yeah, careful, yeah. And lastly, Sean wrote: Hey, Steve, I'm sure you're hearing this a lot, but Windows did not trust Spinrite, despite all your signing efforts. I had to clear three severe warnings before it would allow me to keep 6.1 on my system for use. I hope it gets better soon for users less willing to ignore the scary warnings from Microsoft. Signed, Sean. And yep. I don't recall whether I had mentioned it here also, since I participated a lot in this discussion over in GRC's newsgroups.

One thing that has been learned is that Microsoft has decided to deprecate any and all special meaning for EV (extended validation) code signing certificates. It's gone. All those hoops I jumped through to get remote server-side EV code signing to work on an HSM device will have no value moving forward, except that having the signing key in the HSM does prevent anybody, even me, from extracting it. It can't be extracted; it can only be used to sign. When I saw this news, I reached out to Jeremy Rowley, who's my friend and primary contact over at DigiCert, to ask him if I had read Microsoft's announcement correctly, and he confirmed that Microsoft had, just that, like the week before, surprised everyone in the CAB Forum with that news. Apparently, what's at the crux of this is that, you know, historically end users were able to use EV code signing certificates to sign their kernel drivers. That was the thing Microsoft most cared about as far as EV was concerned. But after the problems with malicious Windows drivers, Microsoft has decided to take away that right and require that only they, meaning Microsoft, will be authorized to sign Windows kernel drivers in the future. In their eyes, this eliminated the biggest reason for having and caring at all about EV code signing certs. So they will continue to be honored for code signing, but EV certs will no longer have any benefit. They will confer no extra meaning.

What I think is going on regarding Spinrite is that something Windows Defender sees inside Spinrite 6.1's code, which was not in 6.0, absolutely convinces it that this is a very specific Trojan named W-A-C-A-T-A-C dot B, which I guess you'd pronounce whack-attack. If I knew what part of Spinrite was triggering these false positives, I could probably change it. I have some ideas, so I'm going to see, because, you know, we just can't keep tolerating these sorts of problems from Microsoft, and it doesn't now look like my having an EV cert is helping.

It's been three months now, and, you know, thousands of copies, plural thousands of copies, of GRC's freeware are being downloaded every day. I re-signed them all with this new certificate in order to get it exposed and let Microsoft see that whoever was signing this wasn't producing malware. But, you know, here's Mike, or no, sorry, Sean, who just said he had to go to extreme measures to get Windows to leave this download alone. So grumble, grumble, grumble. Big time. Okay, we're going to talk about what the EU is doing, Leo, after you share our last sponsor with our listeners.

1:50:58 - Leo Laporte
Breaking news, however, that you, depending on your point of view, will either be surprised or not surprised to hear: Google has decided to delay third-party cookie blocking until next year. Wow. Digiday, with this fantastic opening sentence by Seb Joseph: Google is delaying the end of third-party cookies in its Chrome browser again. In other unsurprising developments, water remains wet. They did not outline a more specific timetable beyond hoping for 2025.

1:51:35 - Steve Gibson
Okay.

1:51:44 - Leo Laporte
Okay. And that, I mean, it does show you the problem with taking this away. They promised it originally in January 2020. This is the third time they've pushed it back, and I'm guessing it's not going to be the last. Some of this is actually intertwined with the UK Competition and Markets Authority. They say it's critical the CMA has sufficient time to review all the evidence, including results from industry tests, which the CMA has asked market participants to provide by the end of June, in order to see whether the Privacy Sandbox will be a replacement.

We recognize, Google says, there are ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers, and we'll continue to engage closely with the entire ecosystem. Yeah. But some of this is that the CMA wants to see proof, and they're not ready to provide proof.

1:52:49 - Steve Gibson
Well, Leo, here's another reason I'm so happy we're going past 999. Yeah, because November is when we hit 999, and I would not be here for...

1:52:59 - Leo Laporte
Let's make a deal that you'll keep doing the show until Google phases out third-party cookies. Oh no. Rats. Well past... almost fooled him. No, no, I think it's going to happen. I think it's inevitable. Okay, we'll see. I do. It's been four years. I would wager 2025.

1:53:19 - Steve Gibson
I'll go for next year.

1:53:20 - Leo Laporte
Let's go for 2025. Let's do it. Let me real quickly mention our great sponsor, and then we can get to the meat of the matter, this chat control stuff that's going on in Europe. But first a word from Zscaler, the leader in cloud security. It's no surprise.

Cyber attackers are, as I just mentioned, using AI in creative ways to compromise users and to breach organizations, from high-precision phishing emails to video and voice deepfakes of CEOs and celebrities. In a world where employees are working everywhere, apps are everywhere, data is everywhere, firewalls and VPNs are just not working to protect organizations. They weren't designed for these distributed environments, nor were they designed with AI-powered attacks in mind. In fact, often it's the case that firewalls and VPNs become the attack surface. In a security landscape where you must fight AI with AI, the best AI protection comes from having the best data. Boy, listen to this.

Zscaler has extended its zero-trust architecture with powerful AI engines that are trained and tuned in real time by 500 trillion signals every day. 500 trillion, with a T, signals every day. Zscaler Zero Trust plus AI helps defeat AI attacks today by enabling you to automatically detect and block advanced threats, discover and classify sensitive data everywhere, generate user-to-app segmentation to limit lateral threat movement, and quantify risk, prioritize remediation and, you know, importantly, generate board-ready reports. The board's got to write the check. Learn more about Zscaler Zero Trust plus AI to prevent ransomware and other AI attacks, while gaining the agility of the cloud. Experience your world secured. Visit zscaler.com slash Zero Trust AI. That's zscaler.com slash Zero Trust AI. All right, let's talk about chat, Steve Gibson.

1:55:40 - Steve Gibson
Okay. So, oh boy. Across the pond from the US, the EU is continuing to inch forward on their controversial legislation, commonly referred to as chat control, thus today's title, Chat Out of Control. It proposes to require providers of encrypted messaging services to somehow arrange to screen the content that's carried by those services for child sexual abuse material, commonly known as CSAM. As I said when we last looked at this last year, 2024 will prove to be quite interesting, since all of this will likely be coming to a head this year. What's significant about what's going on in the EU, unlike in the UK, is that the legislation's language carries no exclusion over the feasibility of performing this scanning. Just to remind everyone who has a day job and who might not be following these political machinations closely, last year the UK was at a similar precipice with their own legislation, and at the 11th hour they added some language that effectively neutered it while allowing everyone to save face. For example, last September 6th, Computerworld's headline read, quote, UK rolls back controversial encryption rules of Online Safety Bill, unquote, and followed that with, quote, companies will not be required to scan encrypted messages until it is, quote, technically feasible, unquote, and where technology has been accredited as meeting minimum standards of accuracy in detecting only child sexual abuse and exploitation content, unquote. So since it's unclear how any automated technology might successfully differentiate between child sexual abuse material and, for example, a photo that a concerned mother might send of her child to their doctor, there's little concern that the high bar of technical feasibility will be met in the foreseeable future. While the UK came under some attack for punting on this, the big tech companies all breathed a collective sigh of relief. But so far, and boy, there's not much time left, there's no sign of the same thing happening in the EU, not even a murmur of it.

One of the observations we've made about all such legislation was the curious fact that, if passed, the legislation would mean that the legislators' own secure, encrypted and private communications would similarly be subjected to surveillance and screening. Or would they? Two weeks ago, on April 9th, the next iteration of the legislation appeared in the form of a daunting 203-page tome. Fortunately, the changes from the previous iteration were all shown in bold type, or crossed out, or bold and underlined, or crossed out and underlined, all meaning different things. But that made it at least somewhat possible to see what's changed. You can tell I spent way too much time with those 203 pages.

This was brought to my attention by the provocative headline on an EU website: Chat control, colon, EU ministers want to exempt themselves. And what that article went on to say was, quote: according to the latest draft text of the controversial EU child sexual abuse regulation proposal, leaked by the French news organization Contexte, which the EU member states discussed, the EU interior ministers want to exempt professional accounts of staff of intelligence agencies, police and military from the envisioned scanning of chats and messages. The regulation should also not apply to confidential information, such as professional secrets. The EU governments reject the idea that the new EU Child Protection Centre should support them in the prevention of child sexual abuse and develop best practices for prevention initiatives. Okay, so the EU has something called the Pirate Party, which doesn't seem to be well-named, but it is what it is.

2:00:30 - Leo Laporte
No, it's real. It's, you know, the Pirate Bay people. Yeah, it's a party of pirates.

2:00:35 - Steve Gibson
Yeah, yes, and popular, I might add. Yes, it's formed from a collection of many member parties across and throughout the European Union. The party was formed 10 years ago, back in 2014, with a focus upon Internet governance, so the issues created by this pending legislation are of significant interest to this group. To that end, one of the members of Parliament, Patrick Breyer, had the following to say about these recent changes to the proposed legislation, which came to light when the document leaked. He said, quote: the fact that the EU interior ministers want to exempt police officers, soldiers, intelligence officers and even themselves from chat control scanning proves that they know exactly just how unreliable and dangerous the snooping algorithms are that they want to unleash on us citizens. They seem to fear that even military secrets without any link to child sexual abuse could end up in the US at any time. The confidentiality of government communications is certainly important, but the same must apply to the protection of business and, of course, citizens' communications, including the spaces that victims of abuse themselves need for secure exchanges and therapy. We know that most of the chats leaked by today's voluntary snooping algorithms are of no relevance to the police, for example, family photos or consensual sexting. It is outrageous that the EU interior ministers themselves do not want to suffer the consequences of the destruction of digital privacy of correspondence and secure encryption that they are imposing upon us. The promise that professional secrets should not be affected by chat control is a lie cast in paragraphs. No provider and no algorithm can know or determine whether a chat is being conducted by doctors, therapists, lawyers, defense lawyers, etc., so as to exempt it from chat control. Chat control inevitably threatens to leak intimate photos sent for medical purposes and trial documents sent for defending abuse victims. It makes a mockery of the official goal of child protection that the EU interior ministers reject the development of best practices for preventing child sexual abuse. It couldn't be clearer that the aim of this bill is China-style mass surveillance and not better protecting our children. Real child protection would require a systematic evaluation and implementation of multidisciplinary prevention programs, as well as Europe-wide standards and guidelines for criminal investigations into child abuse, including the identification of victims and the necessary technical means. None of this is planned by the EU interior ministers. So, after the article finished quoting Patrick Breyer, it noted that the EU governments want to adopt the chat control bill by the beginning of June. We're approaching the end of April, so the only thing separating us from June is the month of May. I was curious to see whether the breadth of the exclusion might have been overstated in order to make a point, so I found the newly added section of the legislation on page 6 of the 203-page PDF. It reads, and this is section 12a, where the "a" is the new part:

In the light of the more limited risk of their use for the purpose of child sexual abuse, and the need to preserve confidential information, including classified information, information covered by professional secrecy and trade secrets, electronic communication services that are not publicly available (that's the key, electronic communication services that are not publicly available), such as those used for national security purposes, should be excluded from the scope of this regulation. Accordingly, this regulation should not apply to interpersonal communication services that are not available to the general public and the use of which is instead restricted to persons involved in the activities of a particular company, organization, body or authority. Okay. Now, I'm not trained in the law, but that doesn't sound to me like an exclusion for legislators, who would probably be using iMessage, Messenger, Signal, Telegram, WhatsApp, etc., you know, public applications. Remember that it's this proposed EU legislation which includes the detection of grooming behavior in textual content. So it's not just imagery that needs to be scanned, but the content of all text messaging. We're also not talking about only previously known and identified content which is apparently circulating online, but also anything the legislation considers new content.

As I read through section after section of what has become a huge pile of extremely weak language that leaves itself open to whatever interpretation anyone might want to give, my own lay feeling is that this promises to create a huge mess. I've included a link to the latest legislation's PDF on the last page of the show notes for anyone who's interested. You'll only need to read the first eight pages or so to get a sense for just what a catastrophic mess this promises to be. As is the case with all such legislation, what the lawmakers say they want and, via this legislation, will finally be requiring is not technically possible. They want detection of previously unknown imagery and textual dialogue which might be seducing children, while at the same time honoring and actively enforcing EU citizens' privacy rights. Oh, and did I mention that 78% of the EU population that was polled said they did not want any of this?

And it occurred to me that encryption providers will not just be able to say they're complying when they are not, because activist children's rights groups will be able to trivially test any and all private communication services to verify that they do in fact detect and take the action that the legislation requires of them.

All that's needed is for such groups to register a device as being used by a child, then proceed to have a pair of adults hold a seductive grooming conversation, and perhaps escalate that to sending some naughty photos back and forth.

And you can believe that if the service they're testing doesn't quickly identify and red-flag the communicating parties involved, those activist children's rights groups will be going public with the service's failure under this new legislation. I've said it before, and I understand that it can sound like an excuse and a cop-out, but not all problems have good solutions. There are problems that are fundamentally intractable. This entire debate surrounding the abuse of the absolute privacy created by modern encryption is one such problem. This is not technology's fault. Technology simply makes much greater levels of privacy practical, and people continually indicate that's what they prefer. As a society, we have to decide whether we want the true privacy that encryption offers, or whether we want to deliberately water it down in order to perhaps prevent some of the abuse that absolute privacy also protects.

2:09:56 - Leo Laporte
Agreed, agreed, and agreed. Yeah, I do.

2:10:02 - Steve Gibson
I do commend to anyone that the last page of the show notes has a link. Uh, it's not widely available publicly because it was leaked, and Patrick Breyer, at a .de, so a German site, has it on his site and is making it available, so you'll need to get it there if you're interested. But boy, as I said, just reading through it, it is... again, it's insanely long at 203 pages. I struggled to find any language about, like, what time period this takes effect over. I couldn't find any. It all seems to indicate that, once this legislation is in place, the organizations need to act. But I just think the EU is stepping into a huge mess. And again, as I said last year, this year, 2024, the year we're in now, is going to be one to watch, because lots of this is beginning to come to a head. Although, Leo, as you just shared with us, not the third-party cookie issue with Chrome; that's been punted to when we have four digits on this podcast.

2:11:23 - Leo Laporte
And the future. Yeah, it's an interesting world we live in.

2:11:31 - Steve Gibson
So I imagine when the legislation happens and it's supposed to be happening in early June there will be lots of coverage. We'll be back to it and we'll have some sense for when it's taking effect and what the various companies are choosing to do.

2:11:46 - Leo Laporte
And it might well be modified from this leaked document. There will certainly be amendments and things like that, so we'll have to look at the actual legislation to see what's happening, I guess. Right, and we will, because that's what we do. That's what we do here. I know you love this show. You're listening. You got all the way to the end. That's pretty impressive.

May I invite you, if you're not yet a member of Club Twit, to save some time by eliminating all the ads and support Steve's work so that we can keep doing it by joining Club Twit. It's only $7 a month. It's very inexpensive. You get ad-free versions of all the shows. You get video for shows where we only have audio, like Hands-On Mac, Hands-On Windows, Untitled Linux Show, Home Theater Geeks, iOS Today. You also get access to the Discord, which is more than just a hang or a chat room around the shows.

It's really where, now, thousands of very smart, interesting people hang out. I really think it was brought home to me this Sunday. We had the live audience and I got to meet everybody, and just every one of them: high-level, interesting, smart people, mostly in tech, almost all in technology, in fact. I don't think there was anybody not in technology, and that's who you get access to in the Discord. It's more than just us. It's some really smart people. So if you've got questions or thoughts, or you want to talk to somebody who really knows what they're doing, it's another great benefit of joining the club. The best benefit is you're supporting us in our mission to keep you informed without fear or favor. We owe no one, and that's what we want to keep doing, thanks to you. Twit.tv/clubtwit is the URL. Please join, we'd love to have you, and we'll see you in the Discord.

Steve does this show along with me. I happen to be here most of the time, not all the time, but most of the time, every Tuesday right after MacBreak Weekly. That's 1:30 Pacific, 4:30 Eastern, 20:30 UTC. You can watch us do it live on YouTube, youtube.com/twit. If you go to that page and you hit the bell, you'll automatically get notified when we go live, because we don't stay live. We go live when the shows start and we stop it when the show ends. So subscribe to the channel and that way you'll get the notifications. After the fact.

If it's more convenient, and I'm sure it is, you can always download any show from Steve's site, GRC.com. He has the 64-kilobit audio, but he also has a 16-kilobit audio-only version you can get for the bandwidth impaired, and really, really good transcriptions written by Elaine Farris. Those are available at GRC.com. While you're there, hey, it would behoove you to get yourself a copy of Spinrite, the world's best mass storage maintenance and recovery utility. 6.1 is out. You're getting the latest, baby, and you're getting some good stuff. GRC.com, and it's also where you can find so many other useful tools that Steve gives away. He's very generous. ValiDrive, ShieldsUP, and on and on and on. GRC.com. He's on X.com at SGgrc, so you can DM him there. His DMs are open, or you can leave him a message at SGgrc on X.com.

We have the 64-kilobit audio at our website. We also have video at our website. That's our unique format, twit.tv/sn. There's also a YouTube channel with the video. That's actually the most useful for sharing clips. So if you heard something here today that you wanted to share with a friend, a colleague, the boss, you can clip it very easily on YouTube and send it to them, share it with them. That's a really great use for that channel. And then, of course, the most convenient thing would be to get a podcast player and download it and subscribe to Security Now, so you get every episode automatically. You don't miss a one. You don't want to miss one. You want to see or hear them all, clearly, right? Thank you for joining us, Steve. We'll be back next week with more exciting security news. Have a great week. We'll see you next time, Steve.

2:15:45 - Steve Gibson
Last podcast of April, and then we plow into May.

2:15:52 - Leo Laporte
Yay. 
