Transcripts

Security Now 1006 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
 

00:00 - Leo Laporte (Host)
Happy holidays everybody. We gave Steve Gibson the week off. Actually we gave him two weeks off. But coming up next, the best of Security Now 2024. Podcasts you love.

00:14 - Steve Gibson (Host)
From people you trust.

00:17 - Leo Laporte (Host)
This is Twit. This is Security Now, episode 1006, for Christmas Eve, December 24th, 2024: the best of the year. Hey everybody, I'm Leo Laporte and welcome to Security Now's best of 2024. For years, Steve would get mad at us when we said, Steve, you can't do a show, it's Christmas Eve, it's New Year's Eve. This year, I guess because, I don't know, Steve's married now and wants to spend more time with his family, he's very happy to have Christmas and New Year's Eve off, so we're very happy to give it to him. This week we're going to do a best of, and there were a lot of moments all year long in Security Now that were pretty fascinating. It's almost hard to pick just one, but let's start with something early in the year, something that I think Steve really had the scoop on: a mysterious backdoor in Apple's iPhones.

01:26 - Steve Gibson (Host)
I want to begin, as I said, with a bit of a follow-up to last week's news.

01:30 - Leo Laporte (Host)
Yeah, we've been talking about it. We talked about it on MacBreak Weekly today. We talked about it on Twit on Sunday. I think it's really a big deal story which I'm not seeing anywhere but here.

01:39 - Steve Gibson (Host)
That's exactly right and in fact we have some Q&A later, and one of our listeners said: how is this not getting more attention? Anyway, we will talk about that when we get there. But the way we left things last week, in the wake of this revelation, was with a large array of possibilities. Since then I've settled upon exactly one which I believe is the best fit with every fact we have. You know, again, this is speculation, and we're never going to have a lot of answers to these questions.

02:14
Many people sent notes following up on last week's podcast's NSA conspiracy theory, because of those other not-easy-to-effect steps, you know, involving other clearly inadvertent mistakes in Apple's code that were needed by this particular malware. I don't know why it didn't occur to me last week, but it has now. As we know, and have covered here in great detail in the past, Apple has truly locked their iPhone down every way from Sunday. I believe, from all the evidence and focus that Apple has put into it, that Apple's iPhones are truly secure. But would Apple actually produce a smartphone handset that they, and I mean they, absolutely positively truly could not get into, even if it meant the end of the world?

03:25 - Leo Laporte (Host)
Oh, that's a very good point right.

03:30 - Steve Gibson (Host)
If Apple believed that they could design and field a truly and totally secure last-resort backdoor means of accessing their devices in the event that the world depended upon it, I believe that they would have designed such a backdoor, and I believe that they did, deliberately and purposefully.

03:59 - Leo Laporte (Host)
For their own use.

04:00 - Steve Gibson (Host)
Yes, and I do not think less of them for it. In fact, I think that the case could be made that it would be irresponsible for Apple not to have provided such a backdoor.

04:16 - Leo Laporte (Host)
Yeah, what if Dr Evil had an iPhone with the launch codes on it, right?

04:20 - Steve Gibson (Host)
Well, that's where I'm going here. We'll likely never know whether any external agency may have made them do it and, yes, doing so could hardly be more controversial. But I can imagine a conversation among just a very few at the very top of Apple: Tim Cook and his heads of engineering and of security. They had to have had a conversation about whether it should be possible, under some set of truly dire circumstances, for them to get into somebody else's locked phone. Obviously the security of such a system would be more critical than anything. But their head of engineering security would have explained, as I did last week, that as long as the secret access details never escaped, because it's impossible to probe anything that must be accompanied by a signature hash, there would truly be no way for this backdoor to be discovered. As I said last week, from everything we've seen, it was designed to be released in the field, where it would be active yet totally safe. So if Tim Cook were told that Apple could design and build in an utterly secure, emergency, prevent-the-end-of-the-world escape hatch into their otherwise utterly and brutally secure devices, and that this escape hatch could never possibly be opened by anyone else ever, I imagine Tim would have said, under those conditions, yes. I think that most CEOs who are in the position to understand that with great power comes great responsibility, when assured that it could not possibly be maliciously discovered and abused to damage their users, would say: yes, build it in. I trust Apple as much as it's possible to trust any commercial entity operating within the United States. I believe that they absolutely will protect the privacy of their users to the true and absolute limit of their ability. 
If the FBI were to obtain a locked iPhone known to contain, exactly as you said, Leo, the location and relevant disarming details of a nuclear weapon, set on a timer and hidden somewhere in Manhattan, I would be thanking Apple for having the foresight to create a super-secure means for them, and them alone, to gain entry to their device. And I'd argue that in doing so they did have the best interests of their customers in mind. In this scenario, a great many iPhone users' lives would be saved.

07:51
There are all manner of conspiracy theories possible here. Yeah, obviously, and this one of mine is only one of many. But of all the possible theories, I believe this one fits the facts best and makes the most sense. Of course, the first thing everyone will say is yeah, but Gibson, they did lose control of it and it was being used by malware to hurt some of Apple's users. And that's true. In fact, that's the only reason the world learned of it.

08:27
If this scenario is correct, Apple never divulged this to any entity and never would. This would never have been meant for the FBI, CIA or NSA to have for their own use. If an impossibly high bar were reached, Tim Cook would say: have an agent bring us the phone and we'll see what we can do. But somewhere within Apple were people who did know. Perhaps someone inside was set up and compromised by a foreign group. Perhaps Apple had a long-standing mole. Perhaps it was a gambling debt or the threat of some extremely embarrassing personal information being disclosed. You know, one thing we've learned and seen over and over on this podcast is that when all of the security technology is truly bulletproof, the human factor becomes the lowest-hanging fruit. Just ask LastPass how they feel about the human factor, which bit them badly.

09:38
Okay, so where does this leave us? Today, we know that all Apple iPhones containing the A12 through A16 chips contain this backdoor and always will. We don't know that it's the only backdoor those chipsets contain, as we touched on last week, but Apple doesn't need another backdoor since they still have access to this one. They locked a door in front of this one, but they can always unlock it again. After being contacted by Kaspersky, Apple's iOS updates blocked the memory-mapped I/O access that was discovered being taken advantage of by malware. But Apple is able to run any software they choose on their own phones, which means that Apple still has access to this backdoor should they ever need to use it. And this means that they've lost plausible deniability: they have the ability to open any iPhone that they absolutely must. So this poses a new problem for Apple when law enforcement now comes knocking with a court order, as it's almost certain to, with, you know, way-below-that-bar requests for random iPhone unlocking to assist in this or that case. So this is a new mess for Apple. I'm sure they're facing that.

11:15
Apple's most recent silicon is the A17. Yet Kaspersky told us that this facility had only been seen in the A12 through A16. If the malware did not contain that initial unique per-chip-generation unlock code for the A17 silicon, and we know that it didn't, then this same backdoor might still be present in today's iPhone 15 and other A17-based devices. That's the most reasonable assumption, since it was there for the previous five generations. Apple obviously likes to have it. But what about the next iteration of silicon, the A18?

12:05
Another thing we don't know is what policy change this disclosure may bring for the future. We don't know how committed Apple was to having this capability, but I think I've made a strong argument for the idea that it has to have it. Have they been scared off? Well, maybe. We'll see what happens now, you know, as I said, with law enforcement asking them to unlock everybody's iPhone. Will they move it to a different location within the ARM 64-bit address space, yet keep it around, as it were?

12:44
After last week, we're left with a handful of unanswered and unanswerable questions, but my intention here was to hone this down, to explore what appears to be the single most likely scenario. Apple designed and is quite proud of the GPU section of those chips, which contains the backdoor hardware. There's no chance they were not aware of the little hash-algorithm-protected DMA engine built into one corner of the chip. People within Apple knew. Listeners of this podcast know that I always separate policy decisions from mistakes, which unfortunately happen. So I sincerely hope that Apple's policy was to guard this as perhaps their most closely held secret and, specifically, that it was never their policy to disclose it to any outside agency of any kind for any reason. Somewhere, however, a mistake happened, and I'd wager that by now Apple knows where and how that mistake occurred.

13:58 - Leo Laporte (Host)
Well, I hope you're enjoying this best of. It's always fun to do these, and I have to tell you it's a lot of work, and I really appreciate our team, the people who work so hard: Anthony Nielsen, our creative director; our producers and editors, Benito Gonzalez, Kevin King, John Ashley. They work so hard to put this all together for you. All of our hosts, our contributors do. And, of course, there's the office people who do work in continuity, like Viva and Sebastian, and our CEO, Lisa.

14:28
Twit is a big effort and we think what we're doing is really important. I hope you do too. I hope you enjoy the company and learn from the information and, if you do, I'd like you to consider joining our club because, frankly, in 2025, that's the only thing that's going to keep Twit going. If you like what you hear and you want to continue, please, seven bucks a month, consider joining Club Twit: twit.tv/clubtwit. You get ad-free versions of all the shows, access to our Discord, special programming you don't get anywhere else, but really the main thing you get is that warm and fuzzy feeling that you're keeping Twit going. We need your help. I hate to beg, but we really do: twit.tv/clubtwit. But enough of that. On with the show.

15:15 - Steve Gibson (Host)
Everybody knows how bullish and excited I am about Google's Privacy Sandbox. Yes, we all know I'm a bit of a fanboy for technology, and this is a bunch of very interesting new technology that solves some very old problems. Google clearly understands that their economic model is endangered due to the fundamental tension that exists between advertisers, primarily themselves, who demand to know everything possible about the viewers of their ads, and those viewers, along with their governments, who are becoming increasingly concerned about privacy and anonymity. The emergence of Global Privacy Control and the return of DNT, Do Not Track, has not gone unnoticed by anyone whose cash flow depends upon knowing something about the visitors to their websites. As we've been covering this through the years, we've watched Google iterate on a solution to this very thorny problem, and I believe, though the final solution was to transfer the entire problem into the user's browser, that they found a solution that really can work. But, and this is a huge but that informs today's title topic, it appears that the rest of the world does not plan to go down without a fight. Not everyone is convinced. Apparently, not everyone believes that they're going to need to follow Google, and it turns out that there is a workaround, and it is not good. So a recent Financial Times headline read "Amazon strikes ad data deal with Reach as Google kills off cookies", which was followed by the subhead "Media sector scrambles to deal with fallout from phase-out of cross-website trackers". So, with a little bit of editing of the content for our listeners, here's what the Financial Times writes.

17:24
The Financial Times writes: Tech giant Amazon has struck a deal with the UK's largest publisher, Reach, to obtain customer data to target online advertising, as the media industry scrambles to respond to Google's move to axe cookies. In one of the first such agreements in Europe, Amazon and Reach unveiled a partnership on Monday designed to compensate for the loss of third-party cookies, which help gather information about users by tracking their activity across websites to help target advertising. Google said in January that it had started to remove cookies in its Chrome browser, following a similar move by Apple to block them in Safari, aiming to switch off all third-party cookies by the end of the year. Reach said it will partner with Amazon on sharing contextual first-party data, for example, allowing advertisers to know what articles people are looking at, with the US tech group using the information to sell more targeted advertising on the UK publisher's sites, the company said. The deal comes, quote, as the advertising world tackles deprecation of third-party cookies, a long-anticipated industry milestone that Google kick-started in early January, unquote. Financial details for the arrangement were not revealed.

18:49
The partnership involves the contextual advertising of Mantis, originally a brand safety tool that could ensure that brands were not being presented next to potentially harmful or inappropriate content. The tool is also now used to place ads next to content users may want to see, helping to better target specific audiences with relevant advertising. Other publishers also use Mantis. Amazon's director of EU ad tech, Fraser Locke, said that, quote, as the industry shifts towards an environment where cookies are not available, first-party contextual signals are critical in helping us develop actionable insights that enable our advertisers to reach relevant audiences without sacrificing reach, relevancy or ad performance, unquote. The loss of cookies means that almost all internet users will become close to unidentifiable for advertisers. The risk for publishers is that their advertising offer becomes much less valuable at a time when they're already losing ad revenues, which has led to thousands of job cuts in the past year. Reach last year announced 450 roles would be axed.

20:12
Other media groups are also looking at deals involving their customer data, according to industry executives. Some publishers are experimenting more with registration pages or paywalls that mean people first give first-party information that they can use, such as email addresses and logins. Reach is already seeking to harvest more such data from readers. Jon Steinberg, chief executive of Future, said that the, quote, elimination of third-party cookies, unquote, and the shift to first-party data is one of the biggest changes to the advertising market, and predicted that, quote, advertisers, agencies and quality publishers will work even more closely together to reach audiences that drive outcomes for brands, unquote. Sir Martin Sorrell, chief executive of advertising firm S4 Capital, said that some clients that did not have access to first-party data on their customers were panicking. He said that there would be more focus on getting customers to sign up to websites with their information as companies attempted to boost their stores of, quote, consented data, unquote.

21:49
Okay, so let's think about this for a minute. This notion of requiring more user signups is interesting and it's not something that had occurred to me before. This article makes it clear that the advertising industry is not going to let go and go down without a fight. They don't want to change. They don't want to adopt Google's strongly anonymous, interest-based solution. No, they want to continue to know everything they possibly can about everyone, which is something Google's dominant Chrome browser will begin actively working to prevent, at least using the traditional tracking methodology. So what are they gonna do? And what's up with this signing into sites business?

22:46
It occurred to me that one way of thinking about the traditional presence of third-party tracking cookies is that, because they effectively identify who is going from site to site on the Internet, there's no need for us to explicitly sign up when we arrive somewhere for the purpose of identifying ourselves to the site and its advertisers. Cookies do that for us, silently and unseen, on our behalf. Who we are when we visit a website is already known from all of the cookies our browsers transmit in response to all of the transparent pixels and beacons and scripts and ads that are embedded in today's typical website. But soon all of that traditional, silent, continuous background identification tracking is going to be prevented, and the advertising industry is finally waking up to that reality. What this means for a website itself is a significant, perhaps even drastic, reduction in advertising revenue since, as we know, advertisers will pay much more for an advertisement that's shown to someone whose interests and history they know. That allows them to choose the most relevant ads from their inventory, which makes the presentation of the ad that the viewer sees more valuable and thus generates more revenue for the website that's hosting the ad. And that's, of course, been the whole point of all this tracking. That's why websites themselves have never been anti-tracking, and it's the reason so many websites cause their visitors' browsers to contact so many third-party domains. It's good for business from the website's perspective and it increases the site's revenue. And besides, visitors don't see any of that happening. 
So tomorrow, when visitors swing by a website with Chrome, which no longer allows tracking, and those visitors are therefore anonymous and far less valuable to that site's advertisers, how does a website itself de-anonymize its visitors, to know who they are for the purpose of identifying them to its advertisers, so that those advertisers will pay that site as much money as possible? The answer is horrible and is apparently on the horizon: the website will require its visitors to register and sign up before its content and its ads can be viewed.
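The silent identification Steve describes can be sketched in a few lines of Python. This is purely a toy model with invented names, not any real ad-tech code; it just illustrates how a single third-party cookie, replayed automatically by the browser, links visits across otherwise unrelated sites:

```python
import uuid

class ThirdPartyTracker:
    """Toy model of an ad network: its cookie rides along on every
    page that embeds its beacon, linking visits across sites."""
    def __init__(self):
        self.profiles = {}  # cookie id -> list of (site, page) visits

    def beacon(self, cookie, site, page):
        # Issue a cookie on first sight, then log every visit under it.
        if cookie is None:
            cookie = str(uuid.uuid4())
        self.profiles.setdefault(cookie, []).append((site, page))
        return cookie  # the browser stores and replays this automatically

class Browser:
    def __init__(self, tracker):
        self.tracker = tracker
        self.cookie = None  # third-party cookie jar, one entry

    def visit(self, site, page):
        # The page embeds the tracker's beacon; the browser silently
        # attaches whatever cookie the tracker previously set.
        self.cookie = self.tracker.beacon(self.cookie, site, page)

tracker = ThirdPartyTracker()
you = Browser(tracker)
you.visit("news.example", "/politics")
you.visit("shop.example", "/running-shoes")
you.visit("news.example", "/sports")

# One cookie now ties all three visits to the same browser.
print(tracker.profiles[you.cookie])
```

When Chrome stops replaying that third-party cookie, the tracker's profile falls apart, which is exactly why sites would instead ask you to sign in: a first-party account identifier does the same linking job, but with your explicit, if poorly understood, consent.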

26:05
At the end of that Financial Times piece, they quoted Sir Martin Sorrell, the chief executive of advertising firm S4 Capital, saying, quote, some clients that did not have access to first-party data on their customers were panicking, and that there would be more focus on getting customers to sign up to websites with their information as companies attempt to boost their stores of consented data. Now, these websites won't be charging any money for this sign-up. It's not money from their visitors they want, it's the identities of those visitors that, for the first time, they need to obtain from that first-party relationship, in order to share that information with their advertisers so that they can be paid top dollar for the ads displayed on their websites. And you can be 100% certain that the fine print of every such site's publicly posted privacy policy will state that any information they obtain may be shared with their business partners and affiliates, meaning the advertisers on their sites.

27:30
We thought those cookie permission pop-ups were bad, but things might soon be getting much worse, and those sign-up-to-create-an-account forms may also attempt to obtain as much demographic information as possible about their visitors. You know: oh, while you're here creating an account, please tell us a bit more about yourself by filling out the form below, so that we can better tailor our content to your needs and interests. Uh-huh, right. Such form fill will likely be a one-time event per browser, since a persistent first-party logon cookie will then be given to our browser. The result of creating an account at every site which might begin to require one will be that our visits to that site will no longer even have the pretense of anonymity. We will be known to that site, and thus we will in turn be known to every one of that site's advertisers. We may forget that we have an account there, or we may find our name shown in the upper right-hand corner of the screen with a menu allowing us to log out, change our email address, our password, etc. And password managers are likely going to become even more important, because typical Internet users will be juggling many more internet login accounts than they've ever needed before.

29:10
Historically, we only ever needed to log on to a site when we had some need to create an enduring relationship with that site. That is what promises to change. Sites with which we have no interest or need to be known will begin insisting that we tell them who we are in exchange for access to their content, even though it'll be free, and the reason for their insistence will be that we become a much more valuable visitor once they're able, in turn, to tell their advertisers exactly who we are. And it's all perfectly legal because no tracking is happening. We sign up and implicitly grant our permission for our real-world identities to be shared with any and all of that site's business associates. Most people will have no idea what's going on. Maybe it won't actually be that big a deal. It won't be obvious why sites they've been visiting for years are suddenly asking them to create an account. They already have lots of other accounts everywhere else, and the site won't be asking for money just for their identities, which most people are not concerned about divulging.

30:34
One thing we can be certain of is that a trend of forced identification before the content of an advertising-supported website can be viewed will cause the EFF to have a conniption.

30:50
Nothing could ever be more antithetical to their principles. The EFF wants nothing short of absolute and complete anonymity for all users of the Internet, so this represents a massive step directly away from that goal. The EFF would be well served, in fact, to get behind Google's initiative, which is far more privacy-preserving than this end-around that appears to be looming. It almost makes third-party cookie tracking look attractive by comparison. I don't want to be forced to create accounts for every low-value website I might visit briefly. If this happens, it's going to change the way the internet feels. It's going to be interesting to see how all this shakes out and, yes, I am more glad than ever to be going past episode 999, since it's going to be very interesting to be observing and sharing what comes next. It's been a crazy year with Chrome and Google and all of the weird things they're doing to try to make it possible to hold two thoughts simultaneously: advertising good, advertising bad.

32:07 - Leo Laporte (Host)
Uh, we'll see if they can figure it out. You're watching Security Now's best of 2024. We're glad you're here. 2024 continued, uh, with a little shock for people who had used Microsoft's BitLocker encryption. Watch.

32:27 - Steve Gibson (Host)
Okay. So BitLocker: chipped or cracked? The number one most-sent-to-me news item of the past week. Wow, it was like everybody did. Seen this? Seen this? Seen this? Oh yeah.

32:50
Um, was the revelation that PCs whose secret key storage Trusted Platform Module functions are provided by a separate TPM chip outside of the main CPU are vulnerable to compromise by someone with physical access to the machine. This came as a surprise to many people who assumed that this would not be the case and that their mass storage systems were more protected by Microsoft's BitLocker than they turn out to be. During system boot-up, the small, unencrypted startup portion of Windows sees that BitLocker is enabled and configured on the machine and that the system has a TPM chip which contains the decryption key. So that pre-boot code says to the TPM chip: hey there, I need the super secret encryption key that you're holding. And the TPM chip replies: yeah, got it right here, here it comes, no problem, and then sends it over to the processor. The only glitch here is that anyone with a hardware probe is able to connect the probe to the communicating pins of the processor, or the TPM chip, or perhaps even to the traces on the printed circuit board which interconnect those two, if those traces happen to lie on the surface. Once connected, the computer can be booted and that entire happy conversation can be passively monitored. Neither end of the conversation will be any the wiser, and the probe is able to intercept and capture the TPM chip's reply to the processor's request for the BitLocker decryption key.
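That happy conversation can be modeled in a few lines of Python. This is a deliberately simplified toy with invented names, not real TPM or bus-protocol code; the actual attack decodes electrical bus traffic with a logic analyzer or a Raspberry Pi, but the flaw it exploits is exactly this: the key crosses an observable boundary in the clear.

```python
class DiscreteTPM:
    """Toy stand-in for a separate TPM chip holding the BitLocker key."""
    def __init__(self, volume_key: bytes):
        self._volume_key = volume_key

    def unseal(self) -> bytes:
        # A real TPM releases the key only after boot measurements check
        # out, but with no PIN protector it still leaves the chip in plaintext.
        return self._volume_key


class ProbedBus:
    """Toy bus between TPM and CPU, with a passive logic probe attached."""
    def __init__(self):
        self.probe_capture = []  # what the attacker's probe records

    def transfer(self, data: bytes) -> bytes:
        self.probe_capture.append(data)  # the tap copies every byte...
        return data                      # ...and the data arrives unchanged


bus = ProbedBus()
tpm = DiscreteTPM(volume_key=b"\x13\x37" * 16)  # made-up 32-byte key

# Pre-boot code asks the TPM for the key; it crosses the bus in plaintext.
key_at_cpu = bus.transfer(tpm.unseal())

# Neither endpoint noticed anything, yet the probe holds an identical copy.
assert bus.probe_capture[0] == key_at_cpu
```

An integrated fTPM removes the observable boundary: the transfer in this sketch would happen inside the CPU package, leaving nothing on the board for a probe to record. A BitLocker PIN helps differently, by ensuring that what the TPM releases is not, on its own, enough to unlock the drive.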

34:51
These are the sorts of tricks that the NSA not only knows about but has doubtless taken advantage of who knows how many times. But it's not made more widely obvious until a clever hacker like this Stack Smashing guy, that was his handle, comes along and shines a very bright light on it. So it's a wonderful thing that he did, and I should note that this is not the first time this has come to light. It happened a few years ago, and a few years before that. So it's the kind of thing that surfaces every so often and people go: what? Oh my God? Okay. The fundamental weakness in the design is that the TPM's key storage and the consumer of that stored key are located in separate components whose communication pins are readily accessible, and the obvious solution to this dilemma is to integrate the TPM's storage functions into the system's processor so that their highly sensitive communication remains internal and inaccessible to casual eavesdropping. And, as it turns out, that's exactly what more recent Intel and AMD processors have done. So this inherent vulnerability to physical attack occupies a window in time after discrete TPM modules came to exist, and were maybe being overly depended upon for their security, and before their functions had been integrated into the CPU. It's also unclear, just broadly, whether all future CPUs will always include a fully integrated TPM, or whether Intel and AMD will only do this for some higher-end models or, perversely, it turns out, some lower-end models.

36:54
Anyway, all of this created such a stir in the industry that yesterday, on Monday the 12th, Ars Technica posted a very nice piece about the whole issue, and under the subhead "What PCs are affected?" the Ars guy wrote: BitLocker is a form of full-disk encryption that exists mostly to prevent someone who steals your laptop from taking the drive out, sticking it into another system and accessing your data without requiring your account password. In other words, they're unable to start up your laptop, so they just take the hard drive out and stick it in a different machine which they know how to start up. Many modern Windows 10 and 11 systems, they write, use BitLocker by default. When you sign into a Microsoft account in Windows 11 Home or Pro on a system with a TPM, your drive is typically encrypted automatically and a recovery key is uploaded to your Microsoft account.

37:57
In a Windows 11 Pro system, you can turn on BitLocker manually, whether you use a Microsoft account or not, backing up the recovery key any way you see fit. They say: regardless, a potential BitLocker exploit could affect the personal data on millions of machines. So how big of a deal is this new example of an old attack? For many individuals, the answer's probably "not very". One barrier to entry for attackers is technical: many modern systems use firmware TPM modules, or fTPMs, that are built directly into most processors.

38:45 - Leo Laporte (Host)
I think all AMD systems do that right.

38:47 - Steve Gibson (Host)
Right, yeah. In cheaper machines, he writes, this can be a way to save on manufacturing. Why buy a separate chip if you could just use a feature of the CPU you're already paying for? In other systems, including those that advertise compatibility with Microsoft's Pluton security processors, it's marketed as a security feature that specifically mitigates these kinds of so-called sniffing attacks. That's because there's no external communication bus to sniff for an fTPM. It's integrated into the processor, so any communication between the TPM and the rest of the system also happens inside the processor. Virtually all self-built Windows 11-compatible desktops will use fTPMs, as will modern budget desktops and laptops. We checked four recent sub-$500 Intel and AMD laptops from Acer and Lenovo; all used firmware TPMs. Ditto for four self-built desktops with motherboards from Asus, Gigabyte and ASRock. Ironically, if you're using a high-end Windows laptop, your laptop is slightly more likely to be using a dedicated external TPM chip, which means you might be vulnerable. The easiest way to tell what type of TPM you have is to go into the Windows Security Center, go to the Device Security screen and click Security Processor Details. If your TPM's manufacturer is listed as Intel, for Intel systems, or AMD, for AMD systems, you're most likely using your system's fTPM and this exploit won't work on your system. The same goes for anything with Microsoft listed as the TPM manufacturer, which generally means the computer uses Pluton. But if you see another manufacturer listed that is not Intel, AMD or Microsoft, you're probably using a dedicated external TPM.

41:12
He said: I saw STMicroelectronics TPMs, that's a very popular one, in a recent high-end Asus Zenbook, Dell XPS 13, and mid-range Lenovo ThinkPad. Stack Smashing, the guy who publicized this, again, you know, reminded everybody of this, also posted photos of a ThinkPad X1 Carbon Gen 11 with a hardware TPM and all the pins someone would need to try to nab the encryption key, as evidence that not all modern systems have switched over to fTPMs, admittedly, he wrote, something I had initially assumed. Laptops made before 2015 or 2016 are all virtually guaranteed to be using external hardware TPMs, when they have any. That's not to say fTPMs are completely infallible. Some security researchers have been able to defeat the firmware TPMs in some of AMD's processors with, quote, two to three hours of physical access to the target device, unquote. But firmware TPMs just aren't susceptible to the kind of physical Raspberry Pi-based attack that Stack Smashing demonstrated.

42:38
Okay, so there is some good news here, at least in the form of what you can do if you really need and want the best possible protection. It's possible to add a PIN to the boot-up process even now, so that the additional factor of something you know can be used to strongly resist TPM-only attacks. Microsoft provides a couple of very good and extensive pages which focus upon hardening BitLocker against attacks. I've included links to those articles in the show notes. But to give you a sense for the process of adding a PIN to your system right now, Ars explains, under their subhead "So what can you do about it?": Most individual users don't need to worry about this kind of attack. Many consumer systems don't use dedicated TPM chips at all, and accessing your data requires a fairly skilled attacker who's very interested in pulling the data off your specific PC rather than wiping it and reselling or stripping it for parts. He says this is not true of business users who deal with confidential information on their work laptops, but hopefully their IT departments don't need to be told what to do about that.

44:04
Okay. So if you want to give yourself an extra layer of protection, Microsoft recommends setting up an enhanced PIN that is required at startup, in addition to the theoretically sniffable key that the TPM provides. IT admins can enable this remotely via Group Policy. To enable it on your own system, open the Local Group Policy Editor: use Windows+R to open the Run dialog, type gpedit.msc, and hit Enter. Then navigate to Computer Configuration, Administrative Templates, Windows Components, BitLocker Drive Encryption, Operating System Drives. Enable both "Require additional authentication at startup" and "Allow enhanced PINs for startup." Then open a command prompt window as an admin and type "manage-bde -protectors -add c: -TPMAndPIN". This is all in the show notes, of course. That command will immediately prompt you to set a PIN for the drive.

45:30
I would think of it as a password. Anyway, he says, once you've done this, the next time you boot, the system will ask for a PIN before it boots into Windows. He says an attacker with physical access to your system and a sufficient amount of time may be able to gain access by brute-forcing this PIN, so it's important to make it complex, like any good password. And again, I would make it really good. If you're taking the time to do it at all, why not? He finishes: a highly motivated, technically skilled attacker with extended physical access to your device may still be able to find a way around these safeguards. Regardless, having disk encryption enabled keeps your data safer than it would be with no encryption at all, and that will be enough to deter lesser-skilled casual attackers from being able to get at your stuff. So, ultimately, we're facing the same trade-off as always: convenience versus security.
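For a sense of why PIN complexity matters here, a quick back-of-the-envelope calculation helps. This is just a sketch; the guess rate below is a purely hypothetical assumption for illustration, and a real TPM's anti-hammering lockouts slow guessing far more than any fixed rate suggests:

```python
# Rough estimate of brute-force search effort for BitLocker PINs.
# The guess rate is a hypothetical assumption; real TPMs enforce
# anti-hammering lockouts that throttle attackers much further.

def search_space(alphabet_size: int, length: int) -> int:
    """Total number of possible PINs of exactly `length` characters."""
    return alphabet_size ** length

def worst_case_days(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Days to exhaust the entire space at a fixed guess rate."""
    return search_space(alphabet_size, length) / guesses_per_second / 86_400

# A 6-digit numeric PIN: only a million possibilities.
print(search_space(10, 6))              # 1000000

# An 8-character "enhanced PIN" drawn from roughly 70 keyboard
# characters is over half a quadrillion possibilities.
print(search_space(70, 8) > 5 * 10**14) # True
```

The point of the arithmetic: moving from a short numeric PIN to a longer enhanced PIN grows the search space exponentially, which is why Steve suggests treating the PIN like a full password.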

46:34
In the absence of a very strong PIN password, anyone using a system that is in any way able to decrypt itself without their assistance should recognize the inherent danger of that. If the system escapes their control, bad guys might be able to arrange to have the system do the same thing for them, that is, decrypt without anything that they don't know. Requiring something you know is the only true protection, maybe plus something else that you have, if that could be arranged. That's what I did when I did my little European trip to introduce SQRL: I had my laptop linked to my phone, and my iPhone had to be present. At the same time, BitLockering a drive is certainly useful, since it will strongly prevent anyone who separates the drive from the machine from obtaining anything that's protected in any way. So BitLocker, yes; PIN, yes. And, as we've seen, it's possible to add a PIN after the fact, and if your PIN is weak, you can still strengthen it, and you should consider doing so.

47:57 - Leo Laporte (Host)
Do we still like VeraCrypt? Would you prefer VeraCrypt to BitLocker? BitLocker is so convenient.

48:04 - Steve Gibson (Host)
It's convenient, and VeraCrypt is 100% strong. Um, I was thinking the same thing. BitLocker suffers a little bit from, you know, the monoculture effect of everybody having it and it just being built in. On the other hand, its convenience means that it won't get in anyone's way.

48:24 - Leo Laporte (Host)
Right, yeah, you just log in to the computer as normal. Yeah, yeah. But if you wanted really better security, I think VeraCrypt is still our choice. Yep. Now, what was it, its predecessor? I forgot. What was it? TrueCrypt?

48:42 - Steve Gibson (Host)
TrueCrypt. TrueCrypt is gone.

48:44 - Leo Laporte (Host)
Yeah, yeah, yeah, yep. All right, if your pin is weak, you can still straighten it. The motto of the day.

48:51 - Steve Gibson (Host)
If the pin is weak, you can still straighten it. I like it Leo.

48:56 - Leo Laporte (Host)
That's Kalia, who is a textile worker. I do want to ask a little tiny favor from all of you, not just Club Twit members. Every year, you may remember, we do a survey of our audience. We want to get to know you a little bit better. It helps us with sales, because we can say, as we often do, 70% of our audience are IT decision makers, that kind of thing. It's a very quick survey; it should only take you a couple of minutes. It's twit.tv/survey. This is the new 2024-2025 survey. We're starting a little bit earlier this year than we usually do. It just helps us, and you'd be doing us all a favor if you did it. So in between shows, maybe: twit.tv/survey.

49:44 - Steve Gibson (Host)
Thank you so much. We can guess. We know that PQ stands for post-quantum, and of course we'd be right. So is this Apple's third attempt at a post-quantum protocol, after one and two somehow failed or fell short? What? No. You know, PQ3, what happened to PQ1 and PQ2? No, Apple has apparently invented levels of security, oh, and put themselves at the top of the heap. So PQ3 offers what they call Level 3 security.

50:27
So here's how SEAR, S-E-A-R, Apple's Security Engineering and Architecture group, introduced this new protocol in their recent blog posting. They write: Today we are announcing the most significant cryptographic security upgrade in iMessage history with the introduction of PQ3, a groundbreaking post-quantum cryptographic protocol that advances the state of the art of end-to-end secure messaging, with compromise-resilient encryption and extensive defenses against even highly sophisticated quantum attacks. PQ3 is the first messaging protocol to reach what we call Level 3 security, providing protocol protections that surpass those in all other widely deployed messaging apps. To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world. Well, so they're proud of their work, and they've decided to say that not only will iMessage be using an explicitly quantum-safe encrypting messaging technology, but that theirs is bigger than, I mean, better than anyone else's. Since, so far, this appears to be as much a marketing promotion as a technical disclosure, it's worth noting that they've outlined the four levels of messaging security, which places them alone at the top of the heap.

52:14
Apple has defined four levels of messaging, with Levels 0 and 1 being pre-quantum, offering no protection from quantum computing breakthroughs being able to break their encryption, and Levels 2 and 3 being post-quantum. Level 0 is defined as no end-to-end encryption by default, and they place QQ, Skype, Telegram, and WeChat into this Level 0 category. Now, this causes me to be immediately skeptical of this as being marketing nonsense, since, although I have never had any respect for Telegram's ad hoc basement cryptography, which they defend not with any sound theory but by offering a reward to anyone who can break it, we all know that Telegram is encrypted and that everyone using it is using it because it's encrypted. So lumping it into Level 0 and saying that it's not encrypted by default is, at best, very disingenuous, and I think it should be beneath Apple. But okay. Level 1 are those pre-quantum algorithms, because Levels 0 and 1 are both pre-quantum, you know, non-quantum-safe, that Apple says are encrypted by default. The messaging apps they have placed in Level 1 are Line, Viber, WhatsApp, and the previous Signal, you know, before they added post-quantum encryption, which we covered a few months ago, and the previous iMessage, before it added PQ3, which actually it hasn't quite added yet, but it's going to. Now we move into the quantum-safe realm for Levels 2 and 3.

54:38
Since Signal beat Apple to the quantum-safe punch, it's sitting all by its lonesome at Level 2, with the sole feature of offering post-quantum crypto with end-to-end encryption by default. As Signal's relatively new PQXDH protocol moves into WhatsApp and other messaging platforms based on Signal's open technologies, those other messaging apps will also inherit Level 2 status, lifting them from the lowly levels of 0 and 1. But Apple's new PQ3 iMessaging system adds an additional feature that Signal lacks, which is how Apple granted themselves sole dominion over Level 3. Apple's PQ3 adds ongoing post-quantum re-keying to its messaging, which they make a point of noting Signal currently lacks. This blog posting was clearly written by the marketing people, so it's freighted with far more self-aggrandizing text than would ever be found in a technical cryptographic protocol disclosure, which this is not, but I need to share some of it to set the stage, since what Apple feels is important about PQ3 is its attribute of ongoing re-keying. So, to that end, Apple reminds us, quote: When iMessage launched in 2011,

56:05
It was the first widely available messaging app to provide end-to-end encryption by default, and we've significantly upgraded its cryptography over the years. We most recently strengthened the iMessage cryptographic protocol in 2019 by switching from RSA to elliptic curve cryptography and by protecting encryption keys on device with the Secure Enclave, making them significantly harder to extract from a device, even for the most sophisticated adversaries. That protocol update went even further with an additional layer of defense: a periodic re-key mechanism to provide cryptographic self-healing even in the extremely unlikely case that a key ever became compromised. Again, extremely unlikely that a key would ever become compromised, they acknowledge, but now they have re-keying, so it's possible. Each of these advances was formally verified by symbolic evaluation, a best practice that provides strong assurances of the security of cryptographic protocols. And about all that, I agree. Okay. So Apple has had on-the-fly, ongoing re-keying in iMessage for some time, and it's clear that they're going to be selling this as a differentiator for PQ3, to distance themselves from the competition. After telling us a whole bunch more about how wonderful they are, they get down to explaining their level system and why they believe PQ3 is an important differentiator. Here's what they say, and I skipped all the other stuff they said.

58:01
To reason through how various messaging applications mitigate attacks, it's helpful to place them along a spectrum of security properties. There's no standard comparison to employ for this purpose, so we lay out our own simple, coarse-grained progression of messaging security levels. We start with classical cryptography and progress toward quantum security, which addresses current and future threats from quantum computers. Most existing messaging apps fall either into Level 0, no end-to-end encryption by default and no quantum security, or Level 1, with end-to-end encryption by default but no quantum security. A few months ago, Signal added support for the PQXDH protocol, becoming the first large-scale messaging app to introduce post-quantum security in the initial key establishment. This is a welcome and critical step that, by our scale, elevated Signal from Level 1 to Level 2 security. At Level 2:

59:19
The application of post-quantum cryptography is limited to the initial key establishment, providing quantum security only if the conversation key material is never compromised. But today's sophisticated adversaries already have incentives to compromise encryption keys, because doing so gives them the ability to decrypt messages protected by those keys for as long as the keys don't change. To best protect end-to-end encrypted messaging, the post-quantum keys need to change on an ongoing basis to place an upper bound on how much of a conversation can be exposed by any single point-in-time key compromise, both now and with future quantum computers. Therefore, we believe messaging protocols should go even further and attain level three security, where post-quantum cryptography is used to secure both the initial key establishment and the ongoing message exchange, with the ability to rapidly and automatically restore the cryptographic security of a conversation even if a given key becomes compromised.
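Apple's four-level scale, as just quoted, boils down to three yes/no properties. Here's a toy encoding of that classification; the app placements in the comments are the episode's examples of Apple's claims, not an official API or an endorsement:

```python
# A toy encoding of Apple's messaging-security "levels" as described above.
# Level 0: no end-to-end encryption by default.
# Level 1: E2EE by default, but no post-quantum crypto.
# Level 2: post-quantum crypto in the initial key establishment only.
# Level 3: post-quantum crypto in key establishment AND ongoing re-keying.

def apple_level(e2ee_default: bool, pq_key_establishment: bool, pq_rekeying: bool) -> int:
    if not e2ee_default:
        return 0
    if not pq_key_establishment:
        return 1
    return 3 if pq_rekeying else 2

# The placements Apple made, per the discussion:
print(apple_level(False, False, False))  # 0  (QQ, Skype, Telegram, WeChat, per Apple)
print(apple_level(True, False, False))   # 1  (Line, Viber, WhatsApp, pre-PQXDH Signal)
print(apple_level(True, True, False))    # 2  (Signal with PQXDH)
print(apple_level(True, True, True))     # 3  (iMessage with PQ3)
```

Note how the scheme is constructed so that only the re-keying property, the one PQ3 adds, separates Level 3 from Level 2, which is exactly Steve's point about it being a marketing-friendly scale.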

01:00:36
Okay. It would be very interesting to hear Signal's rebuttal to this, since it's entirely possible that this is largely irrelevant marketing-speak. It's not that it's not true, or that continuous key rotation is not useful. We've talked about this in the past. Key rotation gives a cryptographic protocol a highly desirable property known as perfect forward secrecy. Essentially, the keys the parties are using to protect their conversation from prying eyes are ephemeral. A conversation flow is broken up and compartmentalized by the key that's in use at the moment, but the protocol never allows a single key to be used for long. The key is periodically changed. The reason I'd like to hear a rebuttal from Signal is that their protocol, the Signal protocol, has always featured perfect forward secrecy. Remember Axolotl?

01:01:56 - Leo Laporte (Host)
Yeah.

01:01:57 - Steve Gibson (Host)
Yes, which is for?

01:01:58 - Leo Laporte (Host)
the same purpose as re-keying.

01:02:02 - Steve Gibson (Host)
Yes, it is re-keying. Here's what Wikipedia says. I'm quoting from Wikipedia.

01:02:07
In cryptography, the double ratchet algorithm, previously referred to as the Axolotl ratchet, is a key management algorithm that was developed by Trevor Perrin and Moxie Marlinspike in 2013.

01:02:25
Oh, uh-huh. It can be used as part of a cryptographic protocol to provide end-to-end encryption for instant messaging. After an initial key exchange, it manages the ongoing renewal and maintenance of short-lived session keys. It combines a cryptographic so-called ratchet based on the Diffie-Hellman key exchange and a ratchet based on a key derivation function, such as a hash function, and is therefore called a double ratchet. The algorithm provides forward secrecy for messages and implicit renegotiation of forward keys, properties for which the protocol is named. Right, in 2013. In other words, it would be nice to hear from Signal, since Apple appears to be suggesting that they alone are offering the property of perfect forward secrecy for quantum-safe messaging, when it certainly appears that Signal got there 11 years ago. This is not to say that having this feature in iMessage is not a good thing, but it appears that Apple may not actually be alone at PQ Level 3, much as they would like to be.
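To make the ratcheting idea concrete, here is a minimal sketch of just the symmetric half of the double ratchet, the KDF chain, using HMAC-SHA256 as the one-way step. The 0x01/0x02 constants follow the style of Signal's published spec, but this omits the Diffie-Hellman ratchet, out-of-order message handling, and everything else the real protocol does:

```python
import hashlib
import hmac

def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One 'click' of the symmetric ratchet: derive a one-time message key
    and the next chain key from the current chain key. Because each step is
    a one-way function, an attacker who captures today's chain key cannot
    recover yesterday's message keys. That's the forward secrecy property."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain

# Start from some shared secret (a fixed demo value here) and ratchet forward.
chain = hashlib.sha256(b"initial shared secret (demo only)").digest()
keys = []
for _ in range(3):
    mk, chain = kdf_chain_step(chain)
    keys.append(mk)

# Every message gets a fresh key; no key is ever reused.
print(len(set(keys)) == len(keys))  # True
```

Each message key is used once and discarded, so compromising any single key exposes only that one message, which is exactly the "compartmentalized by the key in use at the moment" property Steve describes.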

01:03:49 - Leo Laporte (Host)
That's what Steve is so good at, those deep dives. You're just seeing a section of the entire discussion. It was more than half an hour on Apple's PQ3. But that's what we love, Steve, right? We get down to the nitty-gritty details. Glad you're here. Happy holidays. You're watching our best of 2024. All right, now we get into something that was a huge topic towards the end of the year: Microsoft's Recall. This was Steve's first reaction, a 50-gigabyte pile of nonsense.

01:04:36 - Steve Gibson (Host)
So, since we began the podcast with a general theme of how AI, which is not even close to being intelligent, is being misapplied during these early days, I feel as though a security- and privacy-focused podcast like this one ought to take note of the new Recall feature that will be part of the next generation of ARM-based Windows 11, what's known as Copilot Plus laptop PCs. First of all, yes, it does appear that ARM processors have finally come far enough along to be able to carry the weight of Windows, and while having Windows on ARM will certainly create a new array of challenges, like, for example, the lack of specific hardware drivers that only exist for Intel kernels, in the more self-contained applications, you know, where drivers are much less used, such as laptops, where power consumption and battery life trump pretty much any other consideration, it's foreseeable that Windows may finally be able to find a home on ARM. Today, laptop and tablet form-factor machines containing Qualcomm Snapdragon ARM processors running Windows 11 have been announced and are, in some cases, available for pre-order from Acer, Asus, Dell, HP, Lenovo, Microsoft, and Samsung. It's also worth noting that Intel PCs will also be getting Copilot Plus at some time in the future, but they will need to have a neural processing engine.

01:06:11
Answering the question, what makes Copilot Plus PCs unique, Microsoft writes: Copilot Plus PCs are a new class of Windows 11 PCs that are powered by a turbocharged neural processing unit, an NPU, a specialized computer chip for AI-intensive processes like real-time translations and image generation, that can perform more than 40 trillion operations per second. So we have TOPS, trillion operations per second; so, more than 40 TOPS. And later, Microsoft writes: We are partnering with Intel and AMD to bring Copilot Plus PC experiences to PCs with their processors in the future. So potentially everybody's going to be able to get this. Okay. So what is Recall? Microsoft explains. They said: You can use Recall on Copilot Plus PCs to find the content you have viewed on your device.

01:07:23
Recall is currently in preview status. During this phase, we will collect customer feedback, develop more controls for enterprise customers to manage and govern Recall data, and improve the overall experience for users. On devices that are not powered by a Snapdragon X Series processor, installation of a Windows update will be required to run Recall. Recall is currently optimized for select languages, including English, Simplified Chinese, French, German, Japanese, and Spanish. This means Recall is able to retrieve snapshots from your PC's timeline based on more sophisticated searches in these languages. During the preview phase, we will enhance optimization for additional languages. Recall can also retrieve snapshots from your PC's timeline based on text-to-text searches in more than 160 languages. Okay, fortunately.

01:08:32
They then ask themselves, how does Recall work? To which they reply: Recall uses Copilot Plus PCs' advanced processing capabilities to take images of your active screen every few seconds. The snapshots are encrypted and saved on your PC's hard drive. You can use Recall to locate the content you have viewed on your PC using search, or on a timeline bar that allows you to scroll through your snapshots. Once you find the snapshot that you were looking for in Recall, it will be analyzed and offer you options to interact with the content. Recall will also enable you to open the snapshot in the original application in which it was created. Whoa, really? Okay. And, as Recall is refined over time, it will open the actual source document, website, or email in a screenshot, which, okay, is mind-boggling. But they said this functionality will be improved during Recall's preview phase.

01:09:59
So, before they let it loose. They said: Copilot Plus PC storage size determines the number of snapshots that Recall can take and store. The minimum hard drive space needed to run Recall is 256 gigabytes, and 50 gigabytes of space must be available. The default allocation for Recall on a device with 256 gigabytes will be 25 gigabytes, which can store approximately three months of snapshots. You can increase the storage allocation for Recall in your PC settings. Old snapshots will be deleted once you use up your allocated storage, allowing new ones to be stored. Okay, so it's sort of a rolling 90-day window, the most recent 90 days of screen snapshots, taken every few seconds.
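Microsoft's numbers, 25 gigabytes holding roughly three months, imply a per-snapshot budget you can back out with simple arithmetic. Here's a sketch, where the snapshot interval and hours of daily active screen time are purely assumed for illustration; Microsoft hasn't published either figure:

```python
# Back-of-envelope: what Recall's stated numbers imply about snapshot size.
# Assumptions (hypothetical, for illustration only): one snapshot every
# 5 seconds, 8 hours of active screen time per day, 90 days kept in 25 GB.

ALLOCATION_GB = 25
DAYS = 90
SNAPSHOT_INTERVAL_S = 5
ACTIVE_HOURS_PER_DAY = 8

snapshots_per_day = ACTIVE_HOURS_PER_DAY * 3600 // SNAPSHOT_INTERVAL_S
total_snapshots = snapshots_per_day * DAYS
bytes_available = ALLOCATION_GB * 10**9
avg_bytes_per_snapshot = bytes_available / total_snapshots

print(snapshots_per_day)                     # 5760 snapshots per day
print(round(avg_bytes_per_snapshot / 1024))  # ~47 KB average per snapshot
```

Under these assumptions, each snapshot averages only a few tens of kilobytes, which suggests heavy compression or deduplication rather than full-resolution screenshots, consistent with the rolling-window behavior Microsoft describes.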

01:10:59
Okay. Then they ask, what privacy controls does Recall offer? They respond: Recall is a key part of what makes Copilot Plus PCs special, and Microsoft built privacy into Recall's design from the ground up, which, of course, we all recognize as standard boilerplate which we all hope is true. They said: On Copilot Plus PCs powered by a Snapdragon X Series processor, you will see the Recall taskbar icon after you first activate your device. You can use that icon to open Recall's settings and make choices about what snapshots Recall collects and stores on your device. You can limit which snapshots Recall collects. For example, you can select specific apps or websites visited in a supported browser to filter out of your snapshots. In addition, you can pause snapshots on demand from the Recall icon in the system tray, clear some or all snapshots that have been stored, or delete all the snapshots from your device. You could call that the "I'm going to watch porn" button now.

01:12:14 - Leo Laporte (Host)
So yeah, Press the porn button, yeah.

01:12:19 - Steve Gibson (Host)
And it occurs to me that I later talk about how snapshots of the Windows-based Signal app would be a problem.

01:12:33
Oh, because that's in the clear, right? Right, right. I mean, it's what the user sees. Maybe this allows you to say, don't take snapshots of that window. And we should also remember that what we see is a graphical user interface, but Windows knows the text behind the actual controls that it's displaying. So it doesn't actually have to be, I mean, I guess who knows what it's doing in detail. But my point is that while we see graphics, there's actual text which is being mapped into bitmapped fonts, which is then being displayed on the screen. So behind the screens, so to speak, Microsoft actually has the raw text which was used to generate the screen.

01:13:28 - Leo Laporte (Host)
Yeah, yeah, that makes sense okay.

01:13:30 - Steve Gibson (Host)
So, um, they said Recall also does not take snapshots of certain kinds of content, including InPrivate web browsing sessions in Microsoft Edge. And, by the way, they only said Edge, but I saw elsewhere that it's any of the browsers that have a well-defined private browsing mode; they do not record that. And they said it also treats material protected under digital rights management, you know, DRM stuff, similarly; like other Windows apps such as the Snipping Tool, Recall will not store DRM content. And they note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry. Okay. So we're rolling toward an entirely new capability for Windows PCs, where we'll be able to store data which I presume is somehow indexed first, then encrypted for storage and later access. And, unless otherwise instructed and proscribed, this system is indiscriminately taking snapshots of our PC screen content every few seconds and is, by Microsoft's own admission, potentially capturing and saving for later retrieval financial account numbers, monetary balances, contract language, proprietary corporate memos and communications, and who knows what private things we'd really rather never have recorded, or whatever else the user might assume will never go any further. This is where our much-beloved and overworked phrase "what could possibly go wrong" comes to mind. Does anyone not imagine for an instant that having searchable access to the previous 90 or more days of a PC's screen might be hugely interesting to all manner of both legal and illegal investigators? Corporate espionage is a very real thing.
China is moving their enterprises away from Windows as rapidly as they can, but you have to know that cyber attackers, many of the most skillful and persistent of whom seem to be persistently based in China, must be beside themselves with delight over this new prospect that we decadent capitalists in the West are going to start having our PCs recording everything that's displayed on their screens. What a great idea. If history teaches us anything, it's that we still have not figured out how to keep a secret, and especially not Microsoft. So what Microsoft is proposing to plant inside all next-generation PCs is tantamount to a 50-gigabyte privacy bomb. Maybe it will never go off, but it will certainly be sitting there trying to. And just ask yourself whether law enforcement and intelligence agencies don't also think this sounds like a terrific idea. Oh, you betcha. With great power comes great responsibility, and here, clearly, there's much to go wrong. Microsoft understands this perception, and so they ask: How is your data protected when using Recall? They explain: Recall snapshots are kept on Copilot Plus PCs themselves, on the local hard disk, and are protected using data encryption on your device and, if you have Windows 11 Pro or an enterprise Windows 11 SKU, BitLocker.

01:18:02
Recall screenshots are only linked to a specific user profile, and Recall does not share them with other users, make them available for Microsoft to view, or use them for targeting advertisements. Screenshots are only available to the person whose profile was used to sign into the device. If two people share a device with different profiles, they'll not be able to access each other's screenshots. If they use the same profile to sign into the device, then they will share a screenshot history and thus, you know, be able to scroll back to see what the other person has been doing. Otherwise, Recall screenshots are not available to other users or accessed by other applications or services.

01:18:48
Okay, so all that really means is that they've done the obvious thing, right? You know, they've divided the machine in the same way they do currently, with things like apps that you install for only one profile. So, okay, that's what Microsoft had to say.

01:19:07
The guys from Ars Technica watched Microsoft's presentation of this last Monday and gave their write-up an impressively factual and neutral headline. They said: New Windows AI feature records everything you've done on your PC. And then they said Recall uses AI features to, quote, take images of your active screen every few seconds, unquote. So Ars wrote:

01:19:41
At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called Recall for Copilot Plus PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users. Microsoft says on its website, quote: Recall uses Copilot Plus PCs' advanced processing capabilities to take images of your active screen every few seconds. The snapshots are encrypted and saved on your PC's hard drive. You can use Recall to locate the content you viewed on your PC using search, or on a timeline bar that allows you to scroll through your snapshots. Unquote. Ars wrote: By performing a Recall action, users can access a snapshot from a specific time period, providing context for the event or moment they're searching for. It also allows users to search through teleconference meetings they've participated in and videos watched, using an AI-powered feature that transcribes and translates speech.

01:21:11
At first glance, the Recall feature seems like it may set the stage for potential gross violations of user privacy. Despite reassurances from Microsoft, that impression persists for second and third glances as well. They said, for example, someone with access to your Windows account could potentially use Recall to see everything you've been doing recently on your PC, which might extend beyond the embarrassing implications of pornography viewing and actually threaten the lives of journalists or perceived enemies of the state. And I'll interject to say, in other words, this puts examining someone's web browser history to shame. How quaint that becomes. Ars continues: Despite the privacy concerns,

01:22:08
Microsoft says that the Recall index remains local and private on-device, encrypted in a way that is linked to a particular user's account. Microsoft says Recall screenshots are only linked to a specific user profile, and Recall does not share them with other users or make them available to Microsoft to view. Anyway, blah, blah, blah, what I just read about that. So they said users can pause, stop, or delete captured content, and can exclude specific apps or websites. Recall won't take snapshots of InPrivate web browsing sessions in Microsoft Edge or DRM-protected content. However, Recall won't actively hide sensitive information like passwords and financial account numbers that appear on screen.

01:23:01
Microsoft previously explored a somewhat similar functionality with the Timeline feature in Windows 10, which the company discontinued in 2021, but it didn't take continuous snapshots. Recall also shares some obvious similarities to Rewind, a third-party app for Mac we covered in 2022 that logs user activities for later playback, they said. As you might imagine, all this snapshot recording comes at a hardware penalty. To use Recall, users will need to purchase one of the new Copilot Plus PCs powered by Qualcomm's Snapdragon X Elite chips, which include the necessary neural processing unit. There are also minimum storage requirements for running Recall, with a minimum of 256GB of hard drive space and 50GB of available space. The default allocation for Recall on a 256GB device is 25GB, which can store approximately three months of snapshots. Users can adjust the allocation in their PC settings. As far as availability goes, they conclude, Microsoft says that Recall is still undergoing testing. Microsoft says on its website: Recall is currently in preview status. During this phase, we'll collect customer feedback, develop more controls for enterprise customers to manage and govern Recall data, and improve the overall experience for users.

01:24:38
Okay, I should just note that the amount of storage Recall uses does scale upward with the size of the system's mass storage, and presumably the duration of the scroll-back increases similarly. It'll take 25 gigs when 256 is available, 75 gigabytes on a 512-gig drive, and 150 gigabytes from a system with a one-terabyte drive of primary mass storage. So presumably the more storage the system is able to commandeer, the further it's possible to scroll back through the system's display history. Okay now, while trying to be objective about this, the first question that leaps into the foreground for me is whether anyone actually needs or wants this. You know what I mean. Is this a big, previously unappreciated problem that everyone has?

01:25:49
Okay, but trying to be objective: first of all, compared to the static contents of a hard drive, Recall would be, objectively, a goldmine of additional new information about the past 90-plus days of someone's life as viewed through their computer activities. And more than ever before, people's entire lives and their private lives are reflected in what's shown on the screens of their computers. Maybe that makes scrolling back through their recorded lives compelling, I don't know. But we know from Microsoft that it will be snapping video conference content on the fly, and, as I mentioned, the Windows Signal app that goes to extremes to protect the contents of its chats would presumably be captured. Unless you're able, as I mentioned before, and they sort of suggest you can, to tell Recall not to record specific applications. So you probably want to turn that off, or maybe you trust Microsoft and it'll be part of your scroll-back. But you know, email screens and nearly everything that happens on a PC would be captured, and of course that's the point, right? But the vast majority of that content will not have been stored on the machine's hard drive ever until now.

01:27:25
So, objectively, the presence of Recall clearly introduces a new, never-before-existing liability, and that's what everyone who talks about this sees: a potential for creating havoc where none existed before. So the question, it seems to me, is whether the new value that's created and is returned by Recall's scrolling usage history justifies whatever new risk might also be created by its retention of that data. How useful will having all that actually be? I've tried to imagine an instance where I wish I could look back in time at my computer screen. I suppose I don't feel the need, since I've never had the option. So if I knew I could scroll my computer screens back in time, I suppose it might be an interesting curiosity, but it really doesn't feel like a feature I've been needing and missing until now. I suppose an analogy would be that the world had no idea what it was missing before the creation of social media, and hasn't that been a big boon to mankind? Now, unfortunately, we seem unable to live without it. Perhaps this will be the same. The bottom line is, I think we're just going to need to live with this thing for a while. We're going to need to see whether this is a capability desperately searching for a need, or whether, once people get used to having this new thing, they start thinking, how did I ever live without this? However, one thing that is also absolutely objectively true is that everyone will be carrying around a 50-gigabyte privacy bomb that they never had before. Maybe it'll be worth the risk. Only time will tell.

01:29:39
Oh, and Simon Zarafa posted a tweet from someone who has been poking into Recall's storage.

01:29:47
He's detective at mastodon.social, who wrote: can confirm that Recall data is indeed stored in a SQLite 3 database.

01:30:00
The folder it's in is fully accessible only by system and the administrators group.

01:30:06
Attempting to access it as a normal user yields the usual you don't currently have permission error.

01:30:14
And he said, here's how the database is laid out, for those curious. And he said, figured you might appreciate a few screenshots. So I put one in the show notes and, sure enough, he's got DB Browser for SQLite open, and it shows the layout of the table with all of the various components, you know, window capture, text index content, window capture text index data, window text doc size and relations and all kinds of stuff. So anyway, I guess what this means is that, if nothing else, if that data should ever escape from anyone's PC, it will not be difficult for anybody who gets it to open it up and browse around in it, because it's just a SQLite 3 database. And, Leo, you know, I guess, if search really worked and you were able to search on something that you remember, but you didn't write down or didn't record, didn't save, but it was just like right there at your fingertips and bang, it popped up and showed it to you, I guess I could see that that could be compelling.
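To see just how little effort browsing such a file takes, here's a minimal sketch using only Python's standard library. It builds a throwaway in-memory stand-in for a Recall-style store (the "WindowCapture" table name and its columns are hypothetical illustrations, not Microsoft's actual schema) and then enumerates the tables the same way DB Browser for SQLite would:

```python
import sqlite3

# Stand-in for a Recall-style store. "WindowCapture" and its columns are
# hypothetical, for illustration only -- not Microsoft's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE WindowCapture (id INTEGER PRIMARY KEY, "
    "app TEXT, captured_text TEXT, timestamp INTEGER)"
)
conn.execute(
    "INSERT INTO WindowCapture (app, captured_text, timestamp) "
    "VALUES ('notepad.exe', 'meeting notes', 1717000000)"
)

# Every SQLite file describes its own schema in the built-in
# sqlite_master table, which is how browser tools enumerate it.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['WindowCapture']

# Once the tables are known, the captured text is one query away.
for app, text in conn.execute("SELECT app, captured_text FROM WindowCapture"):
    print(app, text)
```

The point of the sketch is Steve's: there's no proprietary container here, so anyone holding a copy of the file can walk the whole index with a few lines of code.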

01:31:39 - Leo Laporte (Host)
We spent a lot more time talking about Recall throughout the year, and I have a feeling we're not done in 2025. There's going to be a lot more. You're watching the best of 2024 with Steve Gibson's Security Now, so glad you're here. On we go. Let's talk about what was probably, well, it's hard to pick one of the biggest, let's say, security nightmares. It certainly kept a lot of IT professionals up all weekend. So I start this one in the show notes with a picture from a listener.

01:32:12 - Steve Gibson (Host)
This shows the blue screens of death at the deserted Delta terminal of the Seattle-Tacoma airport. Yeah, which was taken Friday morning by a listener who's been listening since episode one. And we see three screens. It looks like maybe there's someone back in the distance; they're sort of behind one of the screens facing away from us, but otherwise there's nobody in line. Uh, there's an empty wheelchair.

01:32:41 - Leo Laporte (Host)
4,000 flights canceled. That's for Delta alone. By the way, Paul Thurrott, being a little bit, uh, of a pedant, says that's not the blue screen of death, that's a recovery screen. But you know what? It's a blue screen of death.

01:32:59 - Steve Gibson (Host)
It's a blue screen. Okay, paul, point taken. So it's fortunate that GRC's incoming email system was already in place and ready for this CrowdStrike event.

01:33:13 - Leo Laporte (Host)
I bet you got the mail.

01:33:15 - Steve Gibson (Host)
Oh, did I. And I'm going to share some first-hand accounts right from the field, because it enabled a number of our listeners to immediately write last week to send some interesting and insightful feedback.

01:33:28 - Leo Laporte (Host)
A lot of our listeners, I'm sure have very sore feet going from machine to machine all weekend.

01:33:33 - Steve Gibson (Host)
Holy cow. In one case, 20,000 workstations, and these people don't hang out on Twitter, so email was the right medium for them. Brian Tillman wrote: I can't wait to hear your comments next week about the current cloud outages happening today. My wife went to a medical lab this morning for a blood test and was turned away because the facility cannot access its data storage. Wow. Another listener wrote: good morning. I'm new to this group, only been I love this only been listening for the last eight years. Newbie, that's right, you're going to have to go back and catch up, my friend.

01:34:17
He says, I'm sure CrowdStrike will be part of next week's topics. Uh-huh. He said, I would love to hear your take on what and how this happened. I'm still up from yesterday fixing our servers and end users' computers. I work for a large hospital in Central California, and this has just devastated us. We have fixed hundreds of our critical servers by removing the latest file pushed by CrowdStrike and are slowly restoring services back to our end users and community. Thank you for all you do, keeping us informed and educated on issues like this. Looking forward to 999 and beyond.

01:34:59 - Leo Laporte (Host)
Oh, I like that. That could be our new slogan: 999 and beyond. I like it, that's it.

01:35:07 - Steve Gibson (Host)
Tom Jenkins posted to GRC's newsgroup: CrowdStrike is a zero-day defense software, so delaying updates puts the network at risk. I don't know how they managed to release this update with no one testing it. It seems obvious at this point even casual testing should have shown issues. And of course, Tom raises the billion-dollar and I'm probably not exaggerating the billion-dollar question: how could this have happened? I will be spending some time on that in a few minutes, but I want to first paint some pictures of what our listeners experienced firsthand. Tom's posting finished with: we had over 100 servers and about 500 workstations offline in this event, and recovery was painful. Their fix required the stations to be up. Unfortunately, the bad ones were in a boot loop that, for recovery, required manual entry of individual machine BitLocker keys to apply the fix. And of course that was often a problem, because machines that were protected by BitLocker needed to have the recovery keys present in order for their maintainers to be able to get to the file system, even after being booted into safe mode, because safe mode is not a workaround for BitLocker.

01:36:38
Seamus Maranon, who works for a major corporation, which he asked me to keep anonymous, although I asked, and he told me who it is, but asked for anonymity. He said, for us the issue started at about 12:45 am Eastern time. And get a load of this response and the way his team operated. So the issue started for him at 12:45 am Eastern time. He said, we were responding to the issue by 12:55, 10 minutes later, and had confirmed by 1:05 am that it was a global-level event, and communicated that we thought it was related to CrowdStrike. We mobilized our team and had extra resources on site by 1:30 am, 25 minutes later. The order of recovery we followed was the servers, production systems, our virtual environment and finally the individual PCs. In all there were about 500 individually affected systems across a 1,500-acre campus. We were able to get to 95% recovery before our normal office hours started, and we were back to normal by 10 am. Okay, now I am quite impressed by the performance of Seamus's team. To be back up and running by 10 am the day of, after 500 machines were taken down across a 1,500-acre campus in the middle of the night, is truly impressive, and I would imagine that whomever his team reports to is likely aware that they had a world-class response to a global-scale event. Since, for example, another of our listeners in Arizona was walking his dog in a mall, because it's too hot to walk pets outside during the day in Arizona, he took and sent photos of the sign on Dick's Sporting Goods the following day, on Saturday, stating that they were closed due to a data outage. So it took many other companies much longer to recover.

01:39:13
A listener named Mark Hull shared this. He said: Steve, thanks for all you do for the security community. I'm a proud SpinRite owner and have been an IT consultant since the days of DOS 3 and NetWare 2.x. He said: I do a lot of work in enterprise security, have managed CrowdStrike, write code and do lots of work with SCCM, he says, parens, MS Endpoint Management, as well as custom automation. So I feel I have a good viewpoint on the CrowdStrike disaster. He says: CrowdStrike is designed to prevent malware and, by doing so, provide high availability to all our servers and endpoints. The fact that their software may be responsible for one of the largest global outages is completely unacceptable. As you have said many times, mistakes happen, but this kind of issue represents a global company's complete lack of procedures, policies and design that could easily prevent such a thing from happening. Now, of course, this is, you know, what do they call it? Something quarterbacking.

01:40:24 - Leo Laporte (Host)
Monday night, yeah, Monday night quarterback, yeah, yeah. Okay, right, 20/20 hindsight, yeah.

01:40:30 - Steve Gibson (Host)
And I do have some explanations for this which we'll get to. Anyway, he said, given that CrowdStrike is continually updated to help defend against and I should just say no one's disagreeing with him, and you know, Congress will be finding out before long what happened here. But he said, given that CrowdStrike is continually updated to help defend against an ever-changing list of attacks, the concept of protecting their customers from exactly this type of issue should be core to their design. Working in automation, the rule is that you always have a pilot group to send out software to before you send it to everyone. He says: I work with organizations with easily over 100,000 users. If you don't follow these rules, you eventually live with the impact. In the old days, companies would have a testing lab of all kinds of different hardware and OS builds where they could test before sending anything out to production. This would have easily caught the issue, he says. Now it seems that corporations have eliminated this idea, since this is not a revenue-generating entity; they should research opportunity cost, he says. With the onset of virtualization, I would argue the cost of this approach continues to decrease. And again, at this point this is speculation, because we don't understand how this happened. But he says: since it appears this was not being done, another software design approach would be to trickle out the update, then have the code report back metrics from the machines that received the update at some set interval, for instance every 5, 10 or 30 minutes. The endpoints could send a few packets with some minor reporting details, such as CPU utilization, disk utilization and memory utilization. Then if CrowdStrike pushed an update and the first thousand machines never reported back after five minutes, there would be some automated process to suspend that update and send emails out to the testing team.
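The report-back gate Mark describes trickle the update out to a small canary group, wait a fixed interval, and automatically suspend the rollout if too few canaries phone home can be sketched in a few lines. To be clear, this is a hedged illustration of the general staged-rollout idea only, not CrowdStrike's actual pipeline; every name and threshold here is invented:

```python
from dataclasses import dataclass, field

@dataclass
class CanaryRollout:
    """Sketch of a staged-rollout gate: an update only graduates past
    the canary ring if enough of the first machines report back healthy
    within the check-in window. All parameters are illustrative."""
    canary_size: int = 1000
    required_ratio: float = 0.95          # e.g. 95% must check in
    reports: set = field(default_factory=set)
    suspended: bool = False

    def record_heartbeat(self, machine_id: str) -> None:
        # A post-update heartbeat carrying CPU/disk/memory metrics would
        # land here; for the sketch we only count that it arrived at all.
        self.reports.add(machine_id)

    def evaluate(self) -> bool:
        """Called when the check-in window expires. Returns True if the
        rollout may proceed to the next ring; otherwise suspends it."""
        ok = len(self.reports) >= self.canary_size * self.required_ratio
        if not ok:
            self.suspended = True          # halt the push, page the team
        return ok

rollout = CanaryRollout(canary_size=1000)
# Simulate a bad update: only 12 of 1,000 canaries ever report back
# (the rest are stuck in a boot loop and can never phone home).
for i in range(12):
    rollout.record_heartbeat(f"pc-{i}")
print(rollout.evaluate(), rollout.suspended)  # False True
```

The crucial property Mark is pointing at is that a machine crashed at boot can't send any heartbeat, so silence itself becomes the failure signal that stops the wider push.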
In the case of endpoints that check back every 10 minutes, you could set a counter, and blah, blah, blah. Anyway, he goes on to explain, you know, the sorts of things that make sense as means of preventing this from happening. And yes, I agree with him completely. From a theoretical standpoint, there are all kinds of ways to prevent this. And again, as I said, we'll wrap up by looking at some of that in detail. Samuel Gordon-Stewart in Canberra, Australia, wrote. He said: here in Australia it was mid-afternoon on a Friday. Most broadcast media suffered major outages, limiting their ability to broadcast news, or anything else for that matter.

01:43:28
Sky News Australia had to resort to taking a feed of Fox News, as they couldn't even operate the studio lights. They eventually got back on air with a limited capacity from a small control room in Parliament House. The national government-funded broadcaster, the ABC, had to run national news instead of their usual state-based news services and couldn't play any pre-recorded content, so reporters had to read their reports live to camera. A lot of radio stations were still unable to broadcast even Friday night. Supermarkets had their registers go down. One of the big supermarkets near me had half their registers offline. A department store nearby had only one register working.

01:44:14
Train services were halted, as the radio systems were all computerized. Airports ground to a halt. Half a dozen banks went offline. Telecommunication companies had outages. Many hospitals reverted to paper forms. A lot of state government systems seemed to be affected, but the federal government seemed less impacted. And who knows how long it will take for IT departments to be able to physically access PCs which won't boot so they can implement the fix. As you would say, Steve, about allowing a third party to unilaterally update kernel drivers worldwide whenever they want: what could possibly go wrong? After I thanked Samuel for his note, he replied with a bit more, writing: In my own workplace, we're offline until Monday. I think we got lucky, because our network gateway was the first to take the update and failed before anything else had a chance to receive the update.

01:45:17 - Leo Laporte (Host)
So it took them offline, but fortunately it stopped the update for everybody else. Yeah, it's good. He said nothing will get fixed until the head office looks at it.

01:45:27 - Steve Gibson (Host)
But I think they'll be pleasantly surprised that only a couple of devices need fixing, and not dozens or more. Not my problem or role these days, although I did foolishly volunteer to help. Good man. And I saw the following on my Amazon app on my iPhone: I got a little pop-up that said a small number of deliveries may arrive a day later than anticipated due to a third-party technology outage. Yeah, yep.

01:45:59
Meanwhile, in the US, almost all airlines were grounded, with all their flights canceled. One, however, was apparently flying the friendly skies all by itself. Before you repeat this story, it's been debunked. I'm not surprised.

01:46:15 - Leo Laporte (Host)
Yeah, it didn't seem possible yeah.

01:46:18 - Steve Gibson (Host)
Yes, digital Trends reported under the headline, a Windows version from 1992, is saving Southwest's butt right now.

01:46:29 - Leo Laporte (Host)
Anyway, yes, Southwest got saved because they didn't use CrowdStrike. That's how they got saved, not because they were using Windows 3.1.

01:46:37 - Steve Gibson (Host)
Exactly. There are companies all over the world who are not CrowdStrike users, and so it was only those who had this csagent.sys device driver loading at boot time in their kernel that had this problem. Yeah, yeah. And I think that this was made more fun of because in the past, remember, Southwest Airlines has come under fire for having outdated systems. Yes, but not that. They had scheduling systems they hadn't updated for a long time.

01:47:18 - Leo Laporte (Host)
It's an interesting story, and somebody on Mastodon kind of went through it, and it really is. This happens a lot in journalism nowadays. Somebody tweeted that the Southwest scheduling software looked like it was Windows 95. It wasn't, but it looked that way. It got picked up and, like telephone, it got elaborated to the point where Digital Trends and a number of other outlets, including, I might add, myself, reported this story. And then we found out it was, you know, Southwest confirmed it.

01:47:49 - Steve Gibson (Host)
Yeah, and really, I mean, even I'm having problems today on Windows 7, because increasingly things are saying, what are you thinking, Gibson? Like, what is wrong with you?

01:48:04 - Leo Laporte (Host)
You'd have to really, really work hard to keep 3.1 up and running.

01:48:07 - Steve Gibson (Host)
I think, yeah. So I was initially going to share a bunch of TechCrunch's coverage, but then yesterday, Catalin Cimpanu, the editor of the Risky Business newsletter, produced such a perfect summary of this event that only one important point that TechCrunch raised made it into later in today's podcast, which I'll get to in a minute. But first, here's Catalin's summary, which was just perfect. So he writes: around 8.5 million Windows systems went down on Friday in one of the worst IT outages in history. The incident was caused by a faulty configuration update to the CrowdStrike Falcon security software that caused Windows computers to crash with a blue screen of death. Paul, we realize that's not what it is, thank you. Since CrowdStrike Falcon is an enterprise-centric EDR.

01:49:11
The incident caused crucial IT systems to go down in all the places you don't usually want them to go down. Outages were reported in places like airports, hospitals, banks, energy grids, news organizations and loads of official government agencies. Planes were grounded across several countries, 911 emergency systems went down, hospitals canceled medical procedures, ATMs went offline, stock trading stopped, buses and trains were delayed, ships got stuck in ports, border and customs checks stopped, Windows-based online services went down, he says, for example, ICANN, and there's even an unconfirmed report that one nuclear facility was affected. The Mercedes F1 team, where CrowdStrike is a main sponsor, had to deal with the aftermath, hindering engineers from preparing the cars for the upcoming Hungarian GP. Heck, he wrote, even Russia had to deal with some outages. Whoops, I guess they're not quite Windows-free over there yet. He says it was a cluster-you-know-what on so many levels that it is hard to put into words how much of the world was upended on Friday, with some outages extending into the weekend. Reddit is full of horror stories where admins lost their jobs, faced legal threats or were forced to sleep at their workplace to help restore networks. There are reports of companies having tens of thousands of systems affected by the update.

01:50:51
The recovery steps aren't a walk in the park either. It's not like CrowdStrike or Microsoft could have shipped a new update and fixed things in the span of a few minutes. Instead, users had to boot Windows into safe mode and search for and delete a very specific file. The recovery cannot be fully or remotely automated, and an operator must go through the process on each affected system. Microsoft has also released a recovery tool which creates a bootable USB drive that IT admins can use to more quickly recover impacted machines, but an operator still needs to be in front of an affected device.

01:51:34
For some super-lucky users, the BSOD error corrected itself just by constantly rebooting affected systems. Apparently, some systems were able to gain short enough access to networking capabilities to download the fixed CrowdStrike update file and overwrite the old buggy one. However, this is not the universal recommended fix. There are people reporting that they managed to fix their systems after three reboots, while others needed tens of reboots. It took hours for the debug information to make its way downstream, meaning some of the world's largest companies had to bring their businesses to a halt, losing probably billions in the process. And he said, extremely rough estimation, but probably in the correct range, he writes.

01:52:28
Unfortunately, the Internet is also full of idiots willing to share their dumb opinions. In the year of our Lord 2024, we had people argue that it's time to ditch security products, since they can cause this type of outage. Oh yes, that's the solution. Eye roll. But CrowdStrike's blunder is not unique, or new for that matter. Something similar impacted loads of other vendors before, from Panda Security to Kaspersky and McAfee. Ironically, CrowdStrike's founder and CEO, George Kurtz, was McAfee's CTO at the time, but don't go spinning conspiracy theories about it. It doesn't actually mean that much. He writes: stuff like this tends to happen, and quite a lot. As an InfoSec reporter, I stopped covering these antivirus update blunders after my first or second year, because there were so many and the articles were just repetitive. Most impact only a small subset of users, typically on a particular platform or hardware specification. They usually have the same devastating impact, causing BSOD errors and crashing systems, because of the nature of security software itself, which needs to run inside the operating system kernel so it can tap into everything that happens on a PC.

01:53:59
CrowdStrike released an initial post-mortem report of the faulty update on Saturday. It blamed the issue on what the company calls a channel file update, which are special files that update the Falcon endpoint detection and response (that's what EDR stands for) client with new techniques abused by threat actors. In this case it was channel file 291, and then he gives us, you know, the full file name, C-00000291, and then something, you know, star dot sys, that causes the crashes. CrowdStrike says this file is supposed to update the Falcon EDR to detect malware that abuses Windows named pipes to communicate with its command and control server. Such techniques were recently added to several C2 frameworks, tools used by threat actors and penetration testing teams, and CrowdStrike wanted to be on top of the new technique. The company says the Falcon update file unfortunately yeah, unfortunately triggered a logic error. Since Falcon ran in the Windows kernel, the error brought down the house and caused Windows to crash with a BSOD. After that, it was just a house of cards.
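The widely circulated manual remediation boiled down to booting into safe mode and deleting the channel file(s) matching that C-00000291 pattern from CrowdStrike's driver folder. As a hedged sketch of just that glob-and-delete step, here it is run against a scratch directory standing in for the real one (on an actual machine the folder lives under System32, often behind a BitLocker recovery key, and the exact suffix of the file name below is invented for illustration):

```python
import tempfile
from pathlib import Path

def purge_channel_file(driver_dir: Path, pattern: str = "C-00000291*.sys") -> list:
    """Delete files matching the faulty channel-file pattern and return
    their names. This mirrors the glob-and-delete step of the manual fix;
    on a real machine driver_dir would be CrowdStrike's folder under
    C:\\Windows\\System32\\drivers, reached from safe mode."""
    removed = []
    for f in sorted(driver_dir.glob(pattern)):
        f.unlink()
        removed.append(f.name)
    return removed

# Demonstrate against a scratch directory standing in for the real one.
# Both file names below are illustrative, not actual update file names.
scratch = Path(tempfile.mkdtemp())
(scratch / "C-00000291-00000000-00000032.sys").touch()  # the "bad" file
(scratch / "C-00000290-00000000-00000001.sys").touch()  # unrelated channel file
print(purge_channel_file(scratch))  # ['C-00000291-00000000-00000032.sys']
```

Note how narrow the glob is: only channel file 291 is touched, which is why the fix was safe to hand to thousands of admins, and also why it couldn't be pushed remotely, since the machines that needed it couldn't stay up long enough to receive anything.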

01:55:20
As the update was delivered to more and more CrowdStrike customers, the dominoes started falling all over the world. Kurtz, the CEO, was adamant on Friday that this was just an error on the company's part and made it explicitly clear that there was no cyber attack against its systems. US government officials also echoed the same thing. For now, CrowdStrike seems to be focused on bringing its customers back online. The incident is likely to have some major repercussions going beyond the actual technical details and the global outages.

01:55:54
What they will be, I cannot be sure, he writes, but I smell some politicians waiting to pounce on it with some ideas he has in quotes. Oh great. This might also be the perfect opportunity slash excuse for Microsoft to go Apple's route and kick most security vendors and drivers out of the kernel, but before that, Microsoft might need to convince the EU to dismiss a 2009 agreement first. Per this agreement, Microsoft cannot wall off its OS from third-party security tools. The EU and Microsoft reached this arrangement following an anti-competitive complaint filed by security software vendors after Microsoft entered the cybersecurity and AV market with Defender, with vendors fearing Microsoft would use its control over Windows to put everyone out of business by neutering their products. Doesn't this sound like, well, no, sorry, we can't cancel third-party cookies because some people are making money with them? Right, he writes.

01:57:06
After the recent Chinese and Russian hacks of Microsoft cloud infrastructure, we now know very well what happens when Microsoft has a dominant market position, and it's never a good thing. So the existence of this agreement isn't such a bad idea. If Microsoft wants to kick security software out of the kernel, Defender needs to lose it too. Unfortunately, blinding security tools to the kernel now puts everyone in the iOS quandary, where everyone loses visibility into what happens on a system. That's not such a good idea either. So we're back with this argument where we started. In closing, just be aware that threat actors are registering hundreds of CrowdStrike-related domains that will most likely be used in spear phishing and malware delivery campaigns.

01:57:58
They're so evil. It's honestly one of the best and easiest phishing opportunities we've had in a while. So, as suggested by this week's picture of the week, which shows the Windows kernel crash dump resulting from CrowdStrike's detection file update, I will be getting down to the nitty-gritty details that underlie exactly what happened, but I want to first finish laying out the entire story. The part of TechCrunch's coverage that I wanted to include was their writing this: CrowdStrike, founded in 2011, has quickly grown into a cybersecurity giant. Today, the company provides software and services to 29,000 corporate customers, including around half of Fortune 500 companies, 43 out of 50 US states and eight out of the top 10 tech firms.

01:58:58
According to its website, the company's cybersecurity software, Falcon, is used by enterprises to manage security on millions of computers around the world, and now we know exactly how many millions. These businesses include large corporations, hospitals, transportation hubs and government departments. Most consumer devices do not run Falcon and are unaffected by this outage. And here, with that lead-up, was the point: one of the company's biggest recent claims to fame was when it caught a group of Russian government hackers breaking into the Democratic National Committee ahead of the 2016 US presidential election. CrowdStrike is also known for using memorable animal-themed names for the hacking groups it tracks based on their nationality, such as Fancy Bear.

01:59:52 - Leo Laporte (Host)
Oh, that's where that came from.

01:59:54 - Steve Gibson (Host)
Believed to be part of Russia's General Staff Main Intelligence Directorate, or GRU. Cozy Bear, believed to be part of Russia's Foreign Intelligence Service, or SVR. Gothic Panda, believed to be a Chinese government group. And Charming Kitten, believed to be an Iranian state-backed group. The company even makes action figures to represent these groups, which it sells as swag. Oh cool. CrowdStrike is so big, it's one of the sponsors of the Mercedes F1 team, and this year even aired a Super Bowl ad, a first for a cybersecurity company.

02:00:40 - Leo Laporte (Host)
And they were also one of the first cybersecurity companies to advertise on this show, Steve. In fact, we interviewed the CTO some time ago and it's an impressive company, you know. I'm very curious to hear what happened here.

02:00:56 - Steve Gibson (Host)
So, as I have written here, I have not counted the number of times this podcast has mentioned CrowdStrike. It's certainly been so many times that their name will be quite familiar to everyone who's been listening for more than a short while, and not one of those previous mentions was due to some horrible catastrophe they caused. No, as the writer for TechCrunch reminds us, CrowdStrike has been quite instrumental in detecting, tracking and uncovering some of today's most recent and pernicious malware campaigns and the threat actor groups behind them. How are they able to do this? It is entirely due to exactly this same Falcon sensor instrumentation system that's been spread far and wide around the world.

02:01:50
It's this sensor network that gives them the visibility into what those 8.5 million machines that had been running it are encountering day to day in the field. Exactly, and we need that visibility, yeah.

02:02:06 - Leo Laporte (Host)
By the way, here's the Aquatic Panda figurine, for only $28 on the CrowdStrike swag shop.

02:02:15 - Steve Gibson (Host)
Wow, you got to be really deep into that. Whatever, wow.

02:02:23 - Leo Laporte (Host)
Obviously, we spent a lot of time talking about this CrowdStrike incident. In fact, the conversation goes on for another 40 minutes. You can go back in time, if you want, to episode 984 and watch more. In fact, everything you're seeing here really is a small portion of the larger topics Steve delves into every Tuesday on our network. I hope you're enjoying it, and I always encourage you, if you want to know more, to go back to the original episode, because usually the discussions are very, very elaborate. Happy holidays. You're watching Security Now, the best of 2024. On we go with the show, into what was something that Steve said would never happen: a 1,000th episode.

02:03:09 - Steve Gibson (Host)
I wanted to say, as we conclude this 1,000th episode of Security Now, that providing this weekly podcast with Leo has been, and I'm sure shall continue to be, my sincere pleasure. As I've said before, I'm both humbled by and proud of the incredible listenership this podcast has developed over the years. It has been one of the major features of my life, and I'm so glad that you, Leo, thought to ask me 20 years ago whether I might be interested in spending around 20 minutes a week to discuss various topics of Internet security. Just look what happened, oh my goodness. So thank you, Leo, for making this possible. Thank you, Steve. We'll see where the next thousand will take us.

02:04:01 - Leo Laporte (Host)
I just provided you with the platform and you took it from there. It's been really amazing. Our web engineer, Patrick Delahanty, posted some statistics about the show. He said the shortest show we ever did, do you remember this? We did like an extra thing that was three minutes. I think it was like an update of some kind. I can't remember why, but we had to do an update for some reason. So I guess that will always be the shortest show, or there wasn't a whole lot in it. I'm trying to scroll back to see if I can find his post. And then he said the longest one we did, I think, was close to three hours: two hours and 57 minutes.

02:04:44 - Steve Gibson (Host)
Wow, I didn't know that. I thought the one a week or two ago, that was two and a half hours, was the longest. Well, there's always the outliers.

02:04:55 - Leo Laporte (Host)
You keep it to two hours. Pretty nice. I think that's good.

02:04:58 - Steve Gibson (Host)
I think that's a target. I think that's a reasonable time. We've got a couple of listeners who complain, "I don't have two hours to spend."

02:05:07 - Leo Laporte (Host)
It's like, well, okay, don't listen to the whole thing then. Yeah, nobody's making you; it's not like you have to. My attitude's always been, usually, you know, you're supposed to give people less than they want. But my attitude towards podcasting is, as long as it's longer than your commute, that's good; you don't want it to end halfway to work. And we know how people feel about those YouTube shorts.

02:05:32 - Steve Gibson (Host)
We don't want to be accused. There you go. We don't want to be shorts.

02:05:35 - Leo Laporte (Host)
Uh-uh, no, we're longs. Yeah, in the early days of TWiT, I tried to keep everything under 70 minutes because people were burning the shows to CDs, and that was the maximum length of a CD, right? Yep. I don't worry about that anymore. As you probably know, on almost all of our shows now, two hours is the shortest that I do; almost all of them are two and a half to three hours. So you actually have the honor of hosting our shortest show. Congratulations.

02:06:13 - Steve Gibson (Host)
And, dare I say, most focused.

02:06:17 - Leo Laporte (Host)
Yeah, very focused, and we love that. It is easily the geekiest show we do, and I say that proudly. We try to serve a broadish audience, 'cause I don't want people to say, "oh, I don't understand anything he ever talks about." But at the same time we also want to serve the hardcore person who really gets this and really wants to know deeply what's going on. And we do have listeners who write and say, "well, I think I understand about 15% of what you guys talk about, but I like it."

02:06:57 - Steve Gibson (Host)
"I'm not sure what it is, but, you know, it makes me feel good, and I always get a little something." It's like, okay, great.

02:07:04 - Leo Laporte (Host)
Yeah, that's okay too. I've often thought of what we do as aspirational. There's a good documentary about Martha Stewart on Netflix right now. It's actually fascinating; I would watch it even if you're not interested in Martha Stewart. People said about her and her magazine, "nobody can live that way, nobody can be that perfect, you're setting too high a bar." She says it's aspirational: everybody might want beauty in their life and want to be able to have that. Everybody wants to understand what's going on in the world of technology, and if you don't understand it all, you will; just keep listening, right? Yep. Steve, it has been my great honor to know you and work with you for more than 30 years. I can't believe it's been 30 years.

02:07:48 - Steve Gibson (Host)
It doesn't feel like that at all and that's the good news, because you know, we're only at 1,000.

02:07:55 - Leo Laporte (Host)
Yeah. Look, we're going to keep doing this as long as we can, but I am so honored and thrilled that you were willing to do this way back then and continue to do it. I know it's a lot of work; I'm very aware of how much work you put in. It's a lot of work, but I'm happy to do it. Yeah, here's Patrick Delahanty's note; I found it. The shortest episode of Security Now was 4 minutes and 12 seconds. That's this one: Security Now 103 SE, the "vote for" episode.

02:08:27
Steve, do you remember that? You were trying to win the podcast award.

02:08:31 - Steve Gibson (Host)
Oh, right, right, the podcast.

02:08:32 - Leo Laporte (Host)
And I think you did.

02:08:33 - Steve Gibson (Host)
We won, didn't we?

02:08:37 - Leo Laporte (Host)
We won the first several years of the podcast awards, yeah. Well, and rightly so. And then the longest episode, and I have the receipts to prove it: three hours and 57 seconds. But it was a best of, so you don't have to take credit for that.

02:08:50 - Steve Gibson (Host)
Ah, thank goodness. I can't imagine; if I had participated in that, I would have been on the floor.

02:08:56 - Leo Laporte (Host)
Yeah. Well, the reason was there were so many good segments in 2018.

02:09:02 - Steve Gibson (Host)
We couldn't do less than three hours. That's neat, yeah.

02:09:05 - Leo Laporte (Host)
So that's good, that's fair. I think that's okay. Steve, thank you from the bottom of my heart for continuing on. I would have been bereft sitting here on this Tuesday afternoon without a Security Now, and I know I'm not alone in that. So thank you for all the work, so much work, every week. There's no end in sight.

02:09:26 - Steve Gibson (Host)
Our listeners used to be saying "to 999 and beyond." Now I think it's going to be "to 1999."

02:09:34 - Leo Laporte (Host)
How about 9-9-9-9? How long would that take? 200 years, yeah.
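For what it's worth, Leo's 200-year guess is in the right ballpark. At roughly one episode per week, the back-of-the-envelope arithmetic from this episode's number looks like this (a quick illustrative check, not anything computed on the show):

```python
# Rough check of Leo's estimate: how long from episode 1006 (this one)
# to a hypothetical episode 9999, at one episode per week?
episodes_remaining = 9999 - 1006
years = episodes_remaining / 52          # about 52 episodes per year
print(round(years))                      # roughly 173 years
```

So "200 years" is the right order of magnitude, allowing for holiday weeks off.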

02:09:42 - Steve Gibson (Host)
I'm feeling great but, as I said, I do believe in a rational universe.

02:09:49 - Leo Laporte (Host)
Well, but wait. Maybe we're laughing now, but somebody in the future will be listening to AI Steve. That's true. And episode 10,000.

02:10:00 - Steve Gibson (Host)
I'm sure you could dump all the transcripts into an AI and say, okay, give me last week's news as Steve would present it. Exactly. I should note that I already have everything I need, thanks to today's ChatGPT-4o, and it has changed my life for the better. I've been using it increasingly as a time saver, in the form of a programming-language super search engine and even a syntax checker. I've used it as a crutch when I need to quickly write some throwaway code in a language like PHP, where I do not have expertise, but I want to get something done quickly, to solve a quick problem: parse a text file in a certain way into a different format, that sort of thing. In the past, if it was a somewhat bigger project than that, I would take an hour or two putting queries into Google, following links to Programmer's Corner, Stack Overflow or other similar sites, and I would piece together the language construction that I needed from other similar bits of code that I would find online. If I was unable to find anything useful, I would then dig deeper into the language's actual reference texts to find the usage and the syntax that I needed, and then build up from that. Because with the procedural languages it's just a matter of, okay, what do I use for inequality? How exactly are the looping constructs built? That kind of thing. But now I have access to what I consider a super programming-language search engine.
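As one illustration of the throwaway, parse-this-file-into-another-format job Steve describes, here's a minimal Python sketch. The key=value input format and the function name are invented for this example, not anything from the show:

```python
# Minimal example of a throwaway text transformation: convert lines of
# "key = value" pairs into two-column CSV rows, skipping blanks/comments.

def keyvalue_to_csv(text):
    """Convert 'key = value' lines into 'key,value' CSV rows."""
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # ignore blanks and comments
            continue
        key, _, value = line.partition("=")    # split on the first '='
        rows.append(f"{key.strip()},{value.strip()}")
    return "\n".join(rows)

sample = """
# server settings
host = example.com
port = 8080
"""

print(keyvalue_to_csv(sample))   # host,example.com
                                 # port,8080
```

This is exactly the class of five-minute utility where asking an AI for the right library call beats reading a language reference cover to cover.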

02:12:18
Now I ask the experimental coding version of ChatGPT for whatever it is I need. I don't ask it to provide the complete program, since that's really not what I want. I love coding in any language, because I love puzzles, and puzzles are language agnostic, but I do not equally know the details of every other language. There's nothing ChatGPT can tell me about programming assembly language that I have not already known for decades. But if I want to write a quick throwaway utility program in, say, Visual Basic .NET, a language that I've spent very little time with because I like to write in assembly language, and I need to, for example, quickly implement an associative array, as I did last week, rather than poking around the internet or scanning through the Visual Basic syntax to find what I'm looking for, I'll now just pose the question to ChatGPT. I'll ask it very specifically and carefully for what I want, and in about two seconds I'll get what I may have previously spent 30 to 60 minutes sussing out online.
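Steve's actual question was about Visual Basic .NET, but to keep one language for the examples here, Python's built-in dict shows the same construct an "associative array" answer would cover (the episode-title data below is just illustrative):

```python
# An associative array maps arbitrary keys to values. In Python that's the
# built-in dict; Steve's case was Visual Basic .NET's equivalent, but the
# operations are the same idea: create, insert, look up, iterate.

episode_titles = {}                          # create an empty associative array
episode_titles[1] = "As the Worm Turns"      # insert entries keyed by number
episode_titles[1000] = "One Thousand"

# look up by key, supplying a default for missing keys
print(episode_titles.get(1000, "unknown"))
print(episode_titles.get(2, "unknown"))      # unknown

# iterate over key/value pairs (insertion order in Python 3.7+)
for number, title in episode_titles.items():
    print(number, title)
```

The point isn't the code itself; it's that an LLM can hand you this idiom in the target language in seconds, where a syntax-reference hunt used to take the better part of an hour.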

02:13:39
It has transformed my work path for that class of problem that I've traditionally had. It's useful whenever I need some details where I do not have expertise, is, I think, the way I would put it. And I've seen plenty of criticism levied by other programmers at the code produced by today's AI. To me their criticism seems misplaced, and maybe just a bit nervous, and maybe they're also asking the wrong question.

02:14:15
I don't ask ChatGPT for a finished product, because I know exactly what I want, and I'm not even sure I could specify the finished product in words, or that that's what it's really good for. So I ask it just for specific bits and pieces, and I have to report that the results have been fantastic. It's the way I will now code in languages I don't know, is probably the best way to put it. It's ingested the internet, and obviously we have to use the term "knowing" them very advisedly; it doesn't know them. But whatever it is, I am able to ask it a question and actually get really good answers to tight problem-domain questions. Okay. But what I want to explore today is what lies beyond what we have today, what the challenges are, and what predictions are being made about how and when we may get more.

02:15:32
Whatever that "more" is, where we want to get is generically known as artificial general intelligence, abbreviated AGI. Okay, so let's start by looking at how Wikipedia defines this goal. Wikipedia says: artificial general intelligence is a type of artificial intelligence that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities.

02:16:30
AGI is considered one of the definitions of strong AI. They say creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress toward AGI, suggesting it could be achieved sooner than many expect. There's debate on the exact definition of AGI, and regarding whether modern large language models (LLMs), such as GPT-4, are early forms of AGI. Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.

02:18:07
AGI is also known as strong AI, full AI, human-level AI or general intelligent action. However, some academic sources reserve the term strong AI for computer programs that experience sentience or consciousness. In contrast, weak AI or narrow AI is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use weak AI as the term to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans. Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, thus transforming it, for example similar to the agricultural or industrial revolutions. A framework for classifying AGI levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI, the first being "emerging," with higher levels defined by the fraction of skilled adults the system outperforms; an artificial superintelligence is similarly defined, but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of the first level, emerging AGI.

02:20:23
Okay, so we're getting some useful language and terminology for talking about these things. The article that caught my eye last week, as we were celebrating the thousandth episode of this podcast, was posted on Perplexity.ai, titled "Altman Predicts AGI by 2025." The Perplexity piece turned out not to have much meat, but it did offer the kernel of some interesting thoughts and some additional terminology and talking points, so I still want to share it. Perplexity wrote: OpenAI CEO Sam Altman has stirred the tech community with his prediction that artificial general intelligence (AGI) could be realized by 2025, a timeline that contrasts sharply with many experts who foresee AGI's arrival much later. Despite skepticism, Altman asserts that OpenAI is on track to achieve this ambitious goal, emphasizing ongoing achievements and substantial funding, while also suggesting that the initial societal impact of AGI might be minimal. In a Y Combinator interview, Altman expressed excitement about the potential developments in AGI for the coming year. However, he also made a surprising claim that the advent of AGI would have surprisingly little impact on society, at least initially. This statement has sparked debate among AI experts and enthusiasts, given the potentially transformative nature of AGI. Altman's optimistic timeline stands in stark contrast to many other experts in the field, who typically project AGI development to occur much later, around 2050. Despite the skepticism, Altman maintains that OpenAI is actively pursuing this ambitious goal, even suggesting that it might be possible to achieve AGI with current hardware. This confidence, coupled with OpenAI's recent $6.6 billion funding round and its market valuation exceeding $157 billion, underscores the company's commitment to pushing the boundaries of AI technology.

02:23:00
Achieving artificial general intelligence faces several significant technical challenges that extend beyond current AI capabilities. So here we have four bullet points that outline what AGI needs that there's no sign of today. First, common sense reasoning: AGI systems must develop an intuitive understanding of the world, including implicit knowledge and unspoken rules, to navigate complex social situations and make everyday judgments. Second, context awareness: AGI needs to dynamically adjust behavior and interpretations based on situational factors, environment and prior experiences. Third, handling uncertainty: AGI must interpret incomplete or ambiguous data, draw inferences from limited information and make sound decisions in the face of the unknown.

02:24:11
Fourth, continuous learning: developing AGI systems that can update their knowledge and capabilities over time, without losing previously acquired skills, remains a significant challenge. What strikes me about common sense reasoning, contextual awareness, uncertainty and learning is that none of the AIs I've ever interacted with has ever asked for any clarification about what I'm asking. That's not something that appears to be wired into the current generation of AI. I'm sure it could be simulated, you know, if it would further raise the stock price of the company doing it, but it wouldn't really matter, right? Because it would be "Why do you think you're feeling sort of cranky today?" It wasn't really asking a question; it was just programmed to seem like it was understanding what we were typing in. The point I hope to make is that there's a hollowness to today's AI. It's truly an amazing search engine technology, but it doesn't seem to be much more than that to me. There's no presence or understanding behind its answers.

02:25:45
The Perplexity article continues, saying: Overcoming these hurdles requires advancements in areas such as neural network architectures, reinforcement learning and transfer learning. Additionally, AGI development demands substantial computational resources and interdisciplinary collaboration among experts in computer science, neuroscience and cognitive psychology. While some AI leaders, like Sam Altman, predict AGI by 2025, many experts anticipate something closer to 2060, also known as Security Now episode 2860. 90% of the 352 experts surveyed expect to see AGI within 100 years, so not to take longer than 100 years, but the median is by 2060. So, you know, not next year, as Sam suggests. They wrote:

02:27:10
This more conservative outlook stems from several key challenges. First, the missing ingredient problem: some researchers argue that current AI systems, while impressive, lack fundamental components necessary for general intelligence; statistical learning alone may not be sufficient. Again, the missing ingredient problem; I think that sounds exactly right. Second, training limitations: creating virtual environments complex enough to train an AGI system to navigate the real world, including human deception, presents significant hurdles. And third, scaling challenges: despite advancements in large language models, some reports suggest diminishing returns in improvement rates between generations. These factors contribute to a more cautious view among many AI researchers, who believe AGI development will likely take decades rather than years to achieve.

02:28:28
OpenAI has recently achieved significant milestones in both technological advancement and financial growth. The company successfully closed, and here they're saying again, a massive $6.6 billion funding round, valuing it at $157 billion. But, you know, who cares; Sam is a good salesman. They said: this round attracted investments from major players like Microsoft, Nvidia and SoftBank, highlighting the tech industry's confidence in OpenAI's potential. The company's flagship product, ChatGPT, has seen exponential growth, now boasting over 250 million weekly active users, and you can count me among them. OpenAI has also made substantial inroads into the corporate sector, with 92% of Fortune 500 companies reportedly using its technologies. Despite these successes, OpenAI faces challenges, including high operational costs and the need for extensive computing power. The company is projected to incur losses of about $5 billion this year, primarily due to the expenses associated with training and operating its large language models.

02:29:49
So I was thinking about this idea of, you know, "we're just going to throw all this money at it and it's going to solve the problem, and oh look, the solution is going to be next year."

02:30:03
The analogy that hit me was curing cancer, because there, sort of, is an example of "oh look, we just had a breakthrough and this is going to cure cancer." It's like, no, we don't really understand enough yet about human biology to say that we're going to do that. And, you know, the current administration has had these cancer moonshots, and it's like, okay, have you actually talked to any biologists about this? Or do you just think that you can pour money on it and it's going to do the job? So that's not always the case. To me, this notion of the missing ingredient is the most salient point of all of this. What we have today has become very good at doing what it does, but it may not be extendable; it may never be what we need for AGI. But I think that what I've shared so far gives a bit of calibration about where we are and what the goals of AGI are.

02:31:20 - Leo Laporte (Host)
That's just the beginning, I think, of Steve's deep dive into AI and artificial general intelligence. He's already pledged to learn more about it and to help us understand AI in the new year. That's one of the things I think Steve does better than anybody else. One of the things I'm very proud about with our network, with TWiT, is that we cover technology without fear or favor. We try to spread more light than heat. It's not about link bait for us; it's about helping you understand the technology that is coming down the pipe, what you can do with it, and what you can do to defend yourself against it. And Steve's one of the best at doing that.

02:31:58
We're so glad you were here for our year-end episode. Steve will take next week off because it's New Year's Eve, and we will reunite and begin talking about security once again in 2025 on the following Tuesday. Every Tuesday, 2 pm Pacific, 5 pm Eastern, 2200 UTC, you can watch us live, or watch us after the fact at twit.tv/sn. Thank you for joining me. Thank you, Steve Gibson, what a blessing he is to all of us, and we will see you in 2025. Have a happy holiday and a wonderful, peaceful and productive 2025. Bye-bye.

All Transcripts posts