Tech News Weekly 431 Transcript
Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-free version of the show.
Mikah Sargent [00:00:00]:
Coming up on Tech News Weekly, Abrar Al-Heeti is here. We talk about how Meta and the MPAA have come to an agreement about using movie ratings on the platform. Then I talk about that Claude Code leak and all the stuff Anthropic has in the works, before we discuss the very scary DarkSword toolkit for hacking into modern iPhones, and Meta and YouTube facing big trials about the way that their platforms do or don't harm kids. All of that coming up on Tech News Weekly.
Mikah Sargent [00:00:50]:
This is Tech News Weekly. Episode 431 with Abrar Al-Heeti and me, Mikah Sargent. Recorded Thursday, April 2, 2026: iPhone Hacking Tools Go Public. Hello and welcome to Tech News Weekly, the show where every week we talk to and about the people making and breaking that tech news. I am your host, Mikah Sargent, and I am joined this week by the wonderful Abrar Al-Heeti. Hello, Abrar.
Abrar Al-Heeti [00:01:16]:
Hello, friend. How are you?
Mikah Sargent [00:01:18]:
I am exceptional, thank you very much.
Abrar Al-Heeti [00:01:22]:
That's a great adjective.
Mikah Sargent [00:01:24]:
Thank you. For those of you tuning into the show for the first time, welcome. For those of you who are back, also welcome. We appreciate having you. This is the part of the show where we kick off with our stories of the week, stories we find interesting and want to talk about. So that's what we're going to do here on the show.
Mikah Sargent [00:01:45]:
Abrar, could you tell us about your story of the week?
Abrar Al-Heeti [00:01:48]:
I would love to. So I'm going to talk about Meta and the Motion Picture Association butting heads over how Meta runs its Instagram teen accounts. So first, let's take a step back to October real quick. Meta rolled out an update to its teen accounts because, you know, obviously there's a lot of pushback about the health and safety of platforms like Instagram, which are very popular among teenagers. And what Meta had said at the time was that, in the same way you might see some suggestive content or hear some strong language in a PG-13 film, that is essentially what Instagram teen accounts will be. So you'll get a little bit of more adult-oriented content, but it won't be as intense as it is for people who aren't teenagers. And they kept drawing this parallel, they kept bringing up PG-13 movies; it just seemed like a really good comparison point for Meta to say, hey, this is just like watching a PG-13 film. So that means blocking suggestive and graphic content, strong language.
Abrar Al-Heeti [00:02:58]:
That's whether it's in Explore or your feed or stories. And then there's also a stricter limited content filter, where parents can have things scaled back even more if PG-13 feels like a little bit too much, because it varies. Some parents might be okay with a PG-13 film for their teens, and some might not. How did the Motion Picture Association respond? Not very well.
Mikah Sargent [00:03:24]:
True.
Abrar Al-Heeti [00:03:25]:
Yeah, they were not about it. They sent a cease and desist letter to Meta, and they argued that using that PG-13 label could confuse parents and infringe its trademark. And I think a lot of companies, with the heat that Meta faces, especially around teen safety and teen accounts on Instagram, might not enjoy being lumped into this without their blessing. And clearly the Motion Picture Association felt that way. So this week, on Tuesday, it looks like Meta and the Motion Picture Association came to an agreement. Meta has agreed to scale back those references to the PG-13 film rating and also include a disclaimer that the Motion Picture Association was not involved with those ratings. So I went and looked at that page, and right now, as far as I can tell, I don't know if the update is coming later, because I'm looking at the page and I still see lots of PG-13 references.
Abrar Al-Heeti [00:04:28]:
And I don't see that disclaimer that the Motion Picture Association is not involved. But this agreement just landed, like, two days ago, so that's probably coming. I was definitely amused when the Motion Picture Association was like, absolutely not, please don't involve us in this. And I think it just continues to highlight this tricky path of how much does Meta need to do to keep teens safe, and is this the right path? And of course, Meta was referencing surveys done with parents, and, you know, more than 90% of them were like, this is a great idea.
Abrar Al-Heeti [00:05:11]:
This seems like a good fix. But I don't know if that remains true as this has rolled out. I don't know if it's fixed anything. I don't really talk to teens on Instagram, so I don't have a direct source into that. But I'd love to know what you think about this general effort. And one other thing: they always talk about age verification or age-prediction technology as being kind of the next step to ensuring that people are seeing what they should be seeing depending on how old they are. And that's technology that I still feel is rocky at best and also potentially invasive. So it's just.
Abrar Al-Heeti [00:05:57]:
It's all very tricky. But yeah, I'd love to hear your thoughts on the PG-13 comparison and anything you think about the Motion Picture Association's response.
Mikah Sargent [00:06:08]:
Yeah. So firstly, I will say, back before this news originally was released, Meta did reach out and offer a briefing. And I attended this briefing and learned about this movie-rating choice, that they would start putting these content labels in place. Here's the thing about it. I remember when this happened, and then I remember when it was announced, and then I remember when the MPAA responded.
Abrar Al-Heeti [00:06:39]:
Yeah.
Mikah Sargent [00:06:40]:
And the way that I felt in that moment was so annoyed with myself, because in all of the questions I asked, I never thought to ask, does the Motion Picture Association of America know about this?
Abrar Al-Heeti [00:06:58]:
Right.
Mikah Sargent [00:06:59]:
Because for me, that was just a given. Like, I didn't see this coming, right? You would just not expect a company would go forward with something without sort of clearing it first. That was wild to me.
Abrar Al-Heeti [00:07:13]:
Yes.
Mikah Sargent [00:07:14]:
So I think the concept of coming up with this new way of doing things without fully consulting the MPAA is wild. Now, that said, I do think that if the company had reached out and asked for permission to do this, and maybe even gone as far as to work with the MPAA to dial in what made sense, that could have been a good thing. I think that it is helpful, and it's something that's been in our popular culture for long enough that it becomes simpler to understand what these different ratings might mean. So the concept, I think, is a great concept.
Mikah Sargent [00:08:12]:
The execution just absolutely fumbled the ball there. And honestly, again, I was shocked that they had not asked in the first place. That kind of blew my mind a little bit. You know, all these attempts at making social media, if not safer, then more controlled and easier for a parent to move their child through the process, I celebrate that. But so often it feels like an attempt to just do the bare minimum.
Abrar Al-Heeti [00:09:06]:
Yeah, totally. Absolutely. I feel like the other piece of this is what I have personally experienced on Instagram, and I'm obviously not a teenager, but in general, the safety elements. Whenever I try to report something on Instagram, whether it's harassment or just anything I'd rather not have as a comment on a post, Instagram never agrees with me. It's always like, no, we didn't find any issues with this. And so I'm like, clearly people are saying things that should not be said, like hate speech being spewed, and you don't think it's an issue? That makes me think about teen accounts, where I'm like, okay, you say you're protecting people from things, but there's a lot of vitriol on social media beyond just any suggestive or graphic content that you might see on your Explore page.
Abrar Al-Heeti [00:09:59]:
But what do those interactions look like? And are you actually flagging these things and making people feel safe? Because Instagram can be a very hateful place. And I think that's something that maybe isn't addressed as much in these things, where it's not just about the content that you see or don't see, but what do those interactions look like? And are you actually listening to people when they say they feel uncomfortable with an exchange?
Mikah Sargent [00:10:20]:
Wow. Yeah. The fact that you have had issues getting through that process is really upsetting. Clearly, if the company doesn't put its money where its mouth is and doesn't follow through with this promise, then that's going to be an issue. And I'm not surprised, then, to hear this ongoing argument, this ongoing discussion about the company's perhaps inability to protect the people on its platform. You know, with this agreement, there's this idea that it's substantially reducing references. Right?
Abrar Al-Heeti [00:11:12]:
Yeah.
Mikah Sargent [00:11:14]:
It's kind of weird. Yeah. The idea that it doesn't have to completely get rid of them.
Abrar Al-Heeti [00:11:20]:
I agree. Yeah.
Mikah Sargent [00:11:21]:
It's. It's like, why do it halfway or exactly, you know, a quarter of the way or what? It's just strange. It's very strange.
Abrar Al-Heeti [00:11:30]:
So it feels like one of those compromises where nobody actually ends up winning, because the parents will probably be upset that their teens are still exposed to certain things, and the teens will be like, well, why am I just getting a little taste of this? What is going on here? And what they'll always say is, we're refining this, we're going to keep working on it and making it better. But it is a very strange in-between. And I am just so curious how this will evolve and how people will continue to respond. At least the Motion Picture Association is thrilled to not be involved in this. But yeah, we'll see what happens.
Mikah Sargent [00:12:08]:
Alrighty. We are going to take a quick break and then come back with the next story of the week. All right, we are back from the break, joined by Abrar Al-Heeti. And we continue: a surprise leak of Anthropic's Claude Code source code is giving the tech world a peek behind the curtain, and what's in there is fascinating. Writing for Ars Technica, Kyle Orland digs through the more than 512,000 lines of code and 2,000-plus files that were exposed, uncovering references to disabled, hidden, and inactive features that paint a picture of where Anthropic may be headed next. We're talking about persistent background agents, an AI dream system for memory consolidation, a stealth mode for open source contributions, and, yes, a virtual assistant named Buddy that looks like ASCII art with a tiny hat. It's a roadmap Anthropic probably didn't intend to share just yet, and it raises some genuinely interesting questions about the future of AI-powered coding tools.
Mikah Sargent [00:13:09]:
Let's talk about what was uncovered in this leak. So the first thing is Kairos, this always-on agent. It seems to be the one that's getting the most attention from folks. It's a persistent tool that's designed to keep running in the background even after you close Claude Code. Kairos would use periodic sort of tick prompts to check whether new actions are needed, and it includes a proactive flag for surfacing things the user didn't ask for but might need to see. So you can think of it less as a coding assistant and more as a persistent AI coworker that's kind of checking in on the project and going, oh, you know, maybe you didn't ask about this, but I see that there's something wrong here. It's built around a file-based memory system that carries context across sessions.
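To make that tick-and-proactive-flag idea concrete, here's a minimal sketch of how an always-on agent loop like this could work. Everything here is invented for illustration (the TickAgent class, the alert format); the leak describes the behavior, not the implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TickAgent:
    """Toy background agent: on each tick, it checks project state and
    proactively surfaces anything new the user didn't ask about."""
    memory: dict = field(default_factory=dict)   # persists across ticks
    surfaced: list = field(default_factory=list)

    def tick(self, project_state: dict) -> None:
        # A real agent would send a "tick prompt" to a model here;
        # we fake the decision with a simple rule instead.
        for issue in project_state.get("issues", []):
            if issue not in self.memory:
                self.memory[issue] = True
                # The "proactive" part: the user never asked about this.
                self.surfaced.append(f"Heads up: {issue}")

agent = TickAgent()
agent.tick({"issues": ["failing test in auth module"]})
agent.tick({"issues": ["failing test in auth module"]})  # already known: no duplicate alert
print(agent.surfaced)
```

The memory dict is the stand-in for the file-based memory the leak describes: it's what lets the second tick stay quiet instead of nagging about the same issue again.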
Mikah Sargent [00:13:59]:
A prompt hidden behind a disabled flag in the code says the system is designed to have "a complete picture of who the user is, how they'd like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you." So think about it: not just kind of autocomplete, but a personalized coding partner that actually learns how you work. Now, we've seen a shift, right, from these AI companies, where they're really going all in on coding. In fact, OpenAI stopped with Sora, the tool for video creation that was sort of part social media, part video; the Disney deal fell through, and OpenAI then reportedly shifted its focus to coding. We've seen Anthropic really take the lead on this, and of course Google, with its Gemini tools, doing much of the same, with many different development environments adding coding.
Mikah Sargent [00:14:59]:
And so I'm not surprised to see these innovations coming to sort of a focus on coding. But thinking about this always-on proactive tool, I'd love to see it, for me, outside of coding. I would love to see this tool being used as a way to keep an eye on the work that I'm doing and offer suggestions, like a modern Clippy.
Abrar Al-Heeti [00:15:27]:
I was just thinking Clippy. That's the wildest thing. Clippy's grown up, man. Like, here we are.
Mikah Sargent [00:15:32]:
Look at you, little buddy. Well, actually, you're not a little buddy anymore. You're a grown man now.
Abrar Al-Heeti [00:15:37]:
Pretty powerful buddy.
Mikah Sargent [00:15:38]:
Yeah, yeah. A grown paperclip now. What does a paperclip grow into?
Abrar Al-Heeti [00:15:44]:
Yeah, maybe you could just drop the Y. Just Clip, with, like, a capital C. Oh, my God, yes.
Mikah Sargent [00:15:50]:
Oh, and it's still two P's. I love that. C-L-I-P-P. Clip. What's up, Clip?
Abrar Al-Heeti [00:15:55]:
Exactly. It's got a great ring to it. I'm surprised Microsoft hasn't jumped on this. Come on, man.
Mikah Sargent [00:16:00]:
Yeah, yeah. Come on. He wears sunglasses now. He's Clip.
Abrar Al-Heeti [00:16:07]:
Drives a Rolls Royce, man. That thing.
Mikah Sargent [00:16:09]:
Whoa. Wow. I want to meet this guy. There's also another part of Kairos that is, I think, interesting. It's called Auto Dream, and it's a fascinating thing. The name really does match what it does. When a user goes idle or tells Claude Code to sleep at the end of a session, the system enters a, quote, reflective pass over your memory files. And that mirrors what we believe about humans and their dreams.
Mikah Sargent [00:16:43]:
We really don't know for sure why we dream. We know that dreaming is important. I have to be careful because I will talk for 45 minutes about sleep science.
Abrar Al-Heeti [00:16:51]:
Oh, I'd love that. Next episode, man. Let's do it.
Mikah Sargent [00:16:53]:
Yeah, exactly. So I'm gonna quickly summarize this. We don't quite know why we dream, but we know dreaming is important, because our brain fights to keep us in REM sleep even when we could otherwise wake up. So, things like when you're in a dream and you're hearing an alarm in your dream, or a cat meowing in your dream, and you start dreaming about that when it's happening in real life, your brain is trying to keep you asleep instead of letting you wake up to that alarm or that cat meowing trying to get in. So we have a belief that REM sleep is so important that our brains do what they can to keep us in it. And while we sleep and dream, the thought is that we are consolidating things.
Mikah Sargent [00:17:51]:
We do kind of believe that sleep gives us the ability to store things in longer-term memory and kind of break down the information and the knowledge that we have. So it's no surprise that they're calling this Auto Dream, because it's similar to what humans, in theory, do. Claude Code scans the day's transcripts for new information worth keeping, consolidates it to avoid duplicates and contradictions, and then prunes anything that's become outdated or verbose. The prompt also instructs the system to watch for existing memories that have drifted. So this addresses an issue where users have tried to bolt memory systems onto their AI setups and they just get filled with a bunch of nonsense, or important context gets pushed out. It's like, you're missing the point. So the prompt says to synthesize what you've learned recently into durable, well-organized memory so that future sessions can orient quickly.
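As a rough illustration, a consolidation pass like the one described, collapsing duplicates, resolving drifted facts in favor of the newest, and pruning stale entries, might look like this. The function and data shapes are invented for the example; the real Auto Dream reportedly works on natural-language memory files via prompts, not structured records.

```python
def consolidate(memories: list) -> list:
    """Fold a day's notes into durable memory: the newest fact per topic
    wins, duplicates collapse, and entries flagged stale are pruned."""
    latest = {}
    for note in memories:  # notes arrive in chronological order
        if note.get("stale"):
            latest.pop(note["topic"], None)  # prune outdated info
        else:
            latest[note["topic"]] = note["fact"]  # dedupe / resolve drift
    return [{"topic": t, "fact": f} for t, f in latest.items()]

notes = [
    {"topic": "style", "fact": "user prefers tabs"},
    {"topic": "style", "fact": "user prefers spaces"},  # drifted: newer wins
    {"topic": "old_api", "fact": "use v1 endpoint", "stale": True},
]
print(consolidate(notes))
```

The point of the sketch is the ordering rule: because later notes overwrite earlier ones, the "drifted" memory resolves itself without any special-case logic.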
Mikah Sargent [00:18:49]:
Maintaining useful, accurate context over time is difficult, and so that in particular is something that the company is trying to focus on. Now, with these tools, obviously, there are always new innovations coming out, and people are trying to figure out what companies are doing next. There's been some belief that, ooh, what if Anthropic did this on purpose? I disagree. I don't believe that the company did that, because it is dangerous to put your plan out there before you've completed your plan. Right? Like, that makes sense.
Abrar Al-Heeti [00:19:37]:
Totally. Especially in such a competitive landscape. I mean, the whole time I've just been thinking, how much is Anthropic freaking out on a scale of 1 to 10? And it's probably closer to the 10. So I can't imagine any company in the AI space saying, hey, let's just leak this and other people can take notes from here. It's so, so, so competitive. Yeah, I agree with you there.
Mikah Sargent [00:19:57]:
Yeah. And then a few more features outside of those main ones. The leaked code references something called Ultra Plan, a feature that would let Opus-level Claude models draft advanced plans that you can edit and approve. For people who aren't familiar with all of this, there are different AI models that are less or more powerful, and Opus is the more powerful model from Anthropic. These plans would run for 10 to 30 minutes at a time. So where before it was sort of a prompt and a response, this is something that can keep working in the background for a really long period of time.
Mikah Sargent [00:20:42]:
I have. Well, actually, no, we'll come back to that, because I'm curious if you've used this tool yet. So, some more potential features: voice mode, so, like with some other AI systems, this would let you talk to Claude Code. Then Bridge, which expands Anthropic's existing dispatch tool. Dispatch is the tool that lets you use your phone to ask the PC or Mac version of Claude to do stuff, so Bridge is basically remote sessions that you can control from a browser or mobile device. And then Coordinator, which is a tool for "spawning and orchestrating software engineering tasks across multiple parallel workers communicating via websockets." What it means, basically, is that you have an AI boss for all of your other AI tools.
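A toy version of that fan-out/fan-in pattern, using local threads as a stand-in for the websocket-connected workers the leak describes; the worker function and task names are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def worker(task: str) -> str:
    # Stand-in for an AI worker; the real Coordinator reportedly talks
    # to its workers over websockets rather than local threads.
    return f"done: {task}"

def coordinate(tasks: list) -> list:
    """Fan tasks out to parallel workers and gather results in order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(worker, tasks))

print(coordinate(["write tests", "refactor auth", "update docs"]))
```

The "AI boss" framing maps onto `coordinate`: it owns the task list and the gathering, while each worker only sees its own task.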
Mikah Sargent [00:21:37]:
Now, what I wanted to ask you: the Ultra Plan talks about running for 10 to 30 minutes at a time. One of my favorite features across all of the main AI tools is the sort of research mode, Deep Research or deep whatever. Basically, what you can do is say, hey, I want you to go out on the web and look for this, that, or the other, and the AI system will find a bunch of different sources, read them, combine all that information, and synthesize it for you. So one of the things that I did recently was ask it to look online. This was around the holidays.
Mikah Sargent [00:22:26]:
Find the best gluten-free sugar cookie that would retain its shape, because that's been my problem with the gluten-free versions: they don't retain their shape as well. And so what I had it do was look for the best recipe that retains its shape, but also look at what people are saying and find out if there are any tips or tricks, so that you've got a tried-and-true recipe. So it kind of built its own recipe. It went out and looked across Reddit and, you know, all over the place, and then combined all that. I love that sort of tool. I was curious if you've ever made use of Deep Research or anything like that.
Abrar Al-Heeti [00:23:11]:
Okay. Listening to you, I'm realizing I'm deeply underutilizing these AI tools, because, you know, a lot of times we think, I don't need this, I don't need that. But that is actually a very useful thing, whether you're looking for cookies or any type of information, especially if it's aggregating it and giving it to you rather than just generating something that doesn't make any sense. So that is very, very cool. I think that's something I'm actually going to have to dip my toes into, because that sounds amazing.
Mikah Sargent [00:23:41]:
Yeah, yeah. I mean, that's the thing: you can use it for so much. The first place that I go anytime I need a product recommendation is Wirecutter, and Wirecutter didn't have the category I was looking for. So I set the task, like, what do people say is the best blank? And with Claude, it'll typically have some follow-up questions to help narrow things down. And then, yeah, it's like you're sending a little pal out into the world to collect all of that research for you and put something together. So I've found it an indispensable tool for all sorts of stuff.
Mikah Sargent [00:24:25]:
And what I like about it, too, is that I find those tools are more based in reality, because they're basing their answers on actual stuff on the web as opposed to just their own knowledge and understanding.
Abrar Al-Heeti [00:24:35]:
Right. Not just making something up, but actually aggregating from those sources. That is very cool. Yeah, I support this.
Mikah Sargent [00:24:41]:
Yeah. One thing I'll say: one of our listeners in the chat believes that this was a sort of intentional leak from Anthropic, saying, "I don't feel like there has been anything that magical revealed. It's kind of obvious how this would have to work. So I think they just want people talking about them more than OpenAI, and the special sauce is still in the servers." I think that's a fair take.
Abrar Al-Heeti [00:25:11]:
It can be seen as a flex.
Mikah Sargent [00:25:12]:
Yeah, yeah, exactly. It's a bit of a flex. And then some of the features that they're talking about are a little bit of catching up, so those especially I could see Anthropic choosing to leak to say, don't worry, we're getting voice mode soon, we're working on it. But then I'm just like, why not just say it?
Abrar Al-Heeti [00:25:28]:
Yeah.
Mikah Sargent [00:25:28]:
Instead of doing a weird leak about it.
Abrar Al-Heeti [00:25:31]:
Well, you never know.
Mikah Sargent [00:25:32]:
Yeah, yeah, exactly. That's the other thing too is that we saw Anthropic working to have the code pulled from all of the places where it's been published. So I don't know. I'm of two minds about it for sure.
Abrar Al-Heeti [00:25:47]:
Could go either way. Absolutely.
Mikah Sargent [00:25:49]:
Exactly. That is the second story of the week. Abrar is going to stick around for our next one, and then we'll say goodbye. All right, back from the break, as I mentioned, joined this week by Abrar Al-Heeti and this next story, it's a little scary. Update your phones, everybody. A powerful set of iPhone hacking tools called DarkSword has leaked online, and it's a big deal. Security researchers have uncovered a series of cyber attacks targeting Apple customers around the world using two advanced toolkits, Coruna and DarkSword, that have been used by both government spies and cybercriminals to steal data from people's iPhones and iPads. The DarkSword tools were published on GitHub, making them available to essentially anyone, and they're capable of hacking devices running iOS versions as recent as 18.7.
Mikah Sargent [00:26:40]:
Apple has since rushed out a patch for older devices, but with nearly one in three iPhone and iPad users still not running the latest software, potentially hundreds of millions of devices remain at risk. Lorenzo Franceschi-Bicchierai and Zack Whittaker have been reporting on this over at TechCrunch, and of course we'll include links to these stories in the show notes. But let's kick off by talking about Coruna and DarkSword. These are two separate advanced hacking toolkits, and they each contain a range of exploits needed to break into iPhones and iPads and steal a person's data: messages, browser history, location data, and, if you have a crypto wallet, it's possible that cryptocurrency could be stolen as well. Security researchers say Coruna's exploits target devices running iOS 13 through 17. DarkSword, and this is why it's getting more attention and making the headlines, targets recent versions, 18.4 through 18.7. Keep in mind, 18.7 was just released in September of last year.
Mikah Sargent [00:27:46]:
So, again, more people are on that. The more immediate threat to the general public is DarkSword, and that's also because someone out there posted part of its code on GitHub, making it easy for anyone to download and deploy. The principal researcher at mobile security firm Lookout told TechCrunch that DarkSword is now essentially plug and play, and researchers posting on X have already tested the leaked tools by hacking into their own Apple devices running vulnerable software. A lot of times, I think, people don't know how these tools work. They just hear, oh, there's something out there that can break into my iPhone. I find this fascinating. It's called a watering hole attack, which means that it's indiscriminate.
Mikah Sargent [00:28:32]:
You know, it'll affect anyone who comes around it. Victims can be hacked simply by visiting a website that's hosting the malicious code, including legitimate websites that have been compromised by attackers. Once a device is infected, the exploits chain together multiple iOS vulnerabilities, which give hackers full control of the target's device, allowing them to siphon private data and upload it to a server they control. These hacking tools are written in HTML and JavaScript, so they're easy for anyone to configure and host. TechCrunch confirmed that they've seen the tools but declined to link to the GitHub repository, given the potential for misuse. I am curious, do you ever think about these exploits, these hacking tools? Does it ever freak you out?
Abrar Al-Heeti [00:29:23]:
It freaks me out. And at the same time, I'm like, another one? It's like, okay, the sun is shining; at this point, it's just another day. And it's actually sad that that's the reality: okay, here's another vulnerability. But when it's this widespread, I mean, you mentioned the fact that a lot of people are still on iOS 18, because, you know, it takes a while for people to upgrade to the latest software. And some people might not be excited about Liquid Glass. I promise you guys, it's not bad. Update to iOS 26.
Abrar Al-Heeti [00:29:50]:
It's worth it. But yeah, when something is so widespread, and when there's nothing that you could necessarily be doing wrong to avoid being a victim of it, it's not like clicking on a spammy text that you're getting; this is just visiting a website that's been compromised. And that's what makes it so scary. So it's like, yeah, I think about it, I see these headlines, I cringe at the reality, and then I'm like, well, what are we gonna do about it? It's kind of a sad response, but genuinely, what answer is there to this reality where people can just post this on GitHub and anyone can have access to it?
Mikah Sargent [00:30:28]:
Yeah, yeah. You are dead on with that. It's frustrating, because what do you do, right? Especially if these tools can be placed on sites that someone visits, just like a regular site that someone visits. If you don't know, then you don't know until you know. And when you know, it's too late.
Abrar Al-Heeti [00:30:53]:
Exactly.
Mikah Sargent [00:30:53]:
And yeah, that's very frustrating. Now, we do believe that at least part of one of the tools was originally developed by Trenchant, which is a hacking and spyware unit within the US defense contractor L3Harris. So we understand that this is a company that sells exploits to the US government and its closest allies. Kaspersky also linked two of the exploits in Coruna to Operation Triangulation, which is a sophisticated and, what we believe to be, government-led cyberattack carried out against Russian iPhone users. Somehow these exploits made their way from Trenchant into the hands of Russian spies and Chinese cybercriminals, possibly through intermediaries in the underground exploit market. And honestly, we've seen this before. These powerful hacking tools are developed under tight security restrictions.
Mikah Sargent [00:31:47]:
They're developed with lots of resources at play because they're government-backed, but then, unfortunately, they make their way out into the wild. The most notable precedent was in 2017, when an NSA-developed exploit for remotely breaking into Windows computers leaked online and was then used in the WannaCry ransomware attack, which hit hundreds of thousands of computers worldwide. We don't know as much with DarkSword. Researchers have observed attacks targeting users in China, Malaysia, Turkey, Saudi Arabia, and Ukraine, but it's unclear who originally developed it, how it ended up with different hacking groups, or who ultimately leaked it online. Now, one question you might also have is, why is GitHub keeping the code up? GitHub told TechCrunch that it has not taken down the leaked DarkSword code and intends to preserve it for security research. Online safety counsel Jesse Gurachi explained that the platform's policies prohibit posting content that directly supports unlawful
Mikah Sargent [00:32:50]:
active attack or malware campaigns, but added that they do not prohibit posting source code that could be used to develop malware or exploits, as the publication and distribution of such source code has educational value and provides a net benefit to the security community.
Abrar Al-Heeti [00:33:10]:
What a fascinating take, huh?
Mikah Sargent [00:33:11]:
Yeah. So here's the thing. From a practical, logical, sort of objective standpoint, I get the idea of wanting to have this code available to security researchers. My problem is, I don't think that what the counsel said lines up with the actions. It says the policies prohibit posting content that directly supports unlawful active attack or malware campaigns.
Abrar Al-Heeti [00:33:48]:
Yeah, ding, Ding, ding.
Mikah Sargent [00:33:49]:
Yeah, yeah.
Abrar Al-Heeti [00:33:52]:
Huh.
Mikah Sargent [00:33:53]:
But then it says it's okay if the code could be used to develop malware or exploits. Why? They do not prohibit posting source code that could be used to develop malware or exploits, but we've seen this code being used. So anyway, my point is, I don't see why this code is still up there. But it is, so what can we do? The bottom line is straightforward: update your iPhone or your iPad. If you're running anything older than the latest iOS 18 or iOS 26 releases, your device is potentially vulnerable to attacks that are now, and this is the big point, trivially easy to execute. If you can't update, or if you don't want to, okay, turn on Lockdown Mode.
Mikah Sargent [00:34:39]:
Turn on Lockdown Mode.
Abrar Al-Heeti [00:34:41]:
Just stop using your phone. It's fine.
Mikah Sargent [00:34:42]:
Yeah, yeah, yeah. Put it away. Put it in Lockdown Mode. That means put it in a safe and forget about it. No, if you have automatic updates enabled, then you should receive the patch without needing to do anything. But hey, go into Settings, go into General, go into Software Update, let that thing load, and get that update as quick as possible. That's how I feel, Abrar.
Mikah Sargent [00:35:05]:
Thank you so much for being here with us today. It's always a pleasure to get to chat with you. I appreciate your stories of the week and your time. If people would like to follow you online and check out all the great work you're doing, where should they go to do so?
Abrar Al-Heeti [00:35:19]:
I am on Instagram @abraralheeti, no spaces, no dash. I'm also on Twitter @alheti3, and you can find all my stories on cnet.com. And thank you, Mikah, so much. Always a pleasure.
Mikah Sargent [00:35:30]:
Always a pleasure. Bye bye.
Abrar Al-Heeti [00:35:32]:
Take care.
Mikah Sargent [00:35:34]:
All righty folks, we're going to take another quick little break. All righty folks, back from the break. And I'm rounding things out with a quick little story of the week. We kicked off the show talking about Meta and the MPAA's movie ratings. Meta has had quite a time in the past week, and it may have just become a turning point for social media accountability. We've talked in the past about the sort of big tobacco moment for social media, and that continues on. In the span of two days, juries in New Mexico and California delivered back-to-back verdicts against Meta.
Mikah Sargent [00:36:13]:
And in that California case, it was also against YouTube, over the harm their platforms caused to young users. Cecilia Kang and Eli Tan reported on both cases for the New York Times, and these two pieces together paint a picture of an industry that is, for the first time, facing real legal consequences for how its products are designed. The New Mexico jury ordered Meta to pay $375 million for misleading consumers about platform safety and enabling sexual exploitation of minors. One day later, a Los Angeles jury found both Meta and YouTube negligent in a bellwether social media addiction trial, awarding $6 million in damages to a young woman who says she became hooked on the platforms as a child. The dollar amounts are different, $375 million versus $6 million, but the implications are, frankly, similar. Juries are now willing to hold Big Tech accountable for the way their products are being shown to affect kids.
Mikah Sargent [00:37:13]:
When it came to the New Mexico verdict, this was on March 24th. It found that Meta had violated New Mexico's consumer protection laws by misleading users about the safety of its platforms. The state's attorney general filed the suit in 2023, arguing that Meta's lax safety protocols allowed sexual predators to contact minors on Instagram and Facebook. Now, in order to build this case out, investigators posed as underage users to lure online predators, showing these real instances of solicitation. The suit described Instagram as a breeding ground for sexual exploitation. And the six-week trial featured testimony from teachers, from investigators, from whistleblowers who spoke about safety concerns on Meta's platforms. Again, Meta ended up having to pay through, or the jury has ordered Meta to pay.
Mikah Sargent [00:38:07]:
Excuse me. The jury ordered Meta to pay $375 million in damages. The attorney general said that he would actually be asking the judge for additional financial penalties, and that is scheduled to start May 4, where he plans to push the court to force actual design changes to Meta's apps. So this is about more than just money. They want to. Well, I mean, ultimately it is. But they also want to get Meta to change the way that its apps are designed.
Mikah Sargent [00:38:38]:
Meta, of course, said it would absolutely be appealing, saying, we will continue to defend ourselves vigorously and we remain confident in our record of protecting teens online. As far as the California case goes, again, this was focused on one person. Arguably, this is the more consequential case because of the legal theory that it tests. The personal injury trial is the first of its kind to go before a jury and is expected to influence the outcome of thousands of similar lawsuits that are pending across the country. That's the big thing, you know, what we talk about when it comes to legal rulings: it's precedent, and the precedent is being set. The plaintiff, identified in court by her initials, filed suit in 2023 against Meta, against Snap, against YouTube, and against TikTok. She said she began using YouTube at age 6 and Instagram at age 9 and claimed the platforms caused personal injury, including body dysmorphia and thoughts of self-harm.
Mikah Sargent [00:39:35]:
TikTok and Snap both settled before the trial for undisclosed terms, so it was just Meta and YouTube that were there in the case and remain as defendants. This is the thing where we looked at the legal playbook of big tobacco, arguing that these companies created addictive products that harmed users. And there's also the key legal strategy, which is about product design, not product content. That way, Section 230, which would have shielded the companies, was not part of the process. Because of Section 230, these platforms are not responsible, right, for the content that's on there.
Mikah Sargent [00:40:23]:
So they had to say, you know what? We're not looking at the content. We're looking at the way you set up this stuff in the first place to make it more addictive. This is just the start, of course. I want to mention something that I thought was kind of interesting. On March 25, the jury found both companies liable. Right. All but two jurors determined that Meta and YouTube were negligent in designing their platforms and that their products harmed the young woman. Meta was assigned 70% of the responsibility for the harm.
Mikah Sargent [00:40:55]:
YouTube was responsible for 30%. The compensatory damages came to $3 million. Then came the punitive damages phase, and here we go. This is one of the more memorable courtroom moments. Lanier, who is one of the lawyers, held up a jar of M&M's, saying each piece of candy represented a billion dollars of the company's value. You can take out a handful and not make a difference.
Mikah Sargent [00:41:22]:
You can take out two handfuls and not make a difference. YouTube's lawyer took a different approach, apologizing directly to the young woman, saying, we are sorry for the things you have suffered. We at YouTube truly hope there have been things at YouTube that have enriched your life and allowed you to express yourself. The other lawyer responded, saying a lawyer's apology is not the same as accountability. He cracked the shell off of a single blue M&M with his teeth and said, this is like $200 million. They do not want to feel the pain for what they did. So I think there's a lot of drama there, which is really fascinating. But there's more to go, more to understand, more to figure out as these companies continue to defend themselves, and as we continue to look at the effects that social media has on, frankly, anyone, but in particular our youth, and whether the argument holds weight that these tools, these networks, these platforms are damaging to anyone, and in particular kids.
Mikah Sargent [00:42:34]:
Folks, that is going to bring us to the end of this episode of Tech News Weekly. I want to thank you so much for tuning in this week. It's always a pleasure to get to bring you this show. If you'd like to subscribe to the show and you're not already subscribed, you can head to twit.tv/tnw to subscribe in audio and video formats. You can also follow me online, @mikahsargent on many a social media network, or head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A dot coffee, where I've got links to the places I'm most active online. Be sure to check out my other shows that publish today, iOS Today and Hands-On Apple. And of course, you can check out my show Hands-On Tech, which publishes every Sunday.
Mikah Sargent [00:43:11]:
We'll be recording new episodes for the month of April, so be sure to tune in then. Thank you so much, and we'll catch you again next week for another episode of Tech News Weekly. Bye-bye.