Transcripts

Tech News Weekly 399 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

0:00:00 - Mikah Sargent
Coming up on Tech News Weekly. Abrar Al-Heeti is here and we talk about Tesla being held partially liable for a crash in 2019 involving autopilot. Then Project IRE, Microsoft's way of detecting malware using AI, before I talk about age verification across the internet, both in the US and around the world. And then Sabrina Ortiz of ZDNet stops by to give us the lowdown on OpenAI's new GPT-5. All of that coming up on Tech News Weekly.

This is Tech News Weekly, Episode 399, with Abrar Al-Heeti and me, Mikah Sargent, Recorded Thursday, August 7th, 2025. OpenAI Announces GPT-5. Hello and welcome to Tech News Weekly, the show where every week, we talk to and about the people making and breaking that tech news. I am your host, Mikah Sargent, and, given that it is the first Thursday of the month (wow, August is already here), we are joined by the wonderful Abrar Al-Heeti. Welcome back.

0:01:21 - Abrar Al-Heeti
Thanks for having me. Wow, you have a great voice. I think everyone should sing my name whenever they can.

0:01:28 - Mikah Sargent
You know I have a habit of singing. It must be with the microphone, it's just-.

0:01:32 - Abrar Al-Heeti
Yes, it helps. Yeah, it's necessary. Between that and ASMR.

0:01:37 - Mikah Sargent
Well, it's good to see you, and I have been watching some of your videos on Instagram. I saw you doing an unboxing the other day. Very cool.

0:01:48 - Abrar Al-Heeti
I'm tapping into the personal interest, mixing it in with the work stuff you know, showing I have a personality every now and then it helps yeah.

0:01:54 - Mikah Sargent
Not just a news personality, but a personality.

0:01:57 - Abrar Al-Heeti
Exactly, exactly.

0:01:59 - Mikah Sargent
So, for people who are tuning in for the first time, the start of the show always involves our stories of the week. These are stories that we have found interesting or, in some cases, stories we've written ourselves. It's never me having written the story, maybe one day. But tell us about your story of the week.

0:02:18 - Abrar Al-Heeti
Yeah, I wanted to talk about Tesla's autopilot troubles. So a few days ago, a jury found Tesla to be partly liable for a crash in 2019 that ended up killing one person and seriously injuring another, and, as a result, Tesla now has to pay $243 million in damages, which is no pocket change. So, essentially, what happened? The family of a woman who was killed sued the driver and reached a settlement, but then the two families ended up filing a joint federal lawsuit against Tesla last year. The allegation was that autopilot didn't warn the driver that the road was ending; essentially, he was looking down to pick up a phone that he had dropped, and when he did that, he ended up crashing into this couple.

What's been interesting about this is, when Tesla has faced similar suits, it has tended to settle them out of court, and so now we kind of get to look at how Tesla handles a situation like this.

So, probably not too surprisingly, Tesla placed the sole blame on the driver. They said the company's terms and conditions deem that the driver is ultimately responsible for the vehicle, no matter what feature is engaged. They pointed to the fact that the driver was reaching for his phone, and that in any car, what ended up happening here is what would have happened. Then, on the other hand, the plaintiffs' attorney pointed to a lot of the things that Elon Musk has said about autopilot, essentially touting how good it is, the quote superhuman sensors, and that it could see any object on the road, including an alien spaceship. So, essentially, very, very bold claims about the capabilities of autopilot, which is nothing new; it's something Tesla has been accused of maybe overhyping for a long time. And there's a quote from the plaintiffs' attorney that I thought was really good, which is: in the Tesla showroom it's the greatest car ever made. In the courtroom they say it's a jalopy, jalopy, jalopy.

I don't know how to pronounce that; it's one of those words that you read but don't hear. So, which is a really interesting contrast there: here are the two sides of the argument. Which side is the truth here? Is this technology advanced enough? And so, what they ended up finding: the jury ruled that the autopilot technology was partially to blame for allowing the driver to kind of take his eyes off the road and for not warning him that the road was ending. Tesla, of course, is going to appeal this, and they say that the verdict is wrong.

And then you hear about all this, and you think about Tesla's truly fully self-driving robotaxi that it's now rolling out in Austin, and whether any of these concerns could spill over to that. So, as I'm reading about all of this, the question I'm now thinking about is this: it was a really tough battle for the plaintiffs, and to think that there was such an uphill battle here to prove that this technology doesn't do all that it claims to do, it makes me wonder, is there even a point to having a technology like that if, in the end, so much of the blame is going to be pinned on the driver, and it takes this much to prove that the technology itself should take a part of the blame? So I'm curious what your thoughts are about this case and the result of it.

0:05:59 - Mikah Sargent
Yeah. So the first thing I want to say is: it is, as you point out, wild, the difference between what Tesla has to say about itself versus what everyone outside of Tesla has to say. The jalopy versus the "I can see spaceships" nonsense. Because the experts that I have talked to regularly talk about how Tesla is behind in sensing technology. It's almost like that relative you have who refuses to be wrong and will continue to do something out of spite, even though it is not good for them. Tesla sort of stuck to this sensing technology and continues to tout it as being the best. Meanwhile, other companies and, more importantly, car manufacturers who have been doing this a lot longer (by that I mean making cars a lot longer) have chosen other sensing tech that is arguably better at doing the sensing.

Yeah. Tesla is so stuck in its ways and so standing behind a technology that everyone outside is going, "No, this isn't as good as it could be; using more sensors, and different kinds of sensors, would be better here." So that on its own really sticks out to me as an "okay, Tesla, if you say so" sort of situation. But when it's something as serious as this, then we've got a problem, right? This is not a simple situation where you can just sort of go, "Okay, you do you." No, you are actually potentially causing harm by insisting on your technology being so good in comparison to what else is out there, and that's not great. It's dangerous and perplexing, and I agree that there shouldn't be this much work involved in holding a company responsible for this. Sure, I do think that there's blame involved with the human being who chooses to not use this technology appropriately.

But the question becomes: did Tesla do enough, or does any self-driving technology creator do enough, to make it clear to the end user that it is not something that can be relied on all on its own? And, more importantly, not giving the user the opportunity to misuse it is, I think, incredibly important.

And so, even if it means that the self-driving car sees that you are not holding to the rules about needing to keep your hands on the wheel and look out the window, it uses all of these magical sensors it has to look around, finds a way to come to a complete stop in a safe place, and says, "Sorry, but you can't use autopilot until you start doing what we require." You know what I mean? There could be more protections in place for this technology. Or maybe it's like a strike system: if you use autopilot incorrectly X amount of times, then the only way to re-enable autopilot is to go back to your dealer and have it re-enabled there. There should be things in place that kind of discourage the misuse of this technology.

0:10:22 - Abrar Al-Heeti
Exactly, yeah. If there are no strikes, then why would somebody not depend on it, if it's supposed to make that drive easier? And another piece of this is the argument that Tesla was reckless for allowing autopilot to function on a road that it's not designed for. But then Tesla points to the owner's manual and says, well, it's up to you to really pay attention here. And the other piece of this that's interesting is the families told the jury that they initially didn't know the driver was using autopilot when they sued him, and then, as time went on, they realized he was, and they said there were two components in the accident. So there's the driver and then there's the car, and you have to think about those two things at play here. And the reason people engage that kind of technology, as you mentioned, is that it's supposed to be helpful and it's supposed to protect. It's supposed to help in moments where you might have missed something, a human failure, and catch that. But it didn't even do that; it failed him in that regard. And there are many more instances of that, where people think, oh, this technology is going to be helpful for me on longer drives, and that's why I get a car that has this type of tech. But if people then become too dependent on that, and there are no ramifications for what could happen based on that dependency, then, yeah, it's just going to lead to more issues.

And I think one other piece is that Tesla's image is very important to it, in terms of "look how far ahead we are," right? So when you look at branding something autopilot, or full self-driving, when it's not necessarily that, or developing robotaxis and saying we're going to solely depend on cameras, we don't need to do what the other guys are doing with their lidar and radar, we're going to be better, we're going to be cooler and sleeker, you have to think about what that means safety-wise. The buzzwords are one thing, but what are the results of those actions, and how effective is that technology in reality? And I think this was a sharp wake-up call.

0:12:27 - Mikah Sargent
One hopes. It is a lot of money, but will there be appeals? Will there be reductions in the amount of money? That's always the question. And then you also have to ask: is this a slap on the wrist to a company backed by, in theory, a multibillionaire, right? If the company is required to foot this bill out of what Tesla has, then it does become more than just a slap on the wrist, particularly with Tesla sales not doing super well at the moment. So it's a real question of what kind of difference this makes. And I think, more importantly for me, I really believe that if self-driving technology is done correctly, tested appropriately, rolled out appropriately, and allowed to flourish, we could have a really good, really safe system, and that would be awesome. And every instance of this stuff messing up because of the shortcuts being taken only makes that harder to happen, and that really grinds my gears.

0:14:01 - Abrar Al-Heeti
Yes, exactly, I don't think anyone's opposed to technology getting better and helping us with tasks like driving, but it has to be actually safe and effective. And, yeah, this was a blow, but hopefully things get better and we take these kinds of things more seriously.

0:14:18 - Mikah Sargent
Absolutely. Well, I think it's time for us to take a little break. Before we come back with my story of the week, I want to tell you about our dear friends at Zscaler, who are bringing you this episode of Tech News Weekly. Zscaler, the leader in cloud security. We know hackers are using AI to breach your organization. Because AI powers innovation, it drives efficiency. But there's the flip side of that: it also helps bad actors deliver more relentless and effective attacks. Phishing attacks over encrypted channels increased by 34.1%, fueled by the growing use of generative AI tools and phishing-as-a-service kits. Organizations in all industries, from small to large, are leveraging AI to increase employee productivity with public AI: engineers with coding assistants, marketers with writing tools, and finance creating spreadsheet formulas. It's also used to automate workflows for operational efficiency across individuals and teams. It's used to embed AI into applications and services that are customer- and partner-facing and, ultimately, to move faster in the market and gain competitive advantage. Companies need to rethink how they protect their private and public use of AI and how they defend against AI-powered attacks. Stephen Harrison, the CISO of MGM Resorts International, says, "With Zscaler, we hit zero-trust segmentation across our workforce in record time, and the day-to-day maintenance of the solution, with data loss protection, with insights into our applications: these were really quick and easy wins from our perspective." Traditional firewalls, VPNs, public-facing IPs: they all expose your attack surface and, frankly, are no match in the AI era. So it's time for a modern approach with Zscaler's comprehensive zero-trust architecture and AI that ensures safe public AI productivity, protects the integrity of private AI and, of course, stops AI-powered attacks. 
Thrive in the AI era with Zscaler Zero Trust Plus AI to stay ahead of the competition and remain resilient even as threats and risks evolve. Learn more at zscaler.com/security. That's zscaler.com/security and, as always, we thank Zscaler for sponsoring this week's episode of Tech News Weekly.

All right, we are back from the break here on Tech News Weekly, joined by Abrar Al-Heeti. Time for my story of the week.

Imagine, if you will, an AI system that can look at a piece of unknown software and determine whether it's malicious, not by comparing it to known threats, but by actually understanding what the code is trying to do. Microsoft has developed exactly that with Project IRE, an autonomous AI system that achieved nearly 90% precision in detecting malware that had stumped all other automated tools. This isn't simply pattern matching. It's true code comprehension at a scale never before possible. So when we look at kind of the situation right now (we were just with Zscaler talking about this), we're looking at code and code comprehension and malware detection, and the way things are right now has kind of hit a wall. Microsoft's security teams face this overwhelming challenge because you've got thousands of suspicious files that automated systems simply can't classify. They call them hard-target files, files that require expert reverse engineers, human beings, to manually examine, and, as you can imagine, that's a very time-consuming task. It's a bottleneck in an era where you've got new malware appearing all the time because it's AI versus AI. In the piece where they talk about Project IRE, it says the more demanding test involved nearly 4,000 hard-target files not classified by automated systems and slated for manual review by those expert reverse engineers. So you've got these signature-based detection methods that only catch known threats, because signature-based detection means looking for something that already exists and trying to pattern-match. And then there's behavioral analysis, sure, but it can be fooled by sophisticated evasion techniques. So Microsoft said, look, we've got to kind of throw out the playbook a little bit and start fresh, teaching AI to think like a reverse engineer.

Project IRE is a breakthrough in autonomous malware analysis because, unlike conventional systems that look for specific patterns, those signatures, Project IRE actually decompiles and analyzes the code's functionality, understanding what the software does, not just matching it against known signatures. So how does it go about doing this? Well, first and foremost, it decompiles binary files into readable code. It analyzes function behavior and system interactions. It identifies malicious patterns through code comprehension, and then afterwards it generates detailed reports explaining its findings. So being able to not only do these things but do them at scale, with speed and accuracy, that's something they haven't seen before.
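To make that pipeline a little more concrete, here's a rough sketch in Python of the loop just described: decompile, inspect behavior, flag suspicious patterns, report. To be clear, every name and behavior tag here is hypothetical; Microsoft hasn't published Project IRE's actual code or APIs, so treat this as an illustration of the idea, not the real system.

```python
# Hypothetical triage loop: decompile -> inspect behavior -> flag -> report.
# The behavior tags mirror the case studies discussed (process manipulation,
# entry-point patching, C2 traffic, AV-killer behavior), but the names and
# thresholds are invented for illustration.

SUSPICIOUS_BEHAVIORS = {
    "enumerates_processes",       # enumerating/manipulating system processes
    "patches_entry_point",        # process-injection technique
    "terminates_av_process",      # antivirus-killer behavior
    "http_command_and_control",   # command-and-control communication
}

def analyze_sample(decompiled_behaviors):
    """Classify a sample from behavior tags recovered by decompilation
    (a stand-in for real code comprehension) and return a small report."""
    findings = sorted(set(decompiled_behaviors) & SUSPICIOUS_BEHAVIORS)
    # Require more than one suspicious behavior before calling it malicious,
    # an arbitrary rule chosen here to keep false positives down.
    verdict = "malicious" if len(findings) >= 2 else "unknown"
    return {"verdict": verdict, "evidence": findings}

sample = ["reads_config", "patches_entry_point", "http_command_and_control"]
print(analyze_sample(sample))
```

The interesting part, which this toy obviously skips, is step one: recovering trustworthy behavior descriptions from a raw binary. That's the piece Microsoft is claiming the AI can now do.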

According to Microsoft's real-world evaluation, Project IRE achieved pretty wild results: an 89% precision rate, so nearly 9 out of 10 files that Project IRE flagged really were malicious. It had a 26% recall rate, meaning it detected roughly a quarter of all actual malware in the test set, and then had only a 4% false positive rate. That's a pretty low error rate, which would mean that security teams could go look at the flagged files and say, oh, this one's a false positive, versus up to this point, where you might get a lot more false positives. It's impressive because the system was tested on files that had already stumped all other automated detection systems, and these, of course, were the cases that would normally require human experts.
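For what those three numbers mean in practice, here's a quick worked example. The raw file counts below are made up (the piece only gives the rates), but they show how precision, recall, and false positive rate relate to each other:

```python
# Illustrative confusion-matrix counts chosen to reproduce roughly the
# reported rates (89% precision, 26% recall, ~4% false positives).
# These counts are NOT from Microsoft's evaluation.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)  # of the files flagged, how many were malware
    recall = tp / (tp + fn)     # of all the malware, how much was caught
    fpr = fp / (fp + tn)        # benign files wrongly flagged as malware
    return precision, recall, fpr

p, r, f = metrics(tp=89, fp=11, fn=253, tn=264)
print(f"precision={p:.2f} recall={r:.2f} false-positive-rate={f:.2f}")
# prints: precision=0.89 recall=0.26 false-positive-rate=0.04
```

The takeaway is that high precision with modest recall is a deliberate trade-off: the tool misses most malware in the set, but when it does raise its hand, humans can trust the flag and spend their time where it counts.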

There are different kinds of malware that the AI worked on, and I thought this was kind of interesting. So one of the case studies was process manipulation malware, a sample that exhibited evasion techniques, identifying functions for enumerating and manipulating system processes, detected HTTP communication capabilities for command and control, and then was able to discover these process injection techniques using entry point patching. So these are a lot of big words that basically mean it found malware that used kind of an ability to access a network, a server, whatever it happens to be, and communicate what it needed to do process manipulation, so affecting these processes. Another case was an antivirus killer tool. So it found a tool that was designed to disable security software, detected code that targeted specific antivirus processes by name, identified process termination functions aimed at security software and, which is kind of cool, in doing so it caught and corrected its own initial misidentification by using validation.

So there's obviously potentially more here, and I think, when it comes to Microsoft in particular, you know, Microsoft already creates and sells one of the biggest operating system platforms that exists and, in an interesting kind of way, also sells security packages for that software. So any improvements there I think are good, particularly when we've seen breaches in the past have such an impact on infrastructure. But yeah, I kind of wanted to get your take, Abrar, just in general. I find it interesting seeing AI applied to a very technical thing that, frankly, AI should be good at, one would think.

0:23:13 - Abrar Al-Heeti
Yeah, less AI creating art, more AI catching this. I think that would be a very, very welcome change. Especially hearing things about how it could correct itself is very cool. I mean, I think this is really the epitome of how AI can be helpful without being threatening. Of course, there are still jobs and tasks currently done by humans that AI is stepping in on. But if it can be more efficient and more helpful, and it's not tapping into the "oh, here's a piece of literature written by AI" or "here's a piece of artwork," I think it's less off-putting when it's a technical application of AI, where it can keep people safer and protect systems and actually be something that most people might be willing to get behind. I think that's a very refreshing thing to see.

0:24:05 - Mikah Sargent
Yeah, and I like, too, that this is once again a tool for augmenting the human. That's what's so important, because there is still human involvement at the end. It's just helping the human focus on those tasks that are the most difficult, and in this case, the stuff they're coming across is the most difficult. It's almost a reminder that even when you throw something we're seeing to be pretty capable at a problem, in the most intense cases you still do need a human to be involved in it, and I think that's pretty cool.

In looking at where this is, this Project IRE, this binary analyzer, they kind of broke down some of the benefits. Like speed: being able to do instant analysis of suspicious files without human bottlenecks, of course.

Scale, being incredibly important: analyzing thousands of samples simultaneously. Consistency, whenever it comes to the fatigue involved in analysis. As a former copy editor, fatigue was always something we had to watch out for, because your brain wants to automatically autocorrect to what you know is accurate, and having to come up with techniques to circumvent that human error from fatigue is really important. The AI doesn't get bored and can continue to just do what it needs to do. And then, of course, the idea that it would be able to combat novel attack techniques, versus, again, working on the kind of signature detection that we've had up to this point. So I think this is really cool. Microsoft is working on rolling this out into its Defender program and adding it to future security tools as well, and, by golly, I think that if this is something that can help security researchers and security organizations do the part that they do well, that is a great use of AI that I'm glad there are researchers working on.

0:26:33 - Abrar Al-Heeti
So yeah, that augmented-future angle is really the key point. Yeah, exactly.

0:26:38 - Mikah Sargent
Absolutely. I want to, of course, thank you so much for taking the time to join me today on Tech News Weekly. If people are looking to follow along with the work you're doing and watch those unboxing videos from time to time, where should they go to do that?

0:26:54 - Abrar Al-Heeti
Well, you should come join me on my Instagram, Abrar Al-Heeti, no spaces or hyphens. I'm also on CNET, on CNET.com, CNET's YouTube, CNET socials and, for fun, also on TikTok, also at Abrar Al-Heeti. And I'm on X, too: al-heeti, underscore, three. All the places, really. It's too much, but please follow along.

0:27:14 - Mikah Sargent
Thank you, Abrar. We appreciate it. Thanks for having me. All righty folks, we're going to take another quick break before we come back with a story of the week and then our interview after that with ZDNet AI Senior Editor, very excited about that coming up as well. But let me tell you about Acronis and the Acronis Threat Research Unit sponsoring this week's episode of Tech News Weekly.

Listen, you do deserve fewer headaches in your life. Even something as simple as watching TV can become a headache when your favorite shows are scattered across different streaming services. Where can I watch the show I want to watch? It's nearly impossible to find one place that has everything you need.

Acronis takes the headache out of cybersecurity with a natively integrated platform that offers comprehensive cyber protection in a single console. And if you want to know what's happening in cybersecurity, the Acronis Threat Research Unit, or TRU, is the place to go. It's your one-stop source for cybersecurity research. TRU also helps MSPs stop threats before they can damage your or your clients' organization. The Acronis Threat Research Unit is a dedicated unit composed of experienced cybersecurity experts. The team includes cross-functional experts in cybersecurity, AI, and threat intelligence. TRU conducts deep, intelligence-driven research into emerging cyber threats, proactively manages cyber risks and responds to incidents, and provides security best practices to assist IT teams in building robust security frameworks. They also offer threat intelligence reports, custom security recommendations, and educational workshops. Whether you're an MSP looking to protect clients or you need to safeguard data in your own organization, Acronis has what you need. It's all there in Acronis Cyber Protect Cloud: EDR, XDR, remote monitoring and management, managed detection and response, email security, Microsoft 365 security, and even security awareness training, all available in a single platform with a single point of control for everything, so it's easy to deploy and manage. If managing cybersecurity gives you a headache, it's time to check out Acronis. Know what's going on in the cybersecurity world by visiting go.acronis.com/twit and take the headache out of cybersecurity. That's go.acronis.com/twit. Thank you, Acronis, for sponsoring this week's episode of Tech News Weekly.

All right, we are back from the break and I've got one more story of the week for you.

This is kind of an overarching story of the way things are going right now. On July 25th of this year, the UK became one of the first countries to widely implement age verification across the internet, requiring sites like Reddit, Discord, X, and Bluesky to verify that users are over 18 before accessing what is considered harmful content. But the early results have been nothing short of chaotic. While some services complied, others pulled out of the country entirely rather than face the risks and expenses. Users have already found ways to trick the verification systems or simply bypass them with VPNs. And, of course, as you might imagine, this messy rollout is just a preview of what's coming globally, as lawmakers worldwide push for an age-gated internet despite warnings from privacy and security experts that the technology simply isn't ready for broad adoption.

The Verge's Emma Roth wrote about the way things are going right now as we see the internet adding age verification around the world. And the UK's implementation has kind of shown that there are some problems: as we mentioned, companies pulling out of the country rather than having to worry about what compliance would cost, and a patchwork of access across the internet with easy ways of getting entry via VPNs. One of the big problems here is that there's a huge variation in the way that age verification is taking place. Some require payment cards, so if you've got a credit card, then that would put you above a certain age. Government ID uploads, selfie verification, age estimation based on account creation dates and user connections: all of these are different means of doing age verification, and they're all being used across different sites, because the rules, as they are put forth, have no specific requirement on how it has to be done. And, as you might imagine, most platforms are simply outsourcing this to third-party services. One of those is Epic Games' Kids Web Services, used by Bluesky; a service called Persona, which is working with Reddit; and k-ID, which is working with Discord. But with these different third-party services being used, all having different privacy policies, all having different methods of verification, all having different people involved, it's a privacy nightmare. The article quotes Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU): the current situation is a nightmare because there's no standardization of how age verification is supposed to take place.

Some services promise to delete data after seven days (that's what Persona promises), but there's no guarantee they're going to follow through. Data breaches are increasingly common. Last year, one service, AU10TIX (that's A-U-1-0-T-I-X), which is used by TikTok, Uber, and X, quote, "left user information and driver's license photos exposed for months." So when uploading your ID, you're handing it over to a third party. You're going to take their word that they'll delete it or remove it after they're done using it, and in many cases they're not.

Despite the fact that we're seeing these services fail to protect user data, there is momentum across the world. Governments worldwide are, quote, "plowing toward the future of an age-gated internet." Anyway, the movement includes the European Union, which is rolling out government-managed digital IDs (hey, at least it's government-managed in that case); Australia, which is age-gating search engines; and the US, where multiple states are requiring ID verification for adult websites, for now. The US legal landscape shifted dramatically when the Supreme Court overturned precedent earlier in 2025, saying that adults have no First Amendment right to avoid age verification if it protects minors from obscene content. So, specifically when it comes to obscene content, Alabama, Idaho, Indiana, Kentucky, North Carolina, and Texas have all implemented these laws.

I mentioned the EU's centralized approach. That, at least, is something a little different, right? That's good. The European Union is trying a different approach with its government-managed digital IDs. The system allows users to upload their passport or their government ID card to a government-built system. It's kind of odd that you have to upload this, given that it is the government itself, but anyway, afterwards it generates a proof-of-age attestation that gets passed to websites. So it, of course, solves some of these problems.

But there are some concerns that folks have. Surveillance risks: digital IDs may phone home to track online activity. Accessibility issues, which could restrict undocumented individuals from accessing content. And, of course, the privacy concerns. The EFF says: if I pull up my ID at the liquor store, the DMV doesn't know that, but with digital identification there's potential for that. So now you have a situation where the DMV could, in theory, keep track of your trips to the liquor store, and does that mean they could then use that information to imply that you are drinking and driving, for example, or pass it along to law enforcement to check on you? For some people, maybe that's a thing they think is good. For others, there are, of course, those privacy concerns. But of all of the approaches, I think the EU's centralized approach at least limits some of the privacy concerns one might have when it comes to passing off this age verification to a third party, and then having so many different third parties, all with your data. So there's a lot more about this.

Steve Gibson over on Security Now has talked a lot about this: the zero-knowledge proof being the kind of technical solution here. The EU is attempting to enhance the system with zero-knowledge proof (ZKP) technology. It's a cryptographic verification method that allows a service to prove something is true or false without revealing any additional information. So, again, it just says yes or no, and that's it. Yes, the person passes the test of being of a specific age, or no, without giving more information about that person's age or anything else. Google has built ZKP into Google Wallet and has open-sourced the technology, but it is an advanced approach that has its own limitations, according to the EFF's Alexis Hancock. Hancock says, "I haven't seen anything remotely promising at the moment that actually reels in verifiers. In particular, there's not a lot of scope restriction on who can actually ask for this and if it's even needed in some cases." So, basically: yes, there's a zero-knowledge proof, but does every site need to be able to ask whether someone passes this age gate or not, and should it be limited to only sites that need to do age verification, based on the fact that they have content that is supposed to be age-gated in the first place? Sites are choosing to just go ahead and add the age verification to protect themselves from potential lawsuits or fees from the government, and that in itself is a bit of a concern. It's like what we have seen in the past with different sites rolling out features: the cookie notices that pop up that maybe aren't actually necessary, and the types of tacked-on accessibility features that aren't as good as built-in accessibility, all added to a site because it's about avoiding lawsuits as opposed to actually improving the technology to follow the rules that are set in place. Of course, there's lots of political push and justification for the ways that this is being rolled out.
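To illustrate the property being described here, the verifier learning only a yes or no, consider this toy Python sketch. Important caveat: this is not a real zero-knowledge proof (those involve much heavier cryptography, like the library Google open-sourced); it just mimics the interface with a signed attestation from a hypothetical trusted issuer, to show that the site only ever receives a single bit, never the birthdate.

```python
# Toy model of the proof-of-age attestation interface: an issuer who
# knows your birthdate signs a single over-18 bit; the website verifies
# the signature and learns only that bit. All names here are invented.

import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-key-not-a-real-secret"  # placeholder issuer secret

def issue_attestation(birthdate: date, today: date) -> bytes:
    """Issuer side: derive the over-18 bit and sign it. The birthdate
    itself never leaves the issuer."""
    age = (today.year - birthdate.year
           - ((today.month, today.day) < (birthdate.month, birthdate.day)))
    claim = json.dumps({"over_18": age >= 18}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return claim + b"." + sig.encode()

def verify(attestation: bytes) -> bool:
    """Website side: check the signature, then read the single bit."""
    claim, sig = attestation.rsplit(b".", 1)
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad attestation")
    return json.loads(claim)["over_18"]  # the only thing the site learns

token = issue_attestation(date(2000, 1, 1), date(2025, 8, 7))
print(verify(token))  # prints: True
```

A real ZKP improves on this sketch in exactly the ways Hancock's criticism touches: the verifier doesn't need a shared key with the issuer, and the proof can't be linked back to the person, but nothing in the scheme limits which sites get to ask the question.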

Lawmakers and regulators argue that the benefits continue to outweigh the risks. Melanie Dawes, who is the CEO of Ofcom, the UK's communications regulator, says prioritizing clicks and engagement over children's online safety will no longer be tolerated in the UK. As you might imagine, when the rallying cry is protecting the kids, it's hard to argue against that. Senator Katie Britt said, "Putting in place common sense guardrails to protect our kids" (there it is, protect our kids) "from the dangers of social media is critical for their future and America's future."

But, as you might imagine, there are some concerning trends involved with this escalating government digital surveillance that we've seen and continue to see: attempts to declare expressions of LGBTQ+ sexuality obscene, essentially making drag shows obscene and therefore age-gating or blocking access to that content. With that, and the inclusion of personal data, especially held by a centralized organization, that becomes more of the question. So I think that the conclusion in Emma Roth's piece is perhaps the most important part of it.

Right now, there just isn't any clear-cut way to verify someone's age online without risking a leak of personal information or hampering access to the internet. Until lawmakers stop and think about the bigger picture, everyone's privacy is going to be at risk. So the fundamental problem is that lawmakers are implementing these systems before the technology is ready, saying you've got to do it and you'd better hop on it. But the tech isn't ready and the companies aren't ready to roll it out, and so it's a whole issue creating a perfect storm of privacy violations, security breaches, and restricted internet access. Protecting children online? Of course I think that's important, you think that's important, everyone thinks that's important, but the current approach, in its current state, may cause more harm than good. Though there's a part of me that argues that the way things have been done up until now with big tech is they would do a thing and then ask for forgiveness later, and therefore, unless you force big tech to make a change, they're not going to make a change. So I understand that aspect of it, of saying no, we're setting a hard line here, you will do this and you need to do it now. It would be nice if we could come to a middle ground. So be prepared, those of you, our dear listeners, in the US and elsewhere, as age verification continues to roll out and more countries, and, in the case of the US, states that are more and more acting like their own countries, hop on board. All right, we're going to take a quick break before we come back, with Sabrina Ortiz joining us to talk about the freshly announced GPT-5 from OpenAI.

Let me tell you right quick about Smarty, who is bringing you this episode of Tech News Weekly. You can discover what's possible when address data actually works for you. Smarty is revolutionizing how you handle address information, bringing automation, speed and accuracy to processes that used to be manual, that used to be error-prone, that used to be frustrating. With Smarty's cloud-based address validation APIs, you can instantly check and correct addresses in real time, so you don't have to worry about bad data, compliance risks, undeliverable mail, or costly delays. Add autocomplete to your web forms so your customers select valid, verified addresses as they type. This will improve their user experience and yield much better data for you. Companies like Fabletics have drastically increased conversion rates for new customers, especially internationally. Smarty's Property Data API unlocks 350-plus insights on every address, from square footage to tax history, automatically enriching your database. It's incredibly fast, 25,000-plus addresses per second, and very easy to integrate.

The Red Cross needed accurate address data to allocate resources. A project manager says the Smarty tool has been fundamental: "I've never experienced any issues with the tools and they seem to be getting better all the time. The address verification really does make an impact. We're able to reach the communities we serve because we have good addresses." Smarty is a 2025 award winner across many G2 categories, like Best Results, Best Usability, Users Most Likely to Recommend, and High Performer for Small Business. Smarty is also USPS CASS and SOC 2 certified and HIPAA compliant.

Whether you're building your first form or modernizing an entire platform, Smarty gives you the tools to do it smarter. Try it yourself. Get 1,000 free lookups when you sign up for a 42-day free trial. Visit smarty.com/twit to learn more. That's smarty.com/twit, T-W-I-T. And yes, I did ask the folks at Smarty, is 42 the reference? I think it is. And they said, if you mean, is it the answer to life, the universe and everything? And I said, yes, I knew you were nerds. I love it. smarty.com/twit, 42-day free trial. Thank you, Smarty, for sponsoring this week's episode of Tech News Weekly.

All right, we are back from the break and I'm very excited to be joined by Sabrina Ortiz, senior editor at ZDNet, who knows a thing or two about AI. Welcome to the show, Sabrina.

0:45:35 - Sabrina Ortiz
Hi everyone, I'm super excited to be here Big day.

0:45:37 - Mikah Sargent
Yeah, it absolutely is. So this is pretty cool. I originally was bringing Sabrina on to talk about OpenAI's open source models, and this morning I got an email and Sabrina said, you know, GPT-5 is about to be coming out, right, and I said, can we talk about that instead? And you are ready to talk about it, so let's kick things off. Yeah, I mean, I imagine you watched the announcement. Could you tell us kind of what you expected going into it and what we're looking at with this latest model?

0:46:15 - Sabrina Ortiz
Yeah, this is one of the rare cases where what you expect is actually what was released, in the best way, right? So, basically, what this model does, and the highlight of it, is that it takes the guesswork out of the entire equation for the end user. As you've probably known or seen on ChatGPT, if you're using it, there's a model picker. If you were a paid user, specifically, there was a model picker, and then you could pick amongst the alphabet soup of OpenAI models, right? Like there's o3, o4, this mini, that, and then you would ideally pick the one that was best suited for your own prompt.

So if you were doing a really complex coding or math problem, you'd probably opt for a reasoning model like o3 or o4, but then o4 kind of sounds like 4o, and 4o is one of the GPT models that you could use for mostly everything. So it's an alphabet soup, and it's confusing. Basically, what GPT-5 does is it has two models already in it, either reasoning models or the standard GPT all-purpose general query model, and it'll automatically be able to pick for you, depending on what you input. So this just makes it so much easier for the end user, because now you don't have to guess, and now you get to balance speed and quality in a much more optimized way where you're not guessing.
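For readers curious what that kind of automatic dispatch looks like in principle, here is a purely hypothetical sketch in Python. The model names and the keyword heuristic are invented for illustration; OpenAI has not published how its GPT-5 router actually decides, and a production router would be a learned classifier, not a keyword list.

```python
# Hypothetical sketch of prompt routing: one entry point dispatches a
# prompt to either a fast general model or a slower reasoning model,
# so the end user never has to pick. All names here are illustrative.
REASONING_HINTS = ("prove", "step by step", "debug", "integral", "algorithm")

def route(prompt: str) -> str:
    """Pick a backend model based on a crude heuristic."""
    if any(hint in prompt.lower() for hint in REASONING_HINTS):
        return "reasoning-model"      # slower, deliberate answers
    return "fast-general-model"       # quick, all-purpose answers

print(route("What's a good pasta recipe?"))       # fast-general-model
print(route("Debug this segfault step by step"))  # reasoning-model
```

The design point is the one Sabrina makes: the complexity of choosing a model moves behind a single interface, trading a little routing overhead for a much simpler user experience.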

0:47:46 - Mikah Sargent
Okay, that makes sense. Now, when it comes to that, does that mean that if I'm using GPT-5, I'll still be able to, for example, give it a photo and it'll analyze the photo properly, or I can say, generate a photo, and it will do that, or I give it some code. It's like the all-in-one package, truly.

0:48:08 - Sabrina Ortiz
Yes, it's all-in-one. So basically everything you were able to do before, you'll still be able to do now. But the biggest highlight is that both free users and paid users will also be able to take advantage of some of those reasoning capabilities. Free users previously didn't have access to any of these reasoning models. Now, because it's in GPT-5 and everybody's getting access to GPT-5, you'll be able to do the same queries you used to do before, but now also do those bonus, really complex math and coding problems and get that higher-level assistance.

0:48:43 - Mikah Sargent
Now, one of the things that you talk about in this early piece related to GPT-5 is performance improvements, and, given the bigger highlight about it being this all-in-one tool and taking out the guesswork, I think some of the other improvements fall by the wayside. Can you tell us about what we're seeing in terms of performance?

0:49:05 - Sabrina Ortiz
Oh yeah, all around it's going to be better performance, right? Like one of the biggest perks or advantages is taking out the guesswork, but, like you mentioned, that's not to ignore the actual performance advantages. When it comes to coding, when it comes to health, it's a really interesting one. There are new benchmarks showing that it's way better at answering health-related queries, and you know a lot of people have been turning to ChatGPT to ask things like, hey, I have these symptoms, or hey, what treatment plan should I take? And now it can better assess those. But altogether it is the most capable model that they've released. It is smarter and it is faster.

0:49:49 - Mikah Sargent
Understood. Another thing that you talk about, and, again, I think, another important aspect: safety. And you say that there are improvements here in safety. What does safety mean when it's an AI company that makes a chatbot, a model? Because we're not talking about an automated robot that's suddenly picking up weapons, right? What is safety for GPT-5?

0:50:18 - Sabrina Ortiz
Hey, that's a great question and also I would like to caveat, before we even talk about it, that we have to take it with a grain of salt.

Right, it is ultimately an AI company claiming that it's making a model safer, but they also have an interest in developing these super-intelligent models that are prone to hallucinations and all sorts of things, so we have to take it with a grain of salt.

But, yes, when we're referring to AI models and safety, we're talking about things like hallucinations, which is just a term describing when they output information that sounds really plausible and sounds really real, for lack of a better word, and then it's false, just because they're trained to, you know, mimic human language and put out a plausible answer even if it's not actually true. With this model, however, one of the safety improvements is that it's more honest. So it'll try to tell you, hey, actually, for what you just asked me, I don't have the tools to give you the right answer, or I actually don't know the answer. Again, we'll see how it actually performs, but there were benchmarks that OpenAI released to support these claims, and they also made their model safety card available, which is a really long PDF, but you could take your time and read it and go through all the different benchmarks and evaluations to take a look yourself.

0:51:36 - Mikah Sargent
Another thing that ChatGPT, or rather OpenAI, announced are some personalities. For people who don't know, can you start by talking about what is a personality when it comes to this chatbot and what are the new personalities available?

0:51:54 - Sabrina Ortiz
I love that you mentioned this, because this is one of those announcements I feel like it's easy to ignore, because GPT-5 is like all the craze right now, but they also sprinkled in some funner, I guess, ChatGPT customization tools. Like now you have the ability to pick a color in chat, which, granted, is not super, you know, groundbreaking, but fun, and this is one of those too: the personalities. So if you've ever talked to ChatGPT, and especially with 4o, sometimes it was overly cheerful, and you know there's a ton of emojis and all that kind of thing. Now, with GPT-5, OpenAI shared that it's going to try to rein that excitement in a bit, and in addition to that, there's going to be an option in custom instructions. So that's just the feature which allows you to customize how you want the responses to be outputted.

You can select different personalities, and the personalities are just like the tone that ChatGPT speaks to you in. So there is an option for, like, if you like more sarcasm and you don't want the overly hyped, excited chatbot that's just going to be like, yes, thank you so much for your question, great analysis, here's the response. If you'd rather it be more cut and dry, or if you'd rather it be more polite, you could choose, from the four different personalities, the one that matches what you're more interested in.

0:53:14 - Mikah Sargent
And that is very much a consumer feature, right, when it comes to you and I making use of this. I'm kind of curious about this announcement, because one of the fascinating things to me about OpenAI's announcements is they do feel very laid back, kind of, let me show you what this can do. When they showed the agent, they were sitting there having things go wrong and sort of laughing through that, and I really liked that authenticity. But it feels like in these videos we often have a focus on the consumer. A big part of OpenAI's business, though, right, is the API, being able for these different companies to use these tools. Is there anything with the new model that OpenAI talked about that's for developers, for the commercial aspect?

0:54:13 - Sabrina Ortiz
Totally, yes, that's such a great point. They have so many enterprise customers and also developers, so they also have to take care of that audience, and they did: GPT-5 is available in the API, and developers can take advantage of it because it is the most capable coding model. Like, I don't know if you tuned into the live stream and saw the live demo.

I actually had the opportunity, in a press briefing before the model even came out, to see a demo of all the new coding capabilities, and it's truly outstanding and amazing how fast it can build a website, a functional web app, from a natural language prompt, something that would take so long to do manually. I'm pretty familiar with that; I recently had to go through building my own web app using JavaScript and CSS and all these different things, and it's just such a pain, and it does it so quickly and so efficiently.

So, again, because it's less prone to hallucinations, because of its more accurate responses, and because there are different tiers that you could select for better pricing, developers definitely can take the best advantage of GPT-5. And actually OpenAI said that for developers, GPT-5 is the best option, so it is available for them too in the API, and there are different versions and selections they can make to make sure it fits their criteria best for whatever they're working on. So they're definitely benefiting from the release too.

0:55:53 - Mikah Sargent
Lastly, I think I'd love to know, as you were watching the announcement, anything that you think we should know about with this that we may not have talked about yet, or even if it wasn't something we talked about, just something that stuck out for you, that you're looking forward to.

0:56:09 - Sabrina Ortiz
Yeah, well, first, I'm actually pretty stoked about this GPT-5 release, because, again, I think it solves a very practical issue. I think when a lot of people are coming to a new AI tool, whether it's a chatbot or whatever it may be, one interaction where it works not to the level they're expecting will put them off forever, right? Or it'll be enough to be like, actually, I don't trust this thing. Now, with this option of it automatically selecting the best model, you'll be able to experience higher-quality responses, which I think will get more people to use AI. Not that I'm necessarily the biggest proponent of, like, everybody must use AI, but I think there are some really good practical applications for people's everyday workflows, and I hope that people are more open to exploring it now that they will be getting higher-quality responses from the get-go, combining speed and quality. So that's what I'm really looking forward to: I think more people will be adopting it, and I'm really excited to see what people come up with, right? Like more people using it, more different applications and use cases. I'm excited to see that.

And then, from my end, something that I would just like to highlight from all the releases that also is easy to just get swept under the rug is that they made advanced voice mode available to free users too, and that's just their voice assistant.

That is, you know, super conversational and can pull information and deliver it in a very casual way, like you're talking to a really smart friend, and I personally use that feature all the time. It's one of my favorite features in ChatGPT, but I've been able to take advantage of it because I'm a ChatGPT Plus subscriber. Now free users get it too, granted not with the same limits, but I thought that was really cool. They're replacing the standard voice mode, which I think is something they should have done so long ago, because it's kind of obsolete compared to everything else that they could, you know, provide users. So, yeah, that was one of those features where I was like, wait, this is really cool, and I actually think I might write a breakout story on it because I'm like, this is sick. This is my favorite feature. Now everybody can use it.

0:58:25 - Mikah Sargent
So, yeah, now other people will know. When you're talking, you're like, no, I talked to it. Now they can actually see how it works.

0:58:34 - Sabrina Ortiz
Exactly. Everybody's going to be able to experience what my family and friends are like. Are you talking to ChatGPT again? I'm like, yes, I am. Sorry.

0:58:43 - Mikah Sargent
That's wonderful. I want to thank you so much for taking the time to join us today. Of course, folks can head over to ZDNET.com to check out your prolific coverage of all of the AI news. If people would like to follow along with the work that you're doing, is there anywhere they should go to keep up?

0:59:00 - Sabrina Ortiz
Yeah, totally. Actually, my Instagram is probably where I post the most up-to-date of my everyday coverage, and that's just at Sabrina, with an extra A at the end, dot Ortiz. Also, you can follow me on LinkedIn, Twitter, all the things. Well, I guess X. I share all my content on there too, but yeah.

0:59:19 - Mikah Sargent
Thank you so much for your time today. We appreciate it.

0:59:21 - Sabrina Ortiz
Thank you. I hope you have some fun trying out the tools too.

0:59:25 - Mikah Sargent
Thanks All right, everybody.

That brings us to the end of this episode of Tech News Weekly. Of course, our show publishes every Thursday at twit.tv/tnw. That's where you can go to subscribe to the show in audio and video formats. Now is the time where I remind you about Club Twit. At twit.tv/clubtwit, we start you off with a 14-day free trial. Afterwards, $10 a month or $120 a year gets you access to every single one of our shows ad-free, just the content. You also gain access to our Twit+ feeds, including our coverage of news events, like this morning's coverage from Leo on the OpenAI ChatGPT announcement. You also gain access to little bits and clips that we have and our Club Twit shows, like my Crafting Corner. I'm working on doing a D&D one-shot adventure coming up, so if you like Dungeons & Dragons, now's also the time to join. We'd love to have you there. And you gain access to our Discord, which is a fun place to go to chat with your fellow Club Twit members and those of us here at Twit. My co-host on iOS Today, Rosemary Orchard, is always in the Discord sharing great tips and tricks and answering people's questions. It's a lot of fun and we love having you there, so I look forward to welcoming you to the club at twit.tv/clubtwit.

If you'd like to follow me online, I'm at Mikah Sargent on many a social media network, or you can head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A.coffee, where I've got links to the places I'm most active online. Be sure to check out my other shows, a couple of which published today: iOS Today and Hands-On Tech, or, excuse me, Hands-On Apple. Hands-On Tech publishes every Sunday. Thank you so much for being here. I'll be back again next week for another episode of Tech News Weekly. Bye-bye.

1:01:20 - Leo Laporte
The tech world moves fast and you need to keep up for your business, for your life. The best way to do that twit.tv. On this Week in Tech. I bring together tech's best and brightest minds to help you understand what just happened and prepare for what's happening next. It's your first podcast of the week and the last word in tech. Cybersecurity experts know they can't miss a minute of Security Now every week with Steve Gibson. What you don't know could really hurt your business, but there's nothing Steve Gibson doesn't know. Tune in Security Now every Wednesday.

Every Thursday, industry expert Mikah Sargent brings you interviews with tech journalists who make or break the top stories of the week on Tech News Weekly. And if you use Apple products, you won't want to miss the premier Apple podcast, now in its 20th year, MacBreak Weekly. Then there's Paul Thurrott and Richard Campbell. They are the best-connected journalists covering Microsoft, and every week they bring you their insight and wit on Windows Weekly. Build your tech intelligence week after week with the best in the business. Your seat at tech's most entertaining and informative table is waiting at twit.tv. Subscribe now.
