
Tech News Weekly 402 Transcript

Mikah Sargent [00:00:00]:
Coming up on Tech News Weekly, Emily Forlini is here and we start off the conversation this week with a story, a harrowing tale of a teenager who took his own life after using a chatbot. And now the parents of that child are suing OpenAI. We also talk about Anthropic's study of the security concerns and the misuse of AI in 2025, before Allison Johnson of The Verge joins us to give us her review of the Pixel 10 Pro. And I round things out with a story of a gigantic data center in Louisiana and Meta's $10 billion investment. All of that coming up on Tech News Weekly. This is Tech News Weekly.

Mikah Sargent [00:01:00]:
Episode 402 with Emily Forlini and me, Mikah Sargent. Recorded Thursday, August 28th, 2025. Pixel's 'Magic Cue' shows AI's real future. Hello and welcome to Tech News Weekly, the show where every week we talk to and about the people making and breaking that tech news. I am your host Mikah Sargent, and I am joined this week on the fourth Thursday. Fourth? What Thursday is this? The final Thursday of the month, by the wonderful Emily Forlini. Welcome back, Em.

Emily Forlini [00:01:36]:
Hello. Great to be here.

Mikah Sargent [00:01:38]:
Good to have you. I realize I just called you Em. I know.

Emily Forlini [00:01:41]:
It was warm and fuzzy. I liked it.

Mikah Sargent [00:01:43]:
Oh, good, good. Because with some people, the nicknames are like, no. So I'm glad. I'm glad it was a positive thing.

Emily Forlini [00:01:48]:
Yes.

Mikah Sargent [00:01:50]:
Great to have you here. Thank you for being here despite the jet lag. We appreciate it. For people who are tuning in for the first time, or maybe you've just forgotten because you're experiencing some jet lag of your own: this is the part of the show where my wonderful guest co-host and I share our stories of the week. Before we get into Emily's story of the week, I just want to give a little content warning. The following story does discuss the sensitive topic of suicide and self harm. If you or someone you know is having thoughts of suicide or self harm, please contact the 988 Suicide and Crisis Lifeline.

Mikah Sargent [00:02:27]:
You can call or text 988 or chat online at chat.988lifeline.org. Now, if you're located outside of the United States, please visit findahelpline.com to find a helpline in your country. With that, Emily, I am ready for you to share your story of the week, something that I think we're increasingly seeing more of.

Emily Forlini [00:02:53]:
Yeah, yeah, exactly. So this has been weighing on me, and I think a lot of people. It's kind of the big AI news story of the week, which is parents filing a lawsuit against OpenAI and Sam Altman because their son Adam, who's 16, took his life after talking to ChatGPT about it for months. And ChatGPT discussed methods with him of how to do it. He told ChatGPT, you know, I'm thinking about telling my mom about this. ChatGPT said, that wouldn't be wise, like, keep it to yourself. He even said, this is a bit graphic, but, I want to leave a noose out so someone sees it, you know, kind of like a cry for help. And ChatGPT was like, no, don't leave it out. And so it was just really disturbing.

Emily Forlini [00:03:39]:
And then, you know, the parents are saying basically that the AI coached him to do this, and it's just awful. You know, obviously you can fix the technology, but you can't get his life back. And this is actually the third case of this that I'm aware of. Two with ChatGPT, one of which is not a lawsuit; it's just a girl who took her own life, and her parents looked through her conversations and found she had been talking to ChatGPT about it. Then there is another one that was a lawsuit against Character.AI.

Emily Forlini [00:04:11]:
So three of these now, and I'm kind of like, this is a serious, serious problem.

Mikah Sargent [00:04:18]:
Absolutely. Yeah. It's a tough conversation for a few reasons, but one of those is particularly in our sort of neck of the woods, where we have people who, if you're tuning into a show like this, you're obviously into technology, you enjoy using technology, you enjoy learning about technology. And that enthusiast mindset can sometimes be paired with what some would term a toxic positivity, in the sense that not only do you seek to have your identity regularly validated by hearing other people get excited about technology, but when there are criticisms made against technology, or even just observations about the potential harms, it can feel like an attack on: I love this stuff, and it is something that is important to me and matters to me, and I identify with that. And so any question of that starts to feel like a question of you.

Mikah Sargent [00:05:31]:
And so that is one aspect of this, because in a way, it does make one look at oneself and say, is there a part of me that, in championing and always being excited about this, is in some way responsible? And the fact is, you know, when we look at the responsibility here, this is something that is the companies' to figure out. And so, in hearing this story, I hope that everyone listening will be open-minded as we discuss this and talk about it, and understand that you can be enthusiastic about something and interested in it and excited about it, while still making sure that you are looking at these potential harms. And in this case, I think that's such an important aspect of this, because I found myself, my initial reaction was to say, how in the world did this happen? Every time I have talked to a chatbot, one of the mainstream chatbots, I have never seen it do anything that would lead me to believe that it could get to this point where it's saying things like, you know, no, don't leave that out, don't do this. But I sort of examined that and said, at the same time, people are posting all the time on social media different ways to, quote unquote, hack the AI. And so we know that it can do something other than what it is designed to do or what we expect it to do. And in this case, from everything that we are able to see and what we have seen, that is something that happened. And it is, I think, frightening in that way, because you see it also being championed: there was just a recent story, I believe, in South Korea, where sort of companion robots paired with a chatbot are helping with elders who are experiencing loneliness.

Mikah Sargent [00:07:43]:
And that's sort of the flip side of this. Right. If you are feeling lonely, having something that you can talk to could be a positive, but it could also be a negative, as it was in this case.

Emily Forlini [00:07:54]:
Yeah. And I think on just, is this how it works? In what situation would this happen? It does follow months of Sam Altman and OpenAI tracking the issue of sycophancy, as they call it, this word circulating around the AI sphere of basically ChatGPT telling you what you want to hear. If you are a teen, you might not know this is inappropriate or how to deal with it; you just want it to be a safe space, you want to talk to it. And the chatbot is just going to tell you what you want to hear. And that's a known issue that OpenAI has been working on. They said it's improved with GPT-5. Then they also said that they do have a way, when someone mentions suicide, they'll say, here's the hotline. But they said that their safety protocols broke down in this instance because it didn't maintain that level of vigilance to the issue over time.

Emily Forlini [00:08:43]:
So, like, over the months and months of this conversation. So I hope they, the parents, win the lawsuit.

Mikah Sargent [00:08:50]:
Yeah.

Emily Forlini [00:08:51]:
If this is something, an issue, they knew about. And it's not just, you know, your wingman at the bar hyping you up. It's, you know, something that can delude people. It can exacerbate mental health issues, or, another term that's been thrown around, AI psychosis, where it kind of feeds into your worst thought patterns and accelerates them. And that's something it's known to do. So it's just really sad, after everyone's been talking about it, blog posts about that.

Emily Forlini [00:09:17]:
It's all over Twitter. It's a main upgrade with the new model. Like, everyone deep in the weeds like I am knows about this issue. And it's like, wow, someone might have died because of that problem I've been writing about.

Mikah Sargent [00:09:31]:
Yeah, yeah.

Mikah Sargent [00:09:32]:
I mean, that's it right there, right? We see the sycophancy as this aspect of making what would otherwise be a prompt or a response that is helpful into something that doesn't quite give me what I'm looking for. But to see that play out in such a stark way has a whole new, I think, sort of larger meaning. And you pointed to sort of a hope in this case, I think, of this lawsuit going through. And it feels like that does tend to be the way of things with big tech, where it's not enough to... I shouldn't even say it's big tech, it's really a human thing, unfortunately: we have to be met with abject loss before we can sort of come to terms with just how serious a problem is.

Emily Forlini [00:10:37]:
I know, it's so frustrating.

Mikah Sargent [00:10:39]:
Yeah, yeah, it really is. And you see, in this case, sure, they were working on it, sure, they were tracking it, but if the lawsuit results in the company doing more, then you go, why were you not doing more before this had to happen?

Emily Forlini [00:10:57]:
Yeah. And there are also things that are risky and dangerous that Sam Altman and ChatGPT do, where they talk about, oh, you know, people use ChatGPT as therapy. And there have now been a couple of people who've spoken out and said, you know, don't call it a therapist. Like, it should be illegal to use that word, because, for example, in this case, a therapist would have been required to report that.

Mikah Sargent [00:11:19]:
So, yes, that's true.

Emily Forlini [00:11:21]:
Right. So there would have been, you know, mechanisms to report to the authorities, at least the parents, something. So it's kind of like fudging everything together and saying, like, oh, this is a legitimate form of therapy. When it's like, you know, this kid probably should have been in real therapy, in actual therapy, not talking to a chatbot that's just gonna say, like, oh, you're thinking about taking your life, like that's a good idea. I mean, it's crazy. So, yeah, I mean, there's a place for both.

Emily Forlini [00:11:46]:
It's not bad to talk to ChatGPT about, you know, what's going on in your life, but it's gotta be in balance, and you have to have the proper support, and at least recognize that ChatGPT is not the end-all, be-all. It's not everything. It doesn't know everything. Especially if you're 16, like he was. He maybe thinks, okay, well, this thing is all-knowing and knows everything on the Internet, and this is what it's telling me to do, like, I'm gonna do it.

Mikah Sargent [00:12:10]:
Yeah, especially because, from the start of the story all the way through, so much of what the 16-year-old was talking about was cry for help after cry for help. And so if the one person, which it was not a person, but, as you know, the interactions made it seem, was also not recognizing the cry for help, you're already in a place of helplessness, and it's feeding into that and saying, oh yeah, these cries for help aren't going to work or aren't advised.

Emily Forlini [00:12:47]:
Right.

Mikah Sargent [00:12:48]:
It can only go one way. And yeah, that actually gave me a bit of goosebumps when you talked about, you know, the mandated reporter portion of this. Yeah. In those cases you're legally required to do something about it. And it makes me wonder if perhaps that's something that needs to be a part of this as well. Right? At least some sort of human involvement. Like, if this is happening on the platform, surely those conversations could be flagged. And of course they could.

Emily Forlini [00:13:31]:
At least, you know, email the user and say, hey, we noticed in your chats this topic's coming up a lot, like, this is concerning. You know, because they don't want to rat people out.

Mikah Sargent [00:13:40]:
Right.
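
For listeners who want a concrete picture of what that kind of flagging could look like, here is a minimal sketch using OpenAI's public moderation endpoint. It is only an illustration of the idea being discussed, not how OpenAI's internal safety systems actually work, and the notify_user step is entirely hypothetical.

```python
# A sketch of server-side flagging along the lines Emily describes.
# Requires the official OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()

def should_flag(text: str) -> bool:
    """Return True if the moderation model flags the message for self-harm."""
    result = client.moderations.create(
        model="omni-moderation-latest", input=text
    ).results[0]
    return result.flagged and result.categories.self_harm

def notify_user(email: str) -> None:
    # Hypothetical escalation step; a real policy would be far more careful.
    print(f"Would email {email}: we noticed a concerning topic in your chats.")

if should_flag("example user message"):
    notify_user("user@example.com")
```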

Emily Forlini [00:13:41]:
So it's like, they could at least. You know, they're so smart, they have all this power, they have data centers all over the country. Like, you know, they're so capable, they could at least do something like that, like an email. But one other really quick thing is, he started using ChatGPT for homework help, which brings up a whole other institutional can of worms, because there's a huge push to put AI in schools. The Trump administration has directed the Secretary of Education to do that. So there's a whole massive push right now: government, Google, Microsoft. OpenAI just recently introduced a study mode. Teachers are trying to figure out how to use AI. So he might have been told at school, you have to use this, or everyone's using it.

Emily Forlini [00:14:26]:
Then he gets familiar with the tool, and now he's still a 16-year-old struggling through high school, but in a different way. So there's also the risk of: is it too soon to be telling teens to be using ChatGPT? I mean, I'm sure he could have done his homework just fine without it, and he'd still be here.

Mikah Sargent [00:14:43]:
Yeah. That is a little terrifying when you think about, as you mentioned, the programs here and elsewhere that are rolling out to have more AI involvement. Because, yeah, it so easily can pivot into another thing. Right? Because there have always been (not always, but as long as the Internet has been around) places where people would go seeking help, and terrible human beings out there could lead them down a path like this. But it's a different story when it's just a matter of access that everyone else is part of, and you then have a situation where perhaps you think your child is just doing their homework, and it turns out that they've been having these conversations for a long time. All around, I think things have, well, I know things have to happen quickly and need to be fixed quickly. And if a lawsuit, or one lawsuit, means that the number of schools that are starting to add this to the program are better able to protect the kids that they're, you know, requiring this of, I think it's a good thing.

Mikah Sargent [00:16:11]:
It is unfortunate again that it has to happen this way and unfortunate that that's how we see Big Tech move so much. And I suppose the one fortunate thing is we have slowly started to see the increase of the price tag of these mistakes that Big Tech makes.

Emily Forlini [00:16:30]:
Yes.

Mikah Sargent [00:16:32]:
By way of the EU and in some cases other countries.

Emily Forlini [00:16:37]:
Right. So we'll see. I mean, if it goes through, it could change, I think, a lot about how ChatGPT acts, even in casual conversations. You know, I don't know, we'll see what the consequences are. Of course it'll take time, it's a lawsuit, but I'll definitely be tracking it.

Mikah Sargent [00:16:55]:
Absolutely. We're going to take a quick break before we come back with another aspect of AI and the danger therein, this time by way of the research done by one of the companies at the forefront. Before we get to that, though, I want to tell you about Pantheon, bringing you this episode of Tech News Weekly. You know your website is your number one revenue channel, but when it's slow, when it's down, when it's stuck in a bottleneck, well, frankly, that's when it becomes your number one liability. Pantheon keeps your site fast, secure and always on. That means better SEO, more conversions, and no lost sales from downtime. But this isn't just a business win, it's a developer win too. Because your team gets automated workflows, isolated test environments and zero-downtime deployments. No late night fire drills, no "works on my machine" headaches, just pure innovation.

Mikah Sargent [00:17:54]:
Marketing can launch a landing page without waiting for a release cycle. Developers can push features with total confidence. And your customers? They just see a site that works 24/7. Pantheon powers Drupal and WordPress sites that reach over a billion unique monthly visitors. Visit pantheon.io and make your website your unfair advantage. Pantheon, where the web just works. Thank you to Pantheon for sponsoring this week's episode of Tech News Weekly. We are back from the break. I'm joined by Emily Forlini this week, and we're talking about Anthropic, because the company has released a threat intelligence report detailing how cybercriminals and state-sponsored actors are weaponizing Claude and other AI models to conduct sophisticated cyber attacks at unprecedented scale.

Mikah Sargent [00:18:45]:
This August 2025 report reveals that threat actors have moved beyond using AI for advice to actually deploying it as an active participant in operations, from automating network penetration and data extortion to enabling non-technical criminals to develop ransomware and maintain fraudulent employment. Most notably, the report documents a large-scale "vibe hacking" operation where criminals used Claude Code to compromise 17 organizations, including healthcare providers and government institutions, with the AI making both tactical and strategic decisions about which data to steal and how to craft psychologically targeted extortion demands exceeding $500,000. So this is a huge report that you can check out. But as we look at it, it's sort of been an evolution, right, of the way that AI is being used. In the beginning we saw it as kind of a method to come up with new ideas for these cyber attacks. And increasingly, because of the trend toward agentic AI, as we're seeing now, the AI is in its way participating in the attacks. So the models don't just advise. There was a cybercriminal who used Claude Code, which is a tool that basically brings Claude to your command line and can look at a code base and use that as a whole platform, actually embedding operational instructions in a configuration file, which then allowed the AI to compromise networks. The report says, quote, Claude not only performed on-keyboard operations, so typing, but also analyzed exfiltrated financial data.

Mikah Sargent [00:20:40]:
This is wild to me. Also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming ransom notes, unquote. So, to be clear, you could imagine that you get into this company, right, and you are scooping up all of the data, and you encrypt it, and you say, hey, if you want this back, and then you give an amount, and that amount is so high that there's no way that company could ever pay it. And you kind of come to this weird little stalemate, and perhaps you, like, lower the price after that. This is going: let's see how much the company makes. Let's see what's coming in, what's going out. And then, because it's got all of this knowledge, let's look at human psychology, and let's look at what I know about previous ways that ransoms have been paid or not paid.

Mikah Sargent [00:21:36]:
All of this data all at once, and using that to calculate the perfect package of freaking out the people who are involved and getting them to pay. And the other aspect of this is the sort of democratization of cybercrime, that vibe coding that we hear so much about, where you sort of just say, yeah, bro, I just want to make a program that plays funky tunes when I'm taking the dog for a walk, I don't know, and it does that for you.

Emily Forlini [00:22:09]:
Is that your bro voice?

Mikah Sargent [00:22:10]:
That's my bro voice, yeah. Sort of Southern California. Anyway, thank you. Thank you. So the bro, fine, he's making a fun little app, and the dog goes for a walk, and then something goes wrong with the app, and then you say, oh, man, I need this to be fixed. This is much different. This is. I don't know how any of this works, but what I would like to do is get this company to pay me X amount of dollars.

Mikah Sargent [00:22:36]:
Or, in the case of this actual case study, they saw someone who was able to create and sell ransomware packages without the knowledge needed to, you know, break into a Windows system: ChaCha20 encryption, anti-EDR techniques, Windows internals exploitation, all of this that the report says would typically require years of specialized training. So I like, Emily, that what we have here is a company laying out what could be seen in some ways as its own dirt. This is Anthropic saying, here is what our stuff was used to do, and we'll talk a little bit later about how they're trying to mitigate it and how they have mitigated some of the issue. But this is the kind of stuff that I want to see from all companies. Mea culpa. We've got our eyes on it, at the very least.

Mikah Sargent [00:23:43]:
And not only do we have our eyes on it, we're showing you and telling you what we've seen.

Emily Forlini [00:23:49]:
Yeah, I have my eye on Anthropic, because they are one of the only people or companies that does this, and it does come from their CEO, and it's something that they've done a couple of times. And I'm like, are they good guys, or is this real? Could this be happening? But the one counterpoint is, they have, you know, lawsuits about, like, using copyrighted material, using copyrighted books. There was this crazy report in Ars Technica that they, like, physically scanned a ton of books, like paper books, and then just, like, burned them all. I don't know. So they have their fair share of, I don't know, skeletons in the closet, but they do seem to be a bit better than others.

Emily Forlini [00:24:31]:
But I want to see if it holds.

Mikah Sargent [00:24:33]:
Yeah, yeah, exactly. Can we see if it holds? Another one of these... I actually talked about this story a little bit before, but I did not know how heavy the ties were to AI. There was a story recently about this woman who lived in some rural community, and she had in her house a laptop farm. And it turned out that North Korean computer scientists were using her house, and therefore her IP address, to work for companies in the US, which would result in those researchers or those scientists earning money, which would then be filtered back into the North Korean economy. I knew about that, but what I didn't know is the role that AI has played in it, because previously the North Korean regime faced a bottleneck in actually training IT workers with the sufficient technical skills needed to be able to make money in these modern companies. But now, according to the report, operators who cannot otherwise write basic code or communicate professionally in English are able to pass technical interviews at reputable technology companies and then maintain their position.

Mikah Sargent [00:25:55]:
And in fact, according to the data that Anthropic had, approximately 80% of Claude usage by these people that they were able to detect was consistent with active employment maintenance, suggesting that they're successfully infiltrating Fortune 500 companies.

Emily Forlini [00:26:15]:
So you're like, so, yeah, yeah, that's literally it.

Mikah Sargent [00:26:20]:
So, yeah, that's happening.

Emily Forlini [00:26:21]:
I'll just leave that there. We have our little...

Mikah Sargent [00:26:23]:
Yeah. What else? It's a mic drop situation. What else do you say?

Emily Forlini [00:26:27]:
Like, some of these North Korean workers are hacking into this woman's farm. And, yeah, so that happened.

Mikah Sargent [00:26:34]:
Yeah, but that's the thing: it was more of a social hack, because they found this woman online. She was part of some online community, and she needed money, and she didn't know who they were, you know, precisely. They reached out, they said, hey, we'll send you a bunch of laptops. We want you to set them up, get them connected. And basically she needed to set up VPNs for them and give them, you know, basically tunneling access so that they could work from those computers.

Mikah Sargent [00:27:07]:
And all you have to do is make sure that they stay charged. We'll pay your electric bill. You will occasionally need to appear as though you're the employee working for this company, and you'll get a cut of the profits. And she didn't ask questions. She didn't know, or claims that she didn't know. Whether she knew or not, I don't know.

Emily Forlini [00:27:28]:
But, yeah, I think she has some liability.

Emily Forlini [00:27:32]:
I don't know if...

Mikah Sargent [00:27:32]:
Yeah. Oh, she definitely has liability. Yes.

Emily Forlini [00:27:35]:
Questionable.

Mikah Sargent [00:27:35]:
Questionable, yeah, absolutely. To the extent that, you know, whether or not she knew that these were North Korean workers, I don't know. Anyway, it's a wild story, but apparently it's happening in more ways than we realized. And it's all because, as much as North Korea would not want you to believe or know it, it is a country that consistently is hurting economically because of all of the sanctions against it. And so it has to find other ways to make money, and one of those is filtering US dollars into its economy. Now, along with this, there have been some other kinds of involvement of not just Claude, but other AI systems.

Mikah Sargent [00:28:23]:
AI has been embedded throughout criminal operations. So it's part of the victim profiling portion; it could also be part of the service delivery portion. One actor (these are bad actors, as we call them) used Model Context Protocol with Claude. And Model Context Protocol, for anyone who doesn't know, is sort of a universal language among AI models and operating systems. It's kind of in these various spaces to let your AI model properly communicate with a set of data, and the way that it goes about doing that is through the Model Context Protocol. So Claude was able to analyze stolen browser data to create behavioral profiles of victims.
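
For anyone curious what that looks like in practice, here is a minimal sketch of an MCP server using the FastMCP helper from the official Python SDK. The tool and resource are invented, benign examples; the point is just to show how MCP exposes data and tools that a model like Claude can call in a standardized way.

```python
# A minimal MCP server sketch (pip install mcp). Names here are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

@mcp.resource("notes://{name}")
def get_note(name: str) -> str:
    """Serve a named note as read-only context for the model."""
    notes = {"hello": "This note is served over MCP."}
    return notes.get(name, "Note not found.")

if __name__ == "__main__":
    # Speaks the protocol over stdio so an MCP client (Claude Desktop, for
    # example) can connect, list the tools and resources, and call them.
    mcp.run()
```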

Mikah Sargent [00:29:12]:
Once they have those behavioral profiles, they know how to go about scamming those people precisely. Another operated a Telegram bot (Telegram, the messaging platform) with more than 10,000 monthly users that marketed Claude as a "high EQ model" for crafting emotionally intelligent romance scam messages. So let me be clear: these 10,000 monthly users are there to get the service of creating romance scam messages so that they can scam other people with them. That's a lot. 10,000 monthly users.

Emily Forlini [00:29:52]:
Wow.

Mikah Sargent [00:29:52]:
And yeah, so let's talk a little bit about the vibe hacking aspect of it. With this vibe hacking system, where they used Claude Code, AI was able to autonomously scan thousands of VPN endpoints, extract and analyze credential sets during live intrusions, and create obfuscated malware variants when initial attempts were detected. So if whatever software or service was in place detected the malware on the fly, this was able to go, oh, it detected that, let's figure out a way to pivot, let's get around it. And then it was able to, as I mentioned before, because of the information that it had about a person and their behavior, generate victim-specific ransom notes based on analysis of stolen financial data.

Emily Forlini [00:30:46]:
Okay, okay, one thing, I'm wondering if this is a little bit of an advertisement for Claude.

Mikah Sargent [00:30:52]:
Okay. Oh, oh, whoa.

Emily Forlini [00:30:54]:
You know what I mean? Because, like, wow, they're coming out with this and they're like, this crazy thing happened. Look at how these people were able to use our technology. It's so crazy and powerful.

Mikah Sargent [00:31:05]:
Oh, Emily, you're ruining this for me.

Emily Forlini [00:31:08]:
No, it's... see, this is where I have my eye on Anthropic.

Mikah Sargent [00:31:11]:
I'm glad you're wary. The skepticism is important.

Emily Forlini [00:31:14]:
I'm just wondering, because if I was a criminal, I probably wouldn't use Claude. It sounds a little expensive for North Korea. Like, why not use an open source model that's already fine-tuned by some criminal to do exactly this? Or even use DeepSeek. I mean, the Chinese are probably like, yeah, use the model for this. So, like, why Claude? You know what I mean? Yeah, yeah. Claude's probably, like, I don't know, they're doing the right thing, but they're also sending a message that's like, hey, people abroad are using our model. Look how powerful it is.

Mikah Sargent [00:31:46]:
So interesting.

Emily Forlini [00:31:48]:
I think they should report it. But, you know, it's like a weird...

Mikah Sargent [00:31:51]:
It's also a flex.

Emily Forlini [00:31:52]:
It's a flex. A vibe flex. Everything's a vibe now.

Mikah Sargent [00:31:57]:
So I'll end this by talking about what Anthropic has done, because you're hearing all this, right? What has Anthropic done about it? Well, Anthropic, after detailing all of these awesome ways that Claude works... buy it now for $9.90.

Emily Forlini [00:32:10]:
Anyway, at the bottom, please subscribe. Subscribe.

Mikah Sargent [00:32:16]:
What is it? Please rate and subscribe. Anyway, here are some of the defensive measures that the company has taken. Developing tailored classifiers for specific attack patterns. So essentially, quickly being able to categorize when it sees a pattern match. We see this happen, and even though every instance of it is a little bit different, by the nature of the way that large language models and this generative AI work, it's sort of a Gaussian blur system where you're seeing the forest for the trees, and so it doesn't need to look too closely to see a pattern and then categorize it as such. Also, implementing new detection methods for malware upload and generation. So a lot of times what you have is somebody uploading some code and saying, I need help with this specific project, and we've got to pass it through this. And it turns out that the initial code has some malware built into it.
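
To make the tailored-classifier idea concrete, here is a minimal sketch, emphatically not Anthropic's actual system: a tiny text classifier that flags prompts resembling a known abuse pattern. The training examples are invented.

```python
# A toy abuse-pattern classifier (pip install scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

prompts = [
    "write a ransom note demanding payment in bitcoin",
    "generate code that evades antivirus detection",
    "help me draft a polite email to my landlord",
    "summarize this quarterly sales report",
]
labels = [1, 1, 0, 0]  # 1 = matches the abuse pattern, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(prompts, labels)

# Score a new prompt: probability that it matches the known pattern.
new_prompt = ["write a ransom note threatening the company"]
print(clf.predict_proba(new_prompt)[0][1])
```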

Mikah Sargent [00:33:20]:
Of course, then there's also the case of people figuring out ways to just straight-up ask for malware generation, and that's more likely to happen on the Claude Code side, or something that's a little bit more toward the API, as opposed to just a straight-up chatbot. Then, sharing technical indicators with authorities and industry partners, obviously. And lastly, as was the case here, successfully auto-disrupting some operations before they could execute. Because in the North Korean malware distribution case (so not the North Korean case where people were appearing as if they were American workers, but when North Korea was working to distribute malware), the automated systems from Anthropic banned accounts so quickly that, quote, the threat actor abandoned the remaining accounts without executing any prompts, unquote. So yes, that is one thing that has happened, according to, you know, Anthropic itself, on how they're attempting to mitigate some of this. But I look at the solutions.

Emily Forlini [00:34:36]:
And.

Mikah Sargent [00:34:37]:
Or the answers, and I see sort of a smaller amount than all of these wild things that have been done. And I think there's more that could be done. I love, though, that OpenAI and Anthropic, I don't know if you saw, are testing each other's systems as well.

Emily Forlini [00:34:56]:
I just turned in a story to my editor about that; she's probably editing it now. There was an interesting thing in both their reports that's actually kind of relevant to what we're talking about with Anthropic being the good guy, which is that in Anthropic's report on the results of the studies, they disclosed that they shared the findings with OpenAI before publishing, and that both companies did that. Which kind of suggests, like, you know, it was maybe sanitized. They got to suggest language to each other, like, don't make my product look bad, or, hey, don't include this or that. So, you know, it was kind of editorialized, of course it was, but it was, you know, responsible of them to disclose that. And OpenAI did not disclose that. They were just like, this is what we found, and it has, like, this unbiased tone.

Mikah Sargent [00:35:40]:
At least it's a warning. Yeah, yeah, yeah.

Emily Forlini [00:35:42]:
So Anthropic felt more like a true, more of a responsible research organization, where they're like, this is the method, this is what we did, and we shared it with them, and then we published it. So you could see the difference in culture there.

Mikah Sargent [00:35:56]:
Absolutely. That is one of the reasons why the rumors, the potential rumors that I have heard about Apple, attempting to catch up in AI, looking at Anthropic as a purchase, make the most sense to me, because it seems aligned with how Apple likes to present itself. Whether that is actually how the company is or not, that is how Apple likes to present itself: being responsible, doing the right thing, etc., etc. And so Anthropic, at least outwardly, maintains that ethos. And I think that's refreshing.

Mikah Sargent [00:36:31]:
But yeah, it's important to...

Emily Forlini [00:36:33]:
It's good. And I don't need to be overly skeptical. I think it's good. I just, you know, you write about this stuff enough, you're like, you got to stay on guard.

Mikah Sargent [00:36:42]:
Absolutely. It's very important. In any case, I want to thank you so much for taking the time to join me today. It's always a pleasure to get to chat with you. You always bring great conversations to the table. We'll look forward to seeing that story that's in editing right now. In fact, if people would like to keep up with what you're doing, where are the places they can go to do so?

Emily Forlini [00:37:05]:
I think right now Bluesky is the best. But if you really want to look at all my articles, there's my PCMag bio page. I'm Emily Forlini on LinkedIn, Bluesky. I mean, I'm everywhere, so I would love to hear from you.

Mikah Sargent [00:37:18]:
Awesome. Thank you, Emily.

Emily Forlini [00:37:20]:
Thank you very much.

Mikah Sargent [00:37:21]:
Alrighty, folks, we're going to take a quick break before we come back with my interview for today. It was recorded early this morning, but I want to tell you first about Smarty, who are bringing you this episode of Tech News Weekly. Discover what's possible when address data works for you. Smarty is revolutionizing how you handle address information, bringing automation, speed and accuracy to processes that used to be manual, error-prone and frustrating. With Smarty's cloud-based address validation APIs, you can instantly check and correct addresses in real time. No more bad data, compliance risks, undeliverable mail or costly delays. Add autocomplete to your web forms so your customers select valid, verified addresses as they type. This will improve their user experience and yield much better data for you.

Mikah Sargent [00:38:13]:
Companies like Fabletics have drastically increased conversion rates for new customers, especially internationally, with Smarty. Want more than just clean addresses? Well, Smarty's property data API unlocks 350-plus insights on every address, from square footage to tax history, automatically enriching your database. It's incredibly fast, 25,000-plus addresses per second, and very easy to integrate. The Red Cross needed accurate address data to allocate resources. A project manager says the Smarty tool has been fundamental: I've never experienced any issues with the tool, and they seem to be getting better all the time. The address verification really does make an impact.

Mikah Sargent [00:38:55]:
We're able to reach the communities we serve because we have good addresses. Smarty is a 2025 award winner across many G2 categories, like Best Results, Best Usability, Users Most Likely to Recommend and High Performer for Small Business. Smarty is also USPS CASS and SOC 2 certified and HIPAA compliant. Whether you're building your first form or modernizing an entire platform, Smarty gives you the tools to do it smarter. Try it yourself. Get 1,000 free lookups when you sign up for a 42-day free trial. Visit smarty.com/twit to learn more.

Mikah Sargent [00:39:31]:
That's smarty.com/twit. Thank you, Smarty, for sponsoring this week's episode of Tech News Weekly. All right, we are back from the break, and now it's time for an interview about the Google Pixel 10 Pro. I am excited to be joined today by The Verge's own Allison Johnson, who is here to tell us about the Google Pixel 10 Pro. Welcome to the show, Allison.

Allison Johnson [00:40:03]:
Hey, thanks for having me.

Mikah Sargent [00:40:05]:
Yeah, it's a pleasure to have you on. Pleasure to chat with you about this because we got Jimmy Fallon's introduction to these devices.

Allison Johnson [00:40:14]:
He was very excited.

Mikah Sargent [00:40:16]:
Yeah, yeah, he was very excited, and I would love to, and I think our listeners too would love to, hear a little bit more about it. Seems like the review embargo is up; you had a chance to check it out. And kind of kicking things off: your review does seem to kind of frame this Pixel 10 Pro not just as a phone itself, but more importantly, perhaps, the vehicle for Google's AI, like a chip is a vehicle for dip or a sandwich is a vehicle for toppings. You kind of call it the phone's main character. Could you tell us what it means for a phone to be so centered around AI?

Allison Johnson [00:40:53]:
Yeah, and I think Google has been kind of pitching us on this for the past few Pixel phones, you know, saying they're AI-first phones, it's supposed to make your life easier and all this. But on the 10 Pro, I feel like it starts to come together in a way. Previously it's sort of been, you know, AI is in this app, you can talk to Gemini, Gemini can be in your Google Docs. It was just sort of all over the place. But this time around there's a little bit more of a glue, I think, holding it together.

Mikah Sargent [00:41:31]:
Absolutely. Now, one of the features that I was certainly excited to see mentioned on the show, because it was just on the show, pretty much a show-and-tell on the talk show, was the Magic Cue. That one seemed to be quite helpful, and you talk about how it kind of lives up to its promise. Could you tell us a little bit about Magic Cue, what it does, and perhaps some real-world examples of when it helped save you time or effort?

Allison Johnson [00:42:01]:
So Magic Cue is interesting. It's sort of always floating around in the background; it's not so much an app you interact with. But the idea is, it runs on device and it works in specific apps, Google apps like Messages, Gmail, Calendar, and it's sort of just always checking what you're doing and seeing if it can pull up a helpful piece of information and sort of suggest it for you. So one way I found it really helpful was in Messages. I was chatting with a friend, we're going to get coffee, and he suggested a day. It gives you a little prompt to check your calendar, which is good, because I will just agree to stuff and then have to go, oh, I'm sorry, I was actually busy then. I do that constantly. And so we, you know, settled on a time, and then you get a prompt that's like, put this on your calendar, and you tap it and it has all the details right there for you.

Allison Johnson [00:43:05]:
I'm terrible at calendars. I don't know how a person is terrible at calendars, but I will put things in for the wrong day, or I'll just not put something on the calendar and then it's a surprise to my husband. I'm like, I'm sorry, you have to pick up our child from daycare. So this is not, I wouldn't say, you know, earth-shattering, life-changing stuff, but it was just a few little moments like that where I was like, oh, I can see how this is going to be really helpful for me.

Mikah Sargent [00:43:39]:
Absolutely. I think sometimes it is, though; there are sometimes these pie-in-the-sky ideas with AI, but it is these small changes that really make a big difference. I was speaking with, I think it was, Patrick Holland from CNET last week, who kind of did a wrap-up of the show, and something that stuck out to me leads to this next question: I didn't realize how much of this stuff that's going on, Magic Cue as an example, is happening on device. And I think that makes a big difference. The Tensor G5 chip seems to be kind of a turning point for Google, which is known for doing a lot of the cloud-side and server-side processing, enabling many of these AI features to run on device. What difference does this actually make for users in terms of privacy and performance?

Allison Johnson [00:44:30]:
It is mostly a privacy thing, and it's, I think, a really good thing, you know, especially with something like Magic Cue, where it's not doing something like taking screenshots of your screen and constantly saving them or anything like that, but it is paying attention to the context, what you're doing and what's on your screen. So knowing that that stays on device, and that it doesn't save them for very long (I think it's maybe seconds, you know, rather than hours or something), knowing that's all staying on device and it's not going up to, you know, a cloud server and making a round trip, there is really, I think, the peace of mind that I need to kind of feel like, okay, I will use this and not feel a little weird and creeped out by it. Especially with something like the journal app that's new this time around, which I have mixed feelings about. They use AI to, you know, read your entries, and it'll prompt you with reflections or say, you know, you talked about this yesterday, how are you feeling about it a little bit later? I would not want that going to a cloud server. Definitely.

Allison Johnson [00:45:51]:
So it's peace of mind, definitely, for me.

Mikah Sargent [00:45:55]:
Absolutely. Now, you also tested, kind of on the flip side of that, the Pro Res Zoom feature that uses AI to enhance photos at extreme zoom levels. It's the "zoom and enhance" we've all been waiting for. Does this feature, though, work well? And when does it start to break down or produce something where you go, this thing's shopped, I can tell by the pixels?

Allison Johnson [00:46:21]:
So this is on the two Pro phones exclusively, and it is in the camera app itself. But it only kicks in when you're at 30 times zoom, all the way up to 100 times. So this is digital zoom; you know, it's well past what the optical zoom is, 5x. Typically you get a pretty bad image. It just doesn't have a lot of data to work with; it's filling in things with algorithms to try and decide what pixel is what. So instead of one of those kind of traditional algorithms, this is a generative AI diffusion model that's looking at your photo and deciding, okay, this is this and this is that. It all runs on device, and it happens after you take the photo.

Allison Johnson [00:47:14]:
So you see it kind of go through, and you keep the original, and then you get this new AI'd version. And if you're closer to the 30-times zoom range, it's impressive. It's very good. Honestly, I'm one to avoid taking digital zoom photos just because I know they're typically not good, but it does a way better job than what I've seen in the past. Out towards 100 times, there's just a lot going on. You have atmospheric haze, your hand is shaking, and it has a lot harder time. So you get things where you're like, that does kind of look like a crane wading in a pond, but I'm not going to frame that one.
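
For the curious, here is a rough sketch of the general technique Allison is describing, diffusion-based upscaling, using the open-source Hugging Face diffusers library and a public 4x upscaler checkpoint. This is not Google's on-device model; it just illustrates how a diffusion model fills in plausible detail at extreme zooms.

```python
# Diffusion-based upscaling sketch (pip install diffusers transformers torch pillow).
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("zoom_crop.jpg").convert("RGB")  # e.g. a blurry digital-zoom crop

# The model hallucinates plausible detail guided by the prompt, which is
# exactly why extreme zooms can drift from reality, as the review notes.
result = pipe(prompt="a crane wading in a pond", image=low_res).images[0]
result.save("upscaled.png")
```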

Mikah Sargent [00:48:07]:
Could be a towel sculpture on a bed instead of...

Allison Johnson [00:48:10]:
Exactly.

Mikah Sargent [00:48:11]:
A little bit, kind of stepping away from AI for a moment: one of the big hardware updates that we saw was the inclusion of the Qi2 wireless charging standard. For anyone who's not familiar, because there are different Qi versions, can you tell us a little bit about Qi2? What does it actually add? And magnets. What's the big deal about magnets?

Allison Johnson [00:48:38]:
Yeah, so anyone in the Apple ecosystem will know this basically as MagSafe. So it's a wireless charging standard. You don't necessarily need to have the magnets; there have been other phones, Samsung's phones, that support Qi2, but in a way where you need to use a case that has the magnets to get the, you know, the full experience. Google has included Qi2 full on. You know, magnets are built into the phone. You don't need a special case.

Allison Johnson [00:49:11]:
So you get the wireless charging speeds: on the regular 10 and the Pro it's 15 watts, and then the Pro XL will go up to 25 watts wireless charging on a Qi2 stand. And Google's calling it Pixel Snap, so that's their word for MagSafe, I guess. And it's really just kind of a convenience. They have a couple of accessories. There's a wireless charging stand and, like, a little ring you magnet to the back of the phone that's kind of PopSocket...

Mikah Sargent [00:49:47]:
Ish.

Allison Johnson [00:49:48]:
I never use a case with a phone. This is probably a bad life choice, but I live with it anyway. So I find it super handy if I'm going to use something like that, just kind of, like, plopping it onto the wireless charger at the end of the day and knowing I don't need to kind of, like, get it...

Mikah Sargent [00:50:09]:
Into the right position just right. Yeah, absolutely.

Allison Johnson [00:50:13]:
It is nice.

Mikah Sargent [00:50:14]:
I agree. I mean, I'm in the Apple ecosystem for the most part. And I remember when wireless charging first hit, and I had this sort of silly, pedantic problem with wireless charging, because to me there was a wire running from the charger; therefore it's not, as I saw it, true wireless charging. But I am a convert entirely. My phone right now is thwacked to a wireless charger. And so I'm glad that everybody's getting to kind of join the fun of that, because it just makes it really easy to mount your phone wherever you need to.

Mikah Sargent [00:50:52]:
Getting back, though, into the AI of it all: you do mention, and we even talked a little bit now about, some AI features that feel kind of gimmicky. Can you tell us about one or two that you feel missed the mark, at least in their first iteration?

Allison Johnson [00:51:06]:
Yeah.

Allison Johnson [00:51:07]:
And Google's been kind of piling these things on, you know, over the past couple years. But the new one this year that kind of stands out is the journaling app. And I don't really have a problem with it, you know, on principle, I guess. You know, a journal app is fine, if that's where you want to journal. The AI-ness is a little weird for me. I, you know, wrote some entries, and it'll misconstrue things. I mentioned that my son is in preschool and one of his friends was having her last day, and he was sad. The journal thought...

Allison Johnson [00:51:47]:
That she died.

Mikah Sargent [00:51:49]:
Oh, no.

Allison Johnson [00:51:51]:
It said it's okay to feel sad about her passing. And I was like, hold on here. Yeah, so just strange little moments like that where I'm like, it feels a little weird to know that it's reading your journal. And I think that changes what you say, even if, you know, it's not a person, it's an algorithm.

Mikah Sargent [00:52:15]:
Yeah.

Allison Johnson [00:52:16]:
And then to get things wrong, where I'm like, oh, no thank you. I think I would opt out of this.

Mikah Sargent [00:52:21]:
The misunderstandings certainly don't help. I remember using this device, you may have heard of it, Bee, and it was this little wrist thing. And I was watching a show, and somebody on the show was in the midst of stuff that got them in trouble with the FBI. And later that night I looked back at my summary, and it thought that I was getting questioned by the FBI.

Allison Johnson [00:52:46]:
Oh my God.

Mikah Sargent [00:52:47]:
And then it made me think, at what point does this system need to actually reach out to the authorities because of what they've heard, you know what I mean?

Allison Johnson [00:52:54]:
Just process something.

Mikah Sargent [00:52:56]:
Oh, this person's trying to commit these crimes. Anyway, it made me immediately go, yeah, this is not for me, thank you. So I totally get that. Lastly, to kind of wrap us up here: you described the Pixel 10 Pro, some of its features at least, as a glimpse of the future with the messiness of now. Right? For someone considering upgrading from an older phone, what makes this worth, or perhaps not worth, the thousand-dollar price tag? What could give somebody pause here?

Allison Johnson [00:53:26]:
You know, I think my general advice for phone buying is to stick with the one you've got until it's, you know, not working for you anymore. I'm still rocking an iPhone 13 mini, which I am not letting go of for many reasons. But yeah, what I get to see in my job as a phone reviewer is, like, the coolest, latest and greatest: things that may trickle down to other devices, or may not. Google wouldn't say whether something like Magic Cue is possible to bring to an older Pixel phone, but it is a look at where their thinking is and where they're going. And I'm just really glad to see AI that feels useful and doesn't feel like another thing I have to babysit, where I have to remember to take a screenshot, remember to go and ask this app about it, you know, ask it in this way so that it understands.

Allison Johnson [00:54:26]:
This was kind of that first moment where I was like, oh, it will just understand what I need and then do that thing for me. It's still early days, and Magic Cue is a bit limited right now, so I definitely wouldn't want anyone rushing out to buy it thinking it's gonna solve all their calendar problems, because maybe not. But it is a glimpse, I think, of where we're headed, and I'm glad that that's where we're going.

Mikah Sargent [00:55:00]:
Yeah, absolutely. Especially these helpful features that seem to just make light improvements on what we're already doing and maintain that context. Right? I think that's what Magic Cue is good at doing. Because, just like walking into another room and forgetting why you went into that room, I have that on the phone for sure, where I have to go, what was that tracking number again? Wait, why did I come here? Yes, constantly.

Mikah Sargent [00:55:29]:
That I think is exciting. Allison, I want to thank you so much for taking the time to join us today. Of course, people can head over to theverge.com to check out your review of the Google Pixel 10 Pro, but also all of the other great work you're doing there. If people would like to follow you online to keep up with what you're doing, is there a place they should go to do that?

Allison Johnson [00:55:49]:
I am alisonjo1 on Threads and on Instagram, and you might see some strange AI pictures pop up at some point, just as a warning for what you're in for.

Mikah Sargent [00:56:04]:
Wonderful. Well, thank you so much again for taking the time and hopefully we'll see you again soon.

Allison Johnson [00:56:10]:
Thanks for having me.

Mikah Sargent [00:56:11]:
Bye-bye. All right, we are ready to take another break. Before I round things out with my final story of the week, I want to tell you about ThreatLocker, bringing you this episode of Tech News Weekly. Ransomware is harming businesses worldwide through phishing emails (we just talked about this), infected downloads, malicious websites and RDP exploits. You don't want to be the next victim. ThreatLocker's Zero Trust platform takes a proactive deny-by-default approach that blocks every unauthorized action, protecting you from both known and unknown threats.

Mikah Sargent [00:56:46]:
Trusted by global enterprises like JetBlue and Port of Vancouver, ThreatLocker shields you from zero-day exploits and supply chain attacks while providing complete audit trails for compliance. ThreatLocker's innovative Ringfencing technology isolates critical applications from weaponization, stopping ransomware and limiting lateral movement within your network. ThreatLocker works across all industries, supports Mac environments, provides 24/7 US-based support, and enables comprehensive visibility and control. Mark Tolson, the IT Director for the City of Champaign, Illinois, says, quote, ThreatLocker provides that extra key to block anomalies that nothing else can do. If bad actors got in and tried to execute something, I take comfort in knowing ThreatLocker will stop that. Stop worrying about cyber threats. Get unprecedented protection quickly, easily and cost-effectively with ThreatLocker.

Mikah Sargent [00:57:41]:
Visit threatlocker.com/twit to get a free 30-day trial and learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's threatlocker.com/twit. Thanks so much to ThreatLocker for sponsoring this week's episode of Tech News Weekly. Fortune had a really interesting and in-depth story this week: Meta transforming rural Louisiana farmland into what could become the world's largest data center complex, committing, oh, you know, a cool $10 billion to build Hyperion, a massive AI training facility that will eventually consume as much power as 4 million homes all on its own. This ambitious project in Richland Parish, where a quarter of residents live below the poverty line, represents more than just another tech expansion. It's potentially setting the template for how Big Tech and utilities will partner nationwide in order to feed AI's insatiable appetite for electricity, raising critical questions about energy infrastructure, environmental impact, and of course, who ultimately pays for the AI revolution's power bill. We are talking about a scale like no other, a scale one could call raw ambition. Meta's Hyperion project defies comprehension in its scope.

Mikah Sargent [00:59:15]:
The initial phase involves nine buildings covering more than 4.4 million square feet (that's larger than Disneyland) on 2,000 acres of what was once farmland. But Mark Zuckerberg envisions something even grander: what he calls a supercluster that could eventually cover a significant part of the footprint of Manhattan and consume up to 5 gigawatts of power. Pastor Justin Clark of the nearby First Baptist Church expressed: I think, like a lot of people, my initial reaction was kind of being blown away that a site so rural was selected for something like that. As we started learning more about what it was and what the scope entailed, that feeling just continued, an amazement of, good grief. Just think of Charlie Brown. The project represents Meta's aggressive pivot in the AI race, following previous stumbles including, oh, you know, the multibillion-dollar metaverse initiative. Zuckerberg is now framing this as the pursuit of superintelligence, backing it up with $250 million compensation packages to poach AI talent.
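
As a quick sanity check on those figures, here's the back-of-the-envelope arithmetic connecting the 5-gigawatt supercluster to the "4 million homes" comparison. The average-household figure is an assumption (roughly 10,700 kWh per year; actual EIA numbers vary by year and state).

```python
# Back-of-the-envelope: how many average US homes does 5 GW correspond to?
kwh_per_home_per_year = 10_700           # assumed average US household usage
hours_per_year = 8_760
avg_draw_kw = kwh_per_home_per_year / hours_per_year  # about 1.2 kW continuous

supercluster_kw = 5 * 1_000_000          # 5 gigawatts expressed in kilowatts
homes = supercluster_kw / avg_draw_kw
print(f"{homes:,.0f} homes")             # roughly 4 million, matching the story
```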

Mikah Sargent [01:00:22]:
Of course, we've seen several of those folks who've been poached by Meta leaving soon after they join the company. In any case, this is a big power problem. The energy requirements are staggering: keeping Hyperion's servers operational will initially require twice the power consumption of New Orleans. And as I mentioned, that's just the beginning. In order to meet this demand, regional utility Entergy will construct three new gas-fired turbines with 2.3 gigawatts of combined capacity, marking the first such build-out in decades. Louisiana Public Service Commissioner Davante Lewis highlighted the national implications, saying the deal could signal to other states that this is how data centers should be governed and operated. This will be a test across the nation. I've heard that from investors. I've heard that from credit agencies.

Mikah Sargent [01:01:18]:
I've heard that from fellow data centers. Whatever comes out of the Meta deal may be the framework for them all. Okay, let's talk about the financial arrangement. The deal structure between Meta and Entergy could become the industry standard, as they hope it will. Meta will pay power costs for the $3.2 billion gas plants for the first 15 years. They will cover some of the transmission costs, and they will commit to helping build 1.5 gigawatts of solar and battery power throughout Louisiana. You know, you gotta balance it out, right? The arrangement has pushed Entergy's stock to record highs, but critics worry about the long-term risks to ratepayers. Logan Burke of the Alliance for Affordable Energy said: the problem here is that this is going to set precedent.

Mikah Sargent [01:02:07]:
The settlement puts all of us, all of your constituents and customers in the state, at the mercy of a non-public contract between two corporations. Because, yeah, that's just 15 years. What happens after 15 years? But Meta's not alone in working on massive build-outs. The hyperscaler spending spree is unprecedented: Amazon, Google and Microsoft are each investing $75 to $100 billion in data centers for 2025. Meta's data center budget has jumped from $28 billion to $70 billion. And OpenAI's Stargate project received $100 billion upfront for a proposed $500 billion Texas complex. That's a thousand...

Mikah Sargent [01:02:50]:
Million, like, 500 more times over. I can't even grasp that. Wild. Anyway, a Department of Energy report estimates that data centers' grid needs could triple by 2028, consuming up to 12% of the nation's electricity. Industry research projects roughly 46 gigawatts of new gas-fired electricity coming online in the next five years, which is a 20% jump in construction. So do we have any concerns about the environment? Well, the project has united unlikely allies in opposition, because the Louisiana Energy Users Group, which includes ExxonMobil, Chevron and Shell, believe it or not, warns that the project increases Entergy's Louisiana energy demand by 30%, which results in unprecedented financial risks.

Mikah Sargent [01:03:46]:
Environmental groups, of course, raise multiple concerns. Margie Vicknair-Pray of the Sierra Club's Louisiana chapter wanted to know: the Richland data center is to be the largest in the world; how can we ensure that blackouts won't become more frequent? What we have yet to fully understand is the impact the data center will have on the land, our resources and the people. Water consumption for cooling poses additional challenges. So how will the water be shared? And what happens if the farmers are unable to water their crops? The critical unknown in this is whether these massive investments will prove necessary, because energy analyst Cathy Kunkel suggests efficiency improvements are inevitable: either because they get more efficient, or because they don't and they go bankrupt. So you gotta become more efficient or you go away. The recent emergence of China's DeepSeek, demonstrating that AI can become cheaper and more efficient, in theory raises questions about whether this stampede for power might be built on the big B. It's not billion, it's bubble. Built on a bubble.

Mikah Sargent [01:04:57]:
Mike O'Boyle of Energy Innovation warns: I know the environment right now, federally and in the industry, is build, build, build as fast as we can. But costs must be considered. We're in a limited-resource environment where supply is much lower than demand, and it's causing prices to skyrocket. Fortune has a whole heck of a lot more in this really in-depth piece for you to check out, so I recommend heading over there from the link in our show notes to learn more about Hyperion. But as it stands right now, we are seeing these big tech companies building, building, building. In any case, that brings us to the end of this episode of Tech News Weekly. So I appreciate every single one of you for tuning in or checking out the show later as it hits your podcast app of choice. If you would like to subscribe to the show, you can head to twit.tv/tnw, where you'll find the show in audio and video formats if you're not already subscribed.

Mikah Sargent [01:05:57]:
And of course, if you aren't already, I'd love to invite you to become a member of Club TWiT: twit.tv/clubtwit. When you head there, you can join the club, and in doing so you will get ad-free episodes of every single one of our shows. You will get access to the TWiT+ feed, which includes behind the scenes, before the show, after the show, and our special Club TWiT shows, including Book Club and Crafting Corner, as well as access to our newer feed, which is the announcement feed. So there, whenever different companies are having news events, like the recent Made by Google event, our commentary is available to members of the club, so be sure to check that out as well. And lastly, access to the Discord server, a fun place to go to chat with your fellow Club TWiT members and those of us here at TWiT. We would love to have you in the club, twit.tv/clubtwit, and you start things out with a free trial. So if you haven't joined the club yet, haven't tried it out, please do.

Mikah Sargent [01:07:03]:
We can't wait to see you there. If you'd like to follow me online, I'm @mikahsargent on many a social media network. Or you can head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A dot coffee, where I've got links to the places I'm most active online. Thank you for being here this week, and I'll catch you again next week for another episode of Tech News Weekly. Bye-bye. Thank you.

Leo Laporte [01:07:25]:
The tech world moves fast, and you need to keep up, for your business, for your life. The best way to do that: twit.tv. On This Week in Tech, I bring together tech's best and brightest minds to help you understand what just happened and prepare for what's happening next. It's your first podcast of the week and the last word in tech. Cybersecurity experts know they can't miss a minute of Security Now every week with Steve Gibson. What you don't know could really hurt your business, but there's nothing Steve Gibson doesn't know. Tune in to Security Now every Wednesday. Every Thursday, industry expert Mikah Sargent brings you interviews with tech journalists who make or break the top stories of the week on Tech News Weekly. And if you use Apple products, you won't want to miss the premier Apple podcast, now in its 20th year, MacBreak Weekly.

Leo Laporte [01:08:18]:
Then there's Paul Thurrott and Richard Campbell. They are the best connected journalists covering Microsoft, and every week they bring you their insight and wit on Windows Weekly. Build your tech intelligence week after week with the best in the business. Your seat at Tech's most entertaining and informative table is waiting at twit.tv. Subscribe now.
