
Tech News Weekly 326 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

00:00 - Mikah Sargent (Host)
Coming up on Tech News Weekly: Reed Albergotti of Semafor stops by to talk about the Google Gemini controversy and how a company makes changes to its generative AI system to try to improve upon the wild responses it was giving. Then our first story of the week: Apple's Project Titan. It's time to say goodbye to that Apple Car future, well, at least for now. Then Karissa Bell of Engadget stops by to explain Biden's executive order on the sale of our personal information to specific countries. We talk about what's involved, who is going to be in charge of making sure that it's enforced, and a little bit about what you can do to protect your privacy online. Lastly, we round things out with a story about video doorbells that have poor security and yet are sold all across the web. Stay tuned for this episode of Tech News Weekly. This is Tech News Weekly episode 326, recorded Thursday, February 29th, 2024: Google's Gemini Goes Awry.

01:21
This episode of Tech News Weekly is brought to you by Ecamm, the leading live streaming and video production studio built for Mac. Whether you're a beginner or an expert, Ecamm is here to elevate your video production, from streaming and recording to podcasting and presenting. Ecamm Live is your all-in-one video tool, perfect for simplifying your workflow. Ecamm Live includes support for multiple cameras and screen sharing, plus the live camera switcher lets you direct your show in real time. We use Ecamm every week for iOS Today. Plus, I've used it for a lot of personal projects. It is an incredibly powerful tool that helps me create an entire show all in one app on my Mac. It's amazing. You can stand out from the crowd with high-quality video: add logos, titles, lower-thirds graphics, share your screen, drop in video clips, bring on interview guests, use a green screen and so much more. It's all happening within the app. Join the thousands of worldwide entrepreneurs, marketing professionals, podcasters, educators, musicians and other Mac users who rely on Ecamm Live daily. Get one month free when you subscribe to any of Ecamm's plans. Visit ecamm.com/twit and use the promo code TWIT at checkout.

02:38
Hello and welcome to Tech News Weekly, the show where, every week, we talk to and about the people making and breaking the tech news. I am your host, Mikah Sargent, and we've got a great show planned for you today. We're kicking off the show with AI. Yes, it is probably what you've come to expect here on Tech News Weekly, because every week there are AI stories, so of course we're going to talk about them on this show. Joining me to talk about what's going on over at Google, it is Semafor's own Reed Albergotti. Welcome back to the show, Reed. Thanks for having me. Good to be here. Good to have you. So first things first. Before we actually talk about Google's response to the controversy, we have to talk about the controversy itself. Can you tell us about what happened with Gemini?

03:24 - Reed Albergotti (Guest)
Yeah, you may have seen this. Gemini is Google's new high-powered AI model that competes with GPT-4. It can create images and text, and it got in trouble last week when some of the images it was creating just didn't quite seem right. It would take historical figures who were white and replace them with people of color, or it would refuse to generate images of white people altogether, and so this had people scratching their heads and, of course, accusing Google of being woke. What it really was was Google attempting, like pretty much all companies creating these LLM chatbots, to keep them in line and keep them from doing embarrassing things.

04:08
Ironically, it was that effort that actually led to this mistake and created an even more embarrassing mess for Google, and the controversy spilled into earlier this week, when people found that the text generation part of Gemini was also doing weird things. Someone asked it to compare who had had a worse impact on society, Elon Musk or Adolf Hitler, and Gemini said: well, they've both done some bad things, it's hard to say who's worse. I mean, it's been a tough week for Google. On Monday they lost $90 billion in market cap based off of this controversy. So that, in a nutshell, is essentially it.

04:55 - Mikah Sargent (Host)
Okay, interesting. I mean, so there's the controversy; everybody can take that for what they will. Let's talk about Google and, more specifically, Sundar Pichai's response. What did Pichai have to say about those problematic responses and generations?

05:15 - Reed Albergotti (Guest)
Yeah. So Sundar sent an email out to staff a couple of nights ago basically saying: this is unacceptable, we've totally messed up, we're going to make structural changes to how we manage product development, we're making progress on this issue, and we'll have some results out there soon. So essentially a bit of a mea culpa and a look, we're going to fix this problem. That's the response. I think there's a narrative now, a public relations problem for Google, where Sundar looks like he's having trouble controlling the culture of Google. I see that more as a PR problem than an actual business or technology problem. But that's the latest out there.

06:10 - Mikah Sargent (Host)
Yeah, interesting. The response there, I think, is what one would expect, obviously: trying to walk the line between "we're fixing this" and "look, we recognize this is bad," while at the same time trying to say "don't worry too much, it's going to be okay, we're going to get this ironed out." Now, how does a company actually go about fixing a generative AI tool if suddenly the tool starts misbehaving? What is the process of getting it to do what it's supposed to do again? And, from your own research and understanding, is that something that takes a long time, or something that can happen quickly? What happens when the alarm bells start ringing?

07:06 - Reed Albergotti (Guest)
Yeah, I think first we should take a step back and explain how these things work. These models, like Gemini or GPT-4, start out raw. GPT stands for generative pre-trained transformer; it's a pre-trained model, and when it comes out, it's very raw. It's been trained on the entire internet, possibly synthetic data, and that includes video and text and all sorts of things. That's created this amazing ability to predict the next word, or predict what image you're trying to generate, and then spit something out that seems remarkably like what a human might do. Those models are so raw.

07:51
When they come out, they have all the bad stuff and the good stuff that's out there in the world, out there on the internet, and that includes bias, outright racism, borderline and probably actual child sexual abuse material, CSAM as we call it today. It really needs to be kept in check, especially for these public companies, or companies like OpenAI that really care about their public reputation, or they're going to allow people to do stuff that's totally inappropriate. Then, of course, some of those people will post it on the internet and say, look how bad this stuff is, and the whole thing is going to get shut down. So you do this process of reinforcement learning with human feedback, essentially training the model over again, putting a layer of training on top of it, telling it: don't do these things. Or, in the case of bias: if somebody asks you to create an image of a doctor, don't create white men 100% of the time; throw in some women and make it look like the real world.
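To make that reinforcement learning with human feedback step a little more concrete, here is a minimal Python sketch of the pairwise preference loss that reward models are commonly trained with. Everything in it, the function name, the scores, the setup, is hypothetical; real systems train a neural reward model on enormous numbers of human comparisons and then use that model to steer what the chatbot will and won't say.

```python
import math

# Toy illustration of the preference-ranking step inside RLHF.
# All names and numbers here are hypothetical.

def bradley_terry_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise loss: the reward model should score the human-preferred
    response higher than the rejected one. The loss shrinks toward zero
    as the gap (chosen minus rejected) grows."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# A human labeler preferred response A over response B for some prompt,
# but the current reward model ranks them the wrong way around.
score_a = 1.8  # reward model's score for the preferred response
score_b = 2.3  # reward model's score for the rejected response

print(f"loss before training: {bradley_terry_loss(score_a, score_b):.3f}")
# Gradient descent on this loss nudges the model to rank A above B;
# the tuned reward model is then used to fine-tune the chat model.
```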

09:02
These models don't think. They're not people, and they're not actually intelligent, even though we call them artificial intelligence. It's really hard to get them to do what you want them to do, and I think that just takes expertise and time. I'm sure that Google can figure that out. That part of it is not the fundamental AI breakthrough that makes this stuff possible; it's more like the work that needs to go into training it. I think it's a function of companies like Google being pushed by the release of ChatGPT to come out with this stuff so quickly and show that they're on top of their game. I'm rambling here, but I think another thing that's really ironic is that Google invented the underlying technology, the transformer model, that made all this stuff possible. They had this stuff probably before anyone else. The reason they didn't release it is exactly this: they knew that it's hard to predict, it's hard to keep under control, and it could lead to embarrassing reputational damage for the company. Absolutely.

10:13 - Mikah Sargent (Host)
I mean, we've seen these tools criticized for bias in the other direction. I remember, and it seems like it's been so long ago now, Microsoft's little chatbot suddenly becoming a horribly racist bot. From there, it seems like, as you were talking about, the training says don't always create doctors that are white, don't always create these different things that stick out. And I remember a number of AI photo-generation tools, when they were generating female-presenting individuals, almost always made them scantily clad, that kind of thing. And it sounds like this tuning you just talked about is a way to kind of fix it. Do you think that's what happened in the first place, that it just somehow went over the top of what it was supposed to do? That seems very difficult, I guess, is what I'm saying. It seems difficult to get these systems to find a balance, because they can almost take something and run with it. Is that the case?

11:32 - Reed Albergotti (Guest)
Yeah, I think it is difficult, and I think it's possible. I mean, we've seen OpenAI do a pretty good job of improving it over time, but they've had more time and they've had more runway, right? Because they're not Google, I think they're able to make some more mistakes and learn from the public feedback and keep improving on that. They've been doing that work. Google has had less time, and they also don't have as much of a margin for error. And, I don't know this to be the case, but I almost wonder if Google felt there was even more risk in possibly having those biased results, those results that might blow up in their face one day, so they maybe pushed a little too hard on the reinforcement learning aspect of it, and that's what led to this, right?

12:30
I think it's a very subtle art, and I don't think these large language models are ever going to get to the point where they're perfect, right? At least not with the technology we have today. I think the technology will improve, and there may be more breakthroughs in the future, but they're always going to hallucinate. People who know what they're doing, prompt engineers, will always be able to get them to go off the rails or do something they're totally not supposed to do. That's just always going to be a problem.

13:00 - Mikah Sargent (Host)
So that was actually kind of my last question there. If there was that Goldilocks solution, do you foresee AI getting it just right? It sounds like, almost because of the human element, that's just not something that's possible, at least as the technology exists right now. And that's kind of the impression that I got too, because even if you have something that, to the wide swath of humanity, is the ideal, there are still going to be extremes on either side saying that it does this too much or it doesn't do that enough. It's sort of a sliding scale.

13:41 - Reed Albergotti (Guest)
Yeah, yeah, I think we'll get there. I just don't think it's going to be by doing exactly the same methods we're doing now, just making the models better or doing reinforcement learning better. I think it's going to take new methods, and people are working on those methods. If you look at what Meta is doing, I think it's really interesting: they're trying to create smaller models that are really good at one task and sort of tying them all together. It's part of this world-model idea from Yann LeCun, who is the head of AI over there. There are a lot of different ideas, and I think we will get there, but it's going to take new techniques.

14:25
I think the other thing that's sort of interesting here is that, of course, this has become bigger than the technology story, right? It's on Fox News. They're talking about the woke Silicon Valley tech companies. It's sort of a continuation of the content moderation debate that we've had in the wake of the 2016 election. There are people who believe that there shouldn't be any reinforcement, that there really shouldn't be any attempt to make these things act and behave in this societally acceptable way. You should just let them do what they do and trust that users are going to use them correctly, right? And of course, you can be on either side of that debate, but that debate will be there, I think, for some time. And I think, by making these things quote-unquote safe, you do start losing some capability. We've seen evidence of that. Oh, absolutely.

15:27
Right, and I think people miss that, right? Like, even if you believe, yeah, I'm all for all the diversity and all the stuff that the right would call woke, even if you're all for that, you want that capability, because you know you're not using it to, I don't know, do those things.

15:45 - Mikah Sargent (Host)
Yeah, I'm using it responsibly, right. And this is a good example, because someone here at work uses a tool for what we do. There was a tool they had been using for so long that worked great, and then the company instituted a number of new policies, specifically around copyright, and sort of trained the model on that. Then, even with content that we owned, when trying to get it to ingest that content, it would say: we don't have proof that you have the rights to access this content, so we can't act on it. And that's frustrating, because up to that point it had been a great tool that we were able to use, and we can't use it anymore because these little blocks have been put in place. So yeah, capabilities definitely are lost, and it makes sense why some of the most powerful uses of these tools end up being the more open-source, do-it-locally systems where there aren't these guardrails in place.

16:48 - Reed Albergotti (Guest)
Yeah, I think that's a really good point, and it makes me wonder whether this is one of the advantages for startups and open-source companies, one of the ways they can disrupt or unseat the incumbent tech companies: they can make these mistakes, or they can create products where maybe they're not as exposed to the public relations hits that will come out of these things producing offensive outputs, right? So I don't know, it's a real conundrum, I think, for any big company making this stuff. You sort of can't win, right? You're either going to get criticized for bias and all sorts of other horrible things, or you're going to be criticized for being too woke, or for holding it back and making it less capable. So I don't know where the line is on this stuff.

17:51 - Mikah Sargent (Host)
Darned if you do, darned if you don't. Reed Albergotti, I want to thank you so much for your time today. It is always great to get to chat with you. Of course, folks can head over to semafor.com to check out your work. How do they get that newsletter, that sweet, sweet newsletter?

18:06 - Reed Albergotti (Guest)
Yeah, it's super easy. You just type in your email address and we will send you a free technology newsletter twice a week, which most people seem to like. So I encourage you to do that, and I love when I get emails from readers with feedback. It's a good community we're building over at Semafor Tech, and I hope to hear from you. Awesome, thanks so much.

18:31 - Mikah Sargent (Host)
Thank you. Alrighty folks, up next: my story of the week, about a car project gone away. But first let me take a quick break to tell you about our sponsor this week. It is DeleteMe, who are bringing you this episode of Tech News Weekly. Have you ever searched for your name online and just not liked how much of your personal information was available? I certainly have. It makes me feel gross thinking about all that information that was online before I found DeleteMe. See, DeleteMe helps reduce risk from identity theft, credit card fraud, robocalls, cybersecurity threats, harassment and unwanted communication overall. We've used the tool here at TWiT because many of us were receiving a bunch of messages from someone pretending to be our CEO, Lisa Laporte. In order to stop that from happening, DeleteMe was used to remove a lot of the personal information that was online, because those bad actors were able to go online, find the flow chart, the organizational chart of the company, know who to contact and say: hey, it's me, Lisa (that's the voice I imagine them having), I need you to bring me 15 Apple gift cards, it'd be a shame if you didn't do that. Then many of us knew: no, no, no, no, that's not actually Lisa. It didn't sound like her, first and foremost, but also we have that knowledge. But you may work at a company where that's not the case. That's where DeleteMe can come in handy. The first step is to sign up and submit some basic personal information; you've got to give them the information they should be looking for for removal. DeleteMe experts will find and remove your personal information from hundreds of data brokers, helping reduce your online footprint and keeping you and your family safe. Then, and this is the most important part, because those data brokers are going to keep finding and scooping up your information, DeleteMe will continue to scan and remove your personal information regularly. That includes addresses, photos, emails, relatives, phone numbers, social media, property value and more. Since privacy exposures and incidents affect individuals differently, they have privacy advisors who ensure customers have the support they need when needed. So protect yourself and reclaim your privacy by going to joindeleteme.com/twit and using the code TWIT. That's joindeleteme.com/twit with the code TWIT for 20% off. Our thanks to DeleteMe for sponsoring this week's episode of Tech News Weekly. All right, folks: if you have been saving up in your football-stadium-sized piggy bank for Apple's car, then it's time to crack open that big ol' hog and put all that money somewhere it can be earning interest instead, because according to, as I call him, Mark Bloomberg, that is, Mark Gurman of Bloomberg, Apple's car project is no more.

21:41
Apple has long been rumored to be working on a self-driving electric vehicle, and the company seems to have not only canceled the part of the project that would make the vehicle self-driving, but also canceled the vehicle itself. It's been a pivot and a shift: at first they realized they didn't really want to get into self-driving, but they still wanted to make an electric vehicle. And, of course, this is all sources familiar with the matter who say this, so take it with the necessary grains of salt, as it were. Now the company appears to no longer be working on the project at all.

22:28
The project, of course, is known as Project Titan: to make a fully autonomous electric vehicle. Over the years there have been a number of issues at play. We've seen different leadership move off the project, we've seen market conditions change and, as you might imagine, there are, as there always will be, a number of technological hurdles that the company seemingly is not interested in throwing further investment and attention into. This has reportedly been in the works for more than a decade, and when the company made this decision, there were reportedly 2,000 employees working on the project. So 2,000 people were a little caught off guard by the fact that the car project, Project Titan, was canceled. According to Mark Gurman, it was announced by both the chief operating officer, Jeff Williams, and the vice president, Kevin Lynch, who were both in charge of the project. They announced that the project was no more, and many of the employees who were working on the car project are now being moved to the artificial intelligence division. So it seems like the company is making a pivot toward really getting into generative AI, and this makes sense, though you may be going: okay...

24:11
So how can you go from a car project to AI? Well, for many of the employees working on the electric car project, the ones who weren't specifically working on the car part of it all but on the vehicle's ability to navigate without a driver, that's AI. There's a lot of artificial intelligence involved there, and so it makes sense that those employees would be shifted (that's a little bit of a pun) to something that is similar to their line of work. We just heard Tim Cook say that the company has some big plans when it comes to generative AI and artificial intelligence as a whole. There will, of course, be some layoffs, because some employees may not be able to find other projects within the company, but to what extent we're not sure.

25:17
Now, interestingly, investors in the company seem to have applauded Apple's decision to cancel this project. After the announcement was made, Apple shares increased, and maybe that suggests investors felt this was a long shot of a project, a waste of money perchance, and so it made sense to move along to something else. Now, we've heard in the past that Apple had looked into purchasing Tesla from Elon Musk, maybe using that as a jumping-off point, as it were, to further its own Apple car project; that famously fell through, and, as far as we know, that's still not happening, and the company is just completely shifting away from Project Titan as a whole. Now, I didn't know this: Gurman points out in the piece that the electric vehicle market is kind of slowing, despite the fact that in the beginning it was really up and at 'em. It's slowing because of high prices, but also because the charging infrastructure around the country and elsewhere is not where it needs to be, and that kind of needs to get figured out before any real saturation of electric vehicles takes place. Overall, we should see a continued effort from the company to show its chops when it comes to generative AI, maybe even as soon as WWDC, which of course takes place in the summer, when the next versions of the operating systems are announced, and we shall see what is next for the company. As Gurman points out, this kind of joins a few other projects in the graveyard of Apple products, including a TV set, as well as the multi-device wireless charging pad that the company was rumored to be creating, AirPower I believe it was called. So AirPower and that Apple TV set are waiting with open arms to bring in Project Titan. We'll keep an eye on that as things go forward, but I think ultimately I'm excited that this means there are more employees focusing on what Apple's going to do in the generative AI space. That should be quite fascinating.

28:06
All righty, up next I've got another interview for you. This time we're going to be talking about Biden's new executive order, but before that I do want to take a quick break to tell you about Club TWiT. Twit.tv/clubtwit is where you go to join the club: $7 a month, $84 a year. You can become a member of Club TWiT.

28:27
When you do, you get some great stuff. First and foremost, you get that warm, fuzzy feeling in your heart knowing that you are supporting the work we do here at TWiT. You are helping me continue to do these shows and invite on great guests to have conversations about what's going on in tech, and you gain access to ad-free content. You also get access to the TWiT+ bonus feed, which has extra content you won't find anywhere else: behind the scenes, before the show, after the show. Special Club TWiT events get published there, including our recent escape room experience. You also get access to the Club TWiT Discord, a fun place to chat with your fellow Club TWiT members and also those of us here at TWiT.

29:06
Plus, you get audio and video access to many shows that are video-exclusive to Club TWiT, including Hands-On Mac, iOS Today, Hands-On Windows and Home Theater Geeks, plus some other great shows. So please consider signing up for Club TWiT at twit.tv/clubtwit: $7 a month, $84 a year. Alrighty, we are back from the break, and that means it's time for our next conversation. Biden announced that he would be putting forth an executive order that is all about our personal information. Joining us to help us understand what this executive order is about and what the impact might be is Karissa Bell from Engadget. Welcome back to the show, Karissa.

30:03 - Karissa Bell (Guest)
Hey, it's good to be here.

30:04 - Mikah Sargent (Host)
Great to have you. So let's get right into it: first and foremost, can you just tell us about President Biden's executive order, which countries are impacted, and what types of information, I guess you'll reveal this to us, are barred from sale? What does that mean?

30:20 - Karissa Bell (Guest)
Yeah, so it restricts the bulk sale of personal data to Russia, China, North Korea, Cuba, Iran and Venezuela, and it specifically targets data that a lot of us would think of as pretty sensitive: geolocation, biometric, genomic, health, financial and some other types of personally identifying information.

30:45 - Mikah Sargent (Host)
Okay, so it's essentially a ban on sharing all of that very personal information with a specific list of countries, or rather disallowing the sale of that information to those specific countries. Now, the announcement, the actual press release from the White House, said that Biden will issue an executive order, and that was as of February 28th; today is a very rare day, February 29th, given that it's a leap year. Do we know when that order is set to go into effect? Has it been put into effect now? How does that work?

31:28 - Karissa Bell (Guest)
So I don't think we quite know just yet when these new rules will take effect. There's going to be a kind of complex rulemaking process within the Department of Justice to try to figure out exactly how these rules will work and, if there are carve-outs, what those should look like. They're going to engage, I think, some of the industry in that process. So we don't know exactly when the rules will take effect, but this is sort of them saying that their intention is to make this happen, and after they go through some rounds of comment and rulemaking, then we'll see the finalized version of this.

32:04 - Mikah Sargent (Host)
Understood. Now, it makes sense that Biden's personal information, Biden's genomic information, and even something as simple as the emails Biden is sending are a national security risk; you don't want China, Russia, Cuba and all these other places necessarily having access to that. But for our listeners who might be asking how their data makes a difference when it comes to national security, can you talk about what that means and why we would want to block the sale of all Americans' personal data?

32:47 - Karissa Bell (Guest)
Yeah, I mean, I think you have to put this in the broader geopolitical context of our relationship with these countries right now, especially China and Russia. The government talked yesterday about how we already know that they access a lot of this type of data, either through data broker transactions or through hacking or other means. And I think the concern is not necessarily what they can do with any one person's information, but when you buy these large troves of data, you can find information about military personnel, activists, dissidents, researchers, people that the governments of these countries might have reason to target for blackmail or espionage or other things that would be at odds with our national security priorities.

33:40 - Mikah Sargent (Host)
Understood, and in your piece you mentioned a Duke University study. I was hoping you could tell our listeners about that. It was really interesting.

33:50 - Karissa Bell (Guest)
Yeah, so you know, this is kind of an issue that a lot of researchers have been trying to raise the alarm about for a while, and there's a group at Duke University that put out something last year where they basically sought out data brokers that were advertising that they had a lot of information about veterans or military personnel and their families, which, it turned out, was not that hard to find. Then they posed as buyers to see: is there any kind of vetting process? Just how difficult is it to get this kind of pretty sensitive, non-public data? And what they found was that it was actually really easy and really cheap. They posed as two different firms, one an American firm and one a foreign firm, and in both cases a lot of these companies just kind of handed it over without asking any questions.

34:39 - Mikah Sargent (Host)
Wow, yeah, that's big. I mean, that clearly suggests it's quite easy for these countries to get this information and, as you pointed out, these were perhaps citizens who are a little more closely tied to national security than others might be. Now, you mentioned the enforcement process a little bit at the top; I was hoping you could go into it a little more. When the president puts forth an executive order, in this case the executive order barring the sale, what group is responsible for making sure that data brokers follow these rules? And do we have any information about what that will look like? Maybe even just looking at precedent: when something involving tech has been put through with an executive order in the past, how does this process typically follow through?

35:37 - Karissa Bell (Guest)
Yeah, it's a good question, because I think one of the biggest question marks around how effective this will ultimately end up being is how this enforcement process works.

35:48
I don't think we have the exact details.

35:50
That's part of what the Department of Justice is kind of working on right now.

35:54
They've said that, broadly, they want to model this after the way the US does sanctions policies. So, the same way that businesses are expected to have some basic measures in place to make sure they're not doing business with entities under US sanctions, they want data brokers and other companies that engage in these transactions to have a process in place where they're doing some kind of vetting, some kind of internal system, to at least try to prevent even the indirect sale of this data. Because one thing that happens is one group buys a bunch of data, then resells it, and on down the line it makes its way to one of these countries. So they want to come up with measures that put the onus on the companies holding this data to actually do a little bit of due diligence. We don't know exactly how that's going to work; I think there's more to come as they go through this process.
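As a rough illustration of that sanctions-style screening idea, here is a minimal Python sketch. The country list and data categories come from the order as described in this episode, but the data structures and the single rule below are hypothetical simplifications; the DOJ's eventual rules will involve volume thresholds, carve-outs and reseller vetting, none of which are modeled here.

```python
# Hypothetical restricted-party screening, loosely modeled on how
# sanctions compliance checks work. Not a real compliance system.

RESTRICTED_COUNTRIES = {"RU", "CN", "KP", "CU", "IR", "VE"}
SENSITIVE_CATEGORIES = {"geolocation", "biometric", "genomic", "health", "financial"}

def transaction_allowed(buyer_country: str, data_categories: set) -> bool:
    """Block bulk sales of sensitive data categories to restricted countries."""
    selling_sensitive = bool(data_categories & SENSITIVE_CATEGORIES)
    return not (buyer_country in RESTRICTED_COUNTRIES and selling_sensitive)

print(transaction_allowed("CN", {"geolocation", "email"}))  # False: blocked
print(transaction_allowed("FR", {"geolocation"}))           # True under this toy rule
```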

36:51 - Mikah Sargent (Host)
Now, one thing I noticed about this is that, as you pointed out, the announcement was made yesterday, and we don't know quite when it's going to go into effect, partly because of what you just explained: it's a multi-step process, and we don't yet know what groups are going to be involved, what exactly it will look like, or what carve-outs there will be. But anyone who, for whatever reason, has put in their email to get all of the press releases from the White House, anyone who watches this show, or maybe some folks, I'm sure, especially the folks who watch our other show, Security Now, may already be aware of the fact that our data is sold. Maybe they read about that Duke University study. There is now an awareness, or maybe there has been for some time, that people's personal information may very well be being purchased by other countries, in some cases countries that, politically speaking, we are not on the best terms with, and so there is a level of, kind of, I don't know...

38:01
I guess what I'm getting at is there's a part of me that's a little bit surprised, maybe, that this announcement was made, because it draws attention to the fact that this is happening, and yet now there's going to be this period of time while we wait for these protections to come into place. So what's an individual to do, knowing now that our information is being sold to other countries, or is available for purchase by other countries? Is there something we should be doing as individuals to help keep ourselves more safe and more private online?

38:36 - Karissa Bell (Guest)
Yeah, it's a good question, and I think there is some language in the press release that suggests they also want to draw attention to this issue more broadly. The unfortunate reality is that the data broker industry is a massive, multi-billion-dollar industry and it's largely unregulated. There are some state-level laws, but we don't have comprehensive privacy laws in this country yet, though there have been some attempts. So up until now there have been relatively few restrictions on these data brokers and what they can actually do with our data. In terms of what we can do as individuals, I think there's what we might think of as privacy and security best practices: don't give apps and services permissions they don't need, be careful about where you're sharing your personal information, things like that.

39:36
I think it's hard, because a lot of this information is already out there and a lot of it we have no control over, for example, credit card companies selling transaction histories to data brokers. So I think you can educate yourself about how this happens and try to look out for companies that have a better record on these issues. There are also services that will crawl data broker databases for you and take out your information so it's not publicly viewable anymore. So there are steps like that, but I think one of the issues this executive order highlights is that there's just so much already out there, and without our government getting involved and actually putting some rules in place, a lot of this is going to keep going unchecked.

40:30 - Mikah Sargent (Host)
Yeah, absolutely. And that is honestly where I celebrate that this is taking place, because those of us who are paying attention, who have been paying attention for some time and who specifically work in this field, have known about data brokers and the level of information they collect and keep, and that little changes were never going to make the difference necessary here; this needed to come from the top. So it's good that this is taking place, overall, when it comes to protecting our individual privacy. And, in a way, I will say it's a little bit of a shame that it takes a national security risk for that to be the case. But, all told, it is a change that is positive for the individual consumer. Karissa Bell, I want to thank you so much for taking the time to explain the executive order to us and for joining us today. Of course, folks can head over to engadget.com to check out your work. Is there anywhere else they should go to follow along with what you're doing?

41:40 - Karissa Bell (Guest)
You know, I'm on social media: I'm on Threads, Bluesky, still on Twitter. Same handle everywhere: karissabe.

41:48 - Mikah Sargent (Host)
Wonderful. Thank you so much for your time.

41:51 - Karissa Bell (Guest)
Thank you.

41:52 - Mikah Sargent (Host)
Alrighty folks, up next: my final story of the week, in just a moment. Alrighty: for those of you out there who are perhaps concerned about your security at home, or who see other people with video doorbells and are thinking about getting one yourself, I wanted to point to a really important report, a sort of study, from Consumer Reports. It happens to be by Stacey Higginbotham who, if you are not aware, is a former host of This Week in Google and a longtime IoT journalist, and it's all about video doorbells. These video doorbells are apparently for sale on many different online stores, so Walmart, Sears and Amazon have these video doorbells for sale, and they have really, really bad security. Now, they don't cost a whole lot of money and, particularly on Amazon, they may show up as a sort of promoted item or an item with a good recommendation, and because of that, they may be something people are interested in purchasing. These doorbells, though, do not encrypt a lot of the information that's exchanged between them and the app and server with which they communicate.

43:45
The doorbells are made by, or rather sold under, two brand names, Eken (E-K-E-N) and Tuck (T-U-C-K), alongside 10 other video doorbells out there that look almost exactly identical. They're all controlled by an app called Aiwit (A-I-W-I-T), I don't even know how to begin to pronounce that, and the app is owned by Eken. As I mentioned, there were 10 other brands; the Consumer Reports team purchased one sold under the name Fishbot and another called Rakeblue. What these brand names are is just ridiculous.

44:38
But the security of these doorbells, or the lack of security, resulted in Stacey Higginbotham actually having her home exposed by a colleague, who was able to access images from the doorbell camera while being nearly 3,000 miles away. Now, they don't explain the exact method by which you use this information, but I will say that it is clear to me that if you have the app and you have the serial number of the device, that's all you need to be able to get snapshots of what the doorbell is seeing. You can't get full video, but you can see snapshots. You can get full video if you have physical access to the device and, given that it's a doorbell camera, kind of everybody has physical access to the device, right? All you need to do is have the app.

45:54
You walk up to the doorbell, you press and hold the doorbell button to put it into pairing mode, and then you pair it with your app, and suddenly the person who owns it does not have access to this doorbell anymore. The good thing, if anything good can be said about it, is that if the pairing changes, the person who owned the doorbell and had it paired in the first place does at least get an email saying it's been paired with a new person. So that part, at least, you would be aware of, and you could go and re-pair it yourself. That's just annoying. However, even without ever alerting the other person, if you have the serial number, which is printed on the device, then you can see snapshots from the device without the other person ever being aware of it. So, if you can imagine, this is suddenly a device that is completely accessible without the person being alerted, and it can continue to be accessed from there. On top of that, unencrypted personal information is sent through the network traffic: the person's Wi-Fi network name (their SSID, as it's called) and the person's home IP address are both sent over the network unencrypted. If someone was able to grab the data, they would be able to see the person's home IP and their Wi-Fi name, which on its own is not necessarily enough to do anything, but with that information somebody a little more sophisticated could potentially do more, especially if you're using a common router and you've not changed the password on it; then it becomes very easy to do so.
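Consumer Reports doesn't publish the vendor's actual API, so here is a generic, hypothetical Python sketch of the class of flaw being described: a server endpoint that treats a device serial number, something printed on hardware mounted outside a house, as if it were proof of ownership. Every name, route and data structure here is made up for illustration; the second function shows the ownership check that appears to be missing.

```python
from typing import Optional

# Fake in-memory stores standing in for a vendor's cloud backend.
SNAPSHOTS = {"SN-12345": b"...jpeg bytes..."}  # latest image per device
OWNERS = {"SN-12345": "alice"}                 # who actually paired the device

def get_snapshot_insecure(serial: str) -> Optional[bytes]:
    # The flawed pattern: knowing the serial number alone grants access,
    # and the serial is printed on the doorbell itself.
    return SNAPSHOTS.get(serial)

def get_snapshot_checked(serial: str, requesting_user: str) -> Optional[bytes]:
    # The missing step: verify the requester is the paired owner.
    if OWNERS.get(serial) != requesting_user:
        return None
    return SNAPSHOTS.get(serial)

print(get_snapshot_insecure("SN-12345") is not None)        # True: anyone can peek
print(get_snapshot_checked("SN-12345", "mallory") is None)  # True: request refused
```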

47:59
Now, as I mentioned, there are many, many different brands selling the same doorbell camera. It is clearly made by one manufacturer, and then multiple companies take it and rebrand it exactly as they need to. There were 4,200 listings of the Eken and Tuck versions of the product in January of this year. Amazon, Walmart, Sears, Shein and Temu were all alerted by Consumer Reports that this device had issues. Temu said that they reviewed CR's findings and that they'd removed all of the video doorbells that use the Aiwit app. Walmart said that it would do something about it; we don't know if they did. And Amazon, Sears and Shein did not respond to questions from the journalists. Unfortunately, as of the end of February, which is now, as we record this show, most of those video doorbells were still available for sale on those many retailers' websites.

49:32
Another thing that was kind of unfortunate: on top of those security vulnerabilities we talked about before, one thing that is necessary for a product to be sold in the United States is that there have to be special FCC identifiers visible to consumers. You need to be able to see the FCC identification; it's a special code, and you can look it up in the FCC database to make sure the device is okay to use and won't cause you harm because the radio frequencies are too strong. Those identifiers were not visible to consumers, so by default these devices are illegal to sell in the United States. On top of that, there were FCC records for some of the devices, but not for all of them. In any case, as I just said, you have to have that FCC identifier available somewhere for the consumer to see; otherwise, the product is not legal to be sold in the United States.

50:47
When it comes to these doorbells, I mentioned that oftentimes a device like this can be recommended. The Consumer Reports piece says that Amazon highlights it as Amazon's Choice, an "Overall Pick." Now, as a person who works in this field and has gathered an understanding of how Amazon goes about rating these devices in some cases and doesn't in others, I know that Amazon's Choice is something I should never take at face value. But many a consumer will see that badge and consider it to mean the product is kind of blessed, that it is going to be better than others, and it's unfortunate, because in many ways and in many cases these badges are just automatically generated based on how many times people have found the product, how many times people have maybe purchased the product.

51:55
It's all kind of algorithmic, and so what I find important to do, if you ever see that Amazon's Choice badge, is to look at the badge and then look at what it says to the right of it. Because let's say I was looking for a silicone floor mat for my dog's food and water, right? I want a mat I can put on the floor, with the dog's food and water on top of it. I do a search and come across a green silicone floor mat in the shape of a dog bone, so I click on it, and it has that Amazon badge next to it that says Amazon's Choice. In the text to the right, I might see something that says Amazon's Choice for floor mats that are shaped like dog bones and are green. It's such a specific thing that, of course, the one product that is green and shaped like a bone is going to be Amazon's Choice, because that's the one thing people can find. If they typed in the words "green dog bone shaped floor mat," then yeah, that's going to end up being the product people are buying and, in many cases, not returning if it's exactly what they wanted, and so that Amazon's Choice label can get assigned.

53:25
My point is: make sure you pay attention to the context. Now, it is bad in this case that it wasn't only Amazon's Choice; the text next to it said "Overall Pick." That means it is a well-rated, well-priced product that enough people have purchased, and not returned or given a poor review, that people continue to buy it. But it's likely the case that many of the people purchasing this aren't aware of the security flaws the device has. So, ultimately, my advice to you in this case is to do your research, and if you don't feel like you can do your research, pass that research off to someone else. It could be a publication: maybe you go to Consumer Reports, maybe you go to the Wirecutter, maybe you go to your favorite tech site and see what they've said about the best doorbell cameras. And maybe lean your bias toward a more established company when it comes to purchasing these kinds of products: instead of buying a no-name device that has a really good price, you go, okay, I'm going to invest a little bit more money, because I don't want someone 3,000 miles away to be able to look in and see what I'm doing in my home. So I'm going to leave it at that. There's a lot more to read in this Consumer Reports piece, and I think Stacey Higginbotham deserves all of the clicks and all of the views from all of you out there, so please go check out the full article to get the full scoop. But there's a little bit of insight into these video doorbells that have security issues. Contact your family and your friends and make sure none of them have purchased these cameras, because they're not good. They're not good. All right, folks, that's going to bring us to the end of this episode of Tech News Weekly.

55:28
This show publishes every Thursday at twit.tv/tnw, so you can head there to subscribe to the show in audio and video formats. There are a couple of buttons: subscribe to audio, subscribe to video. And I mentioned Club TWiT before, so I won't go into all of the details; I'll just say head to twit.tv/clubtwit. $7 a month, $84 a year. Join the club, and we appreciate it. We appreciate all of you for doing that.

55:54
If you'd like to follow me online, I'm @mikahsargent, or you can head to chihuahua.coffee, where I've got links to the many places I'm active online. You can check out, later today, iOS Today and Hands-On Mac, both shows that I do here on the TWiT network in the club. And, of course, you can check out, on Sundays, Ask the Tech Guys, which I co-host with Leo Laporte, where we take your tech questions live on air and do our best to answer them. Thanks so much for tuning in, and I will see you again next week for another episode of Tech News Weekly. Next week, we'll have Abrar Al-Heeti as my guest co-host. Until then, bye-bye.

 
