This Major Blunder Shows Why You Can’t Trust ChatGPT With Home Security Questions

Far from worrying about AI in home security, I’m a big fan of how it’s saving us time and getting things right, especially with object recognition. But that doesn’t mean you should hop on ChatGPT and start asking it questions about home technology, privacy or how safe a device is.
AI is excellent if you want one of Google’s agents to tell you when a package was delivered or picked up, or ADT’s Trusted Neighbor to unlock your front door for a recognized family member. But you really shouldn’t ask AI for security advice, especially in its current state.
There are good reasons for this: Even the best LLMs (large language models) still hallucinate information from the patterns they’ve gleaned. That’s a particular problem in the smart home world, where tech specs, models, compatibility, vulnerabilities and updates shift so frequently. It’s easy for ChatGPT to get confused about what’s right, current or even real — and those are key questions when making decisions about home security. Let’s look at a few of the biggest mistakes so you can see what I mean.
Chat AIs hallucinate that Teslas are spying on your home security


Asking a chatbot about specific security tech is always a risky business, and nothing illustrates that quite so well as this popular Reddit story about a chat AI that told the user a Tesla could access their "home security systems." That’s not true — it’s probably a hallucination based on Tesla’s own HomeLink service, which lets you open compatible garage doors. Services like Google Gemini also suffer from hallucinations, which can make the details hard to trust.
While AI can write anything from essays to phishing emails (don’t do that), it still gets information wrong, which can lead to unfounded privacy concerns. Interestingly, when I asked ChatGPT what Teslas could connect to and monitor, it didn’t make the same mistake, but it did skip features like HomeLink, so you still aren’t getting the full picture. And that’s just the start.


Chatbots can’t answer questions about ongoing home threats or disasters

Conversational AI won’t provide you with important details about emerging disasters.
Tyler Lacoma/ChatGPT

ChatGPT and other LLMs also struggle to assimilate real-time information and use it to provide advice. That’s especially noticeable during natural disasters like wildfires, floods or hurricanes. As Hurricane Milton was bearing down this month, I asked ChatGPT whether my home was in danger and where Milton was going to hit. While the chatbot thankfully avoided giving wrong answers, it couldn’t offer any advice except to consult local weather channels and emergency services.

Don’t waste time on that when your home may be in trouble. Instead of turning to AI for a quick answer, consult weather apps and software like Watch Duty, up-to-date satellite imagery and local news.
LLMs don’t have vital updates on data breaches and brand security

While ChatGPT can compile information about a security company’s track record, it leaves out key details or gets things wrong.
Tyler Lacoma/ChatGPT

It would be nice if an AI chatbot could summarize a brand’s history with security breaches and note any red flags about purchasing its products. Unfortunately, chatbots don’t seem capable of that yet, so you can’t really trust what they have to say about security companies.

For example, when I asked ChatGPT if Ring had any security breaches, it mentioned that Ring had experienced security incidents but not when they happened (before 2018), which is a vital piece of information. It also missed key developments, including the completion of Ring’s payout to affected customers this year and Ring’s 2024 policy reversal that made cloud data harder for police to access.
ChatGPT isn’t good at providing a timeline for events and shouldn’t be relied on to make recommendations.
Tyler Lacoma/ChatGPT

When I asked about Wyze, which CNET is not currently recommending, ChatGPT said it was a "good option" for home security and mentioned a 2019 data breach that exposed user data. But it didn’t mention that Wyze exposed databases and video files in 2022, then suffered vulnerabilities in 2023 and again in 2024 that let users access private home videos that weren’t their own. So while summaries are nice, you certainly aren’t getting the full picture of a brand’s security history or whether it’s safe to trust.

Read more: We Asked a Top Criminologist How Burglars Choose Homes
Chat AIs aren’t sure if security devices need subscriptions or not

ChatGPT can’t adequately explain security subscriptions or tiers.
Tyler Lacoma/ChatGPT

Another common home security question I see is about whether security systems or home cameras require subscriptions. Some people don’t want to pay ongoing fees, or they want to make sure what they get is worth it. While chatbots can give you plenty of specifics for something like a recipe, they aren’t any help here.
When I asked ChatGPT whether Reolink requires subscriptions, it couldn’t give me any specifics, saying many products don’t require subscriptions for basic features but that Reolink "may offer subscription plans" for advanced features. I tried to narrow things down with a question about the Reolink Argus 4 Pro, but again ChatGPT stayed vague about some features being free and some possibly needing subscriptions. As answers go, it was largely useless.
Meanwhile, a trip to CNET’s guide on security camera subscriptions or Reolink’s own subscriptions page shows that Reolink offers both Classic and Upgraded subscription tiers specifically for LTE cameras, starting at $6 to $7 per month depending on how many cameras you want to support and going up to $15 to $25 for extra cloud storage and rich notifications/smart alerts. Finding those answers takes less time than asking ChatGPT, and you get real numbers to work with.
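
If you want to sanity-check what a subscription actually costs over time, simple arithmetic beats asking a chatbot. Here’s a quick back-of-envelope sketch in Python using the price ranges quoted above; the tier labels just reuse the Classic and Upgraded names from Reolink’s page, and the figures are this article’s ranges, not an official price list.

    # Back-of-envelope annual costs from the monthly price ranges quoted above.
    # These figures are illustrative, taken from the ranges in this article,
    # not from an official Reolink price list.
    monthly_prices = {
        "Classic (low end)": 6.00,
        "Classic (high end)": 7.00,
        "Upgraded (low end)": 15.00,
        "Upgraded (high end)": 25.00,
    }

    for tier, monthly in monthly_prices.items():
        print(f"{tier}: ${monthly:.2f}/month is about ${monthly * 12:.0f}/year")

Even the cheapest tier works out to roughly $72 to $84 a year, which is worth knowing before you buy a camera that depends on it.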
ChatGPT isn’t the place for your home address or personal info, either

Don’t let chatbots know too much about your personal info.
Vertigo3d via Getty

As the famous detective said, "Just one more thing." If you do ever query a chatbot about home security, never give it any personal information like your home address, your name, your living situation or any type of payment info. AIs like ChatGPT have had bugs before that allowed other users to spy on private data like that.
Additionally, LLM privacy policies can always be updated, or left vague enough to allow for profiling and the sale of the user data they collect. Scraping data from social media is bad enough; you really don’t want to hand personal details directly to a popular AI service.
Be careful what data you provide in a question and even how you phrase it, because there’s always someone eager to use that information. If you think you’ve already given your address out a few too many times online, we have a guide on how you can help fix that.
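
If you’re comfortable with a little code and still want to experiment with chatbots, one belt-and-suspenders habit is scrubbing obvious personal details out of a prompt before it ever leaves your machine. Here’s a minimal Python sketch of the idea; the patterns are rough examples of my own, they’ll miss plenty of formats, and no redaction script replaces simply leaving your address out of the question.

    import re

    # Rough patterns for obvious personal details. These are illustrative
    # examples only; they will miss many formats and are no substitute for
    # keeping personal info out of the prompt in the first place.
    PATTERNS = [
        (re.compile(r"\b\d{1,5}\s+\w+(?:\s\w+)*\s(?:St|Ave|Rd|Blvd|Ln|Dr)\b\.?", re.I), "[ADDRESS]"),
        (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    ]

    def scrub(prompt: str) -> str:
        """Replace anything matching the patterns above with a placeholder."""
        for pattern, placeholder in PATTERNS:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    print(scrub("Is the area around 142 Elm St safe? Reach me at 555-123-4567."))
    # Prints: Is the area around [ADDRESS] safe? Reach me at [PHONE].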

Read more: Your Private Data Is All Over the Internet. Here’s What You Can Do About It
For more, check out whether you should pay for more advanced ChatGPT features, read our in-depth review of Google Gemini and catch up on the latest on Apple Intelligence.
