How AI descriptions of some home security cameras are causing ‘problems’ for users in US

Artificial intelligence (AI) powered home security cameras are designed to provide more detailed alerts. However, some of them are causing confusion and anxiety among users in the US after incorrectly identifying routine events as emergencies, animals or potential intruders, a report claims.

Instead of sending standard motion notifications, several camera systems now use AI to generate descriptions of what they detect. While these alerts can sometimes identify specific objects or people accurately, users say they have also received warnings about fires, tornadoes, bears and suspicious individuals that turned out to be false alarms.

In a statement to the Wall Street Journal, Kristi Buckley, a federal contractor in Houston, said that one alert from her home camera warned that her neighbour’s house was on fire.

She rushed to check the footage only to find that the supposed flames were the red brake lights of a parked vehicle. On another occasion, the same system identified a reflection in a window as a “Tornado sighted.”

The incidents are raising questions about the reliability of AI-generated surveillance alerts as home security companies continue expanding such features.

Users report AI confusing pets, reflections and people with threats

Buckley told WSJ her Wyze camera has also described her dog’s interaction with a cat as involving a “ninja cat,” though she later found the footage showed “just a normal, non-ninja, black cat.”

Users across different brands have reported similar experiences. Cameras have reportedly tagged humans as bears and turkeys, while raccoons, dogs and even moving flags have been identified as bears. One corgi was reportedly labelled a pig.

“They’re astonishingly good at recognizing visual patterns, but they really don’t have any common sense,” said David Doermann, a computer-science and engineering professor at the University at Buffalo.

“That’s why it can be very impressive at one moment but completely wrong the next.”

Why AI alerts are becoming more common in home security systems

Home security companies are increasingly relying on AI-generated descriptions to improve their surveillance alerts. Instead of generic motion alerts, systems can now describe what they detect, such as a person’s clothing or a specific vehicle.

Buckley said she values her Wyze camera’s ability to recognise “that a bird is a blue jay, or that the person by a truck is ‘a man in a white shirt walking alongside a Ford-350.’”

However, she said inaccurate alerts continue to occur.

Wyze Labs co-founder and chief marketing officer Dave Crosby acknowledged limitations in the technology, saying: “But there are just so many billions of unique scenarios that the models need to learn before they can get absolutely everything right.”

The company introduced descriptive AI alerts in early 2025 and offers them as part of a $19.99 monthly subscription package.

Wyze says it has more than 13 million users.

How false AI alerts have led to concern among users

Tauf Chowdhury, a digital-transformation consultant in New York, received an AI alert from his Ring camera saying “a dark-colored bear is walking on the paved area.”

“I was like, oh, I hope it’s not a person, because a person could be the same proportion as a bear,” he said. “What the hell did it pick up?”

The supposed bear turned out to be a raccoon. Chowdhury later decided not to continue paying for the AI feature after his trial ended.

A Ring spokesperson said: “As with all AI-driven features, the descriptions may occasionally be imprecise,” adding that customer feedback is used to improve the AI models.

According to SafeHome.org, around 75 million homes in the US have security cameras. A recent survey cited by the company found that 28% of users already have AI person and package detection, while 39% expressed interest in facial recognition features, despite privacy concerns.

Meanwhile, users continue reporting unusual alerts. Florida resident Vanessa Soderstrom said her Blink camera warned of someone near her sliding glass door, which turned out to be her niece’s reflection.

“That really creeped me out,” Soderstrom said. Later, the system identified Soderstrom herself as “a brown bear” while she was cleaning outdoors wearing brown clothing. Another alert claimed “a person is jumping from the roof of a house,” which she said was again triggered by her own movement.

As AI surveillance tools become more common, companies are working to improve accuracy while balancing user expectations around reliability and safety.