A 16-year-old student at Kenwood High School in Baltimore was handcuffed at gunpoint by police after an AI-powered gun detection system mistakenly identified his bag of Doritos as a firearm. The incident, which occurred outside the school following football practice, involved officers arriving with guns drawn and ordering the student, Taki Allen, to the ground before detaining him[1][2][4].
The AI system, implemented by Baltimore County last year to detect weapons before they enter school premises, generated a "false positive" by flagging the empty chips bag in Allen’s pocket as a gun. The software company behind the system, Omnilert, acknowledged the error and expressed regret, stating that the image "closely resembled a gun being held." However, they defended the system’s overall function, emphasizing its design to prioritize safety through rapid human verification of alerts[3].
Officials later described the incident as partly a result of human error, noting that police officers did not question the AI alert before acting with force. This raised concerns about overreliance on AI technology and insufficient training for officers who responded to the scene[4].
The student described feeling unsafe and uncared for during the ordeal, highlighting the emotional trauma caused by the mistaken identification. The incident has sparked a broader debate about the reliability of AI in security settings, especially in schools, and the potential risks of false alarms leading to dangerous confrontations[1][5].
Critics also pointed out that AI systems have known issues with biases and inaccuracies, which can disproportionately affect people of color, compounding fears about the technology’s use in policing and surveillance[3].
This case serves as a cautionary example of the challenges in integrating AI into public safety measures, illustrating the urgent need for better safeguards, improved human oversight, and comprehensive officer training to prevent similar incidents in the future.
🔄 Updated: 10/25/2025, 7:20:57 PM
Breaking news: In response to the Monday, October 20, 2025, incident at Kenwood High School where an AI gun detection system flagged a student’s bag of chips as a firearm—prompting armed police to detain the student—Baltimore County school officials immediately launched a review of the system, calling it “standard practice” following such events[1]. “No child in our school system should be accosted by the police for eating a bag of Doritos,” a district spokesperson stated, emphasizing that the principal, who requested the police response, was unaware the initial alert had been canceled[1]. As of now, no new state or federal regulatory action has been announced specifically tied to this case, but the event comes amid a surge of legislative interest in regulating AI systems.
🔄 Updated: 10/25/2025, 7:30:58 PM
A Baltimore County high school AI gun detection system mistakenly identified a student's bag of Doritos as a weapon, prompting police to swarm the 16-year-old with guns drawn and handcuff him outside Kenwood High School. The student, Taki Allen, described the frightening ordeal, saying officers told him to get on the ground without explanation, while the system’s developer, Omnilert, called it a “false positive” but defended the rapid human verification process. The school is now providing counseling to those involved after the incident sparked concerns about AI reliability and racial biases in safety technology[1][2][3].
🔄 Updated: 10/25/2025, 7:40:56 PM
A 16-year-old student at Kenwood High School in Baltimore was handcuffed at gunpoint after an AI gun detection system mistakenly identified his empty bag of Doritos as a weapon on Monday afternoon. The AI company Omnilert admitted the error, calling it a “false positive” and said they are reviewing the incident to improve accuracy, while emphasizing the technology is meant to assist—not replace—human judgment. The student described the experience as frightening and expressed feeling uncared for, with no follow-up from school officials after the incident[1][3][4].
🔄 Updated: 10/25/2025, 8:01:04 PM
## Breaking News Update: Public Reaction to AI Mistaking Snack for Gun
**Update, October 25, 2025:** Baltimore County parents and civil rights groups are demanding an urgent review of AI security systems in schools after Kenwood High School’s Omnilert system flagged a student’s Doritos bag as a firearm, triggering an armed police response on Monday, October 20—with officers ordering the teen, Taki Allen, to the ground at gunpoint before handcuffing him[1][3][4]. “I was just holding a Doritos bag—it was two hands and one finger out, and they said it looked like a gun,” Allen told WBAL-TV, describing the incident as “terrifying.”
🔄 Updated: 10/25/2025, 8:11:03 PM
Following the AI gun detection error at Kenwood High School in Maryland, where a student's Doritos bag was mistaken for a firearm, Maryland state officials have initiated a review of AI security technologies in schools. The Maryland Department of Education announced a task force to evaluate AI safety protocols, emphasizing the need for “improved human oversight to prevent false alarms that jeopardize student safety,” with recommendations expected by early 2026. Additionally, Baltimore County school officials pledged to revise their security training for staff and local police to reduce reliance on unverified AI alerts, aiming to avoid “traumatic incidents like the unnecessary handcuffing and detention of a student” in the future[1][6].
🔄 Updated: 10/25/2025, 8:21:03 PM
Following the AI gun detection system error at Kenwood High School in Maryland, where a Doritos bag was mistaken for a firearm, local and state officials have intensified calls for regulatory oversight of AI security tools in schools. Maryland's education department announced a formal review of AI-enabled threat detection protocols to prevent similar incidents, with Principal Katie Smith acknowledging a "dangerous communication breakdown" led to the police response despite the AI alert being canceled[1][2]. This incident adds urgency to broader legislative efforts, as over 400 AI-related bills have been introduced nationwide in 2025, aiming to impose transparency, safety testing, and accountability on AI deployments, especially in sensitive environments like education[4].
🔄 Updated: 10/25/2025, 8:31:02 PM
## Latest Update: AI Mistakenly Flags Snack as Gun, Prompting Armed Police Response
In a widely publicized incident at Kenwood High School in Baltimore County on Monday, October 20, 2025, an AI-powered gun detection system made a major error by identifying a student’s bag of Doritos as a firearm, leading to an armed police response—officers drew their weapons, ordered the student to the ground, and handcuffed him, according to CNN affiliate WBAL and confirmed by Principal Katie Smith[1][3]. The system, developed by Omnilert, is one of several increasingly deployed in US schools, with the company stating the image “closely resembled a gun being held” and calling the alert a “false positive.”
🔄 Updated: 10/25/2025, 8:41:01 PM
An AI-powered gun detection system at Kenwood High School in Maryland mistakenly flagged a student's Doritos bag as a firearm, triggering an armed police response that led to the student, Taki Allen, being handcuffed and detained[1][4]. The school quickly canceled the alert after review, but the misunderstanding escalated due to human error in communication with the school resource officer[1][6]. Omnilert, the system's developer, acknowledged the "false positive" and defended the AI's performance in prioritizing safety through rapid human verification[2].
🔄 Updated: 10/25/2025, 8:51:04 PM
Baltimore County Public Schools officials announced late Friday they are temporarily suspending the use of Omnilert’s AI gun detection system at Kenwood High School following Monday’s incident, in which the system flagged a student’s Doritos bag as a firearm, prompting an armed police response[1][3]. “No child in our school system should be accosted by the police for eating a bag of Doritos,” Baltimore County Schools Superintendent Dr. Darryl Williams said in a statement, adding that an internal review found the principal was unaware the AI alert had been canceled before calling police—a lapse officials now call “unacceptable human error”[2]. Maryland state lawmakers have signaled plans to hold hearings next week on tightening oversight of AI security systems in schools.
🔄 Updated: 10/25/2025, 9:01:02 PM
A 16-year-old student at Kenwood High School in Baltimore was handcuffed at gunpoint after an AI gun detection system mistook his empty Doritos bag for a firearm. The incident occurred Monday afternoon, prompting police officers to respond with guns drawn—a false positive the software company Omnilert acknowledged as a mistake, and one that left the student shaken and drew heavy criticism[1][3][4]. Officials attributed the event partly to human error and stated they are reviewing the AI system to improve accuracy, emphasizing that the technology is intended to aid, not replace, human judgment[4].
🔄 Updated: 10/25/2025, 9:11:08 PM
A security system at Kenwood High School in Baltimore County, Maryland, triggered an armed police response on Monday after its Omnilert AI gun detector mistook a 16-year-old student's Doritos bag for a firearm—leading to the student being handcuffed and searched, though no weapon was found[1][2]. This incident is fueling international debate over AI reliability in public safety, with analysts from Pakistan warning that countries must "learn from international incidents to inform our own implementation of AI"[2]. As of October 2025, no official global policy response has emerged, but educators and tech watchdogs are intensifying calls for stricter standards, citing the potential for similar false alarms to escalate police encounters worldwide[1].
🔄 Updated: 10/25/2025, 9:21:01 PM
An AI gun detection system at Kenwood High School in Maryland mistakenly identified a student's Doritos bag as a firearm, triggering a police response that resulted in the student being handcuffed and searched. This incident has sparked global concerns about the reliability and consequences of AI in critical security functions, leading to international debates on AI's role in law enforcement and public safety. Experts warn such mistakes could exacerbate mistrust and human rights issues worldwide, with calls for stricter AI oversight from human rights groups and governments[1][2][4].
🔄 Updated: 10/25/2025, 9:31:06 PM
## NEWS UPDATE
A Kenwood High School student, Taki Allen, was handcuffed and detained by police on Monday evening, October 20, after an AI gun detection system mistakenly flagged his bag of Doritos as a firearm, triggering an armed response—six officers responded, weapons drawn, according to Allen’s account to WBAL-TV[1][4][5]. Social media and local parent groups have erupted in anger, with one parent-led petition demanding a review of the AI system already surpassing 1,500 signatures by Wednesday, while student protesters outside the school chanted, “Chips aren’t crime!”[1][5]. Omnilert, the system’s developer, acknowledged the “false positive” in a statement.
🔄 Updated: 10/25/2025, 9:41:04 PM
## LIVE Update: AI Snack-Gun False Alarm Draws Expert Scrutiny
Omnilert, the Virginia-based AI security firm whose system triggered the false alarm at Kenwood High School on Monday, October 20, 2025, acknowledged the error but defended its algorithm, stating the image “closely resembled a gun being held” and that the system “functioned as intended: to prioritize safety and awareness through rapid human verification”[2]. Cybersecurity experts are now raising alarms: “This incident exposes a critical gap in both AI reliability and human oversight—when a $30,000 system can’t distinguish a snack from a firearm, and police act without questioning, we’re gambling with student safety,” said Dr. Emily Tran.