Inside Chicago’s surveillance panopticon

MIT Technology Review
by Rod McCullom
February 23, 2026
AI-Generated Deep Dive Summary
On September 2, 2024, a mass shooting on a Chicago Transit Authority Blue Line train claimed four lives. Police responded by activating a digital dragnet, a network of thousands of surveillance cameras, that captured the suspect and led to his arrest roughly 90 minutes later. The incident highlights Chicago’s extensive surveillance apparatus: as many as 45,000 cameras, among the highest per capita in the U.S., along with advanced tools like license plate readers and access to security feeds from schools, parks, and transportation systems.

While law enforcement touts this vast network as an effective public safety tool, critics argue it amounts to a “surveillance panopticon” that infringes on privacy rights and disproportionately targets Black and Latino communities. Given Chicago’s history of excessive policing in those neighborhoods, critics contend such measures fail to address deeper structural problems, including job shortages, housing insecurity, and gaps in mental health services. The debate underscores the broader tension between security and civil liberties.

The use of AI-driven systems, such as ShotSpotter acoustic sensors, further fuels the controversy. Although designed to detect gunfire and speed response times, critics argue these tools are concentrated in marginalized communities and have been linked to incidents such as the fatal police shooting of a 13-year-old during a response to an alert. The resulting pushback has included successful campaigns to halt their use in Chicago. For readers interested in AI’s role in public safety, this story reveals both potential benefits, like faster crime response, and the ethical dilemmas tied to mass surveillance.