Howie Klein

Armed Police Department Robots Roaming Our Streets Shooting People? San Francisco Decides Today

Updated: Dec 5, 2022



Yesterday, Shahid Buttar called me to let me know that this afternoon, the San Francisco Board of Supervisors will vote on a proposal to arm local police robotic drones with lethal weapons, an obviously “authoritarian policy whose very suggestion reveals a disturbing political trend in our city that should shame every San Franciscan.” I feel shamed and I haven’t lived there in decades! I must admit, I’m not even 100% comfortable yet with those delivery drones all over L.A. these days... and I don’t even think they’re armed. The question, of course, is— and has been for a long time— does San Francisco even deserve its reputation as a “progressive” city? Shahid wrote about it on his Substack.


“Proposing to arm SFPD drones with lethal force,” he wrote, “is positively ignorant, especially when contrasted with the progress emerging across the rest of California. At the state level, California recently expanded a record-sealing policy, becoming among the first states in the nation to proactively seal criminal records after most defendants complete their sentences. Just last year, our state adopted several measures to curtail the use of force by human police officers. Giving police robots lethal weapons would senselessly move in the opposite direction.


We’ve all seen how often police get it wrong. Every innocent person killed by police proves the point, as does every example of state violence towards unarmed dissidents or journalists. Every innocent person executed by the state proves the point further, as does every prisoner exonerated after conviction for crimes they did not commit, often after losing years of their lives to biased so-called “justice.”
Especially in this context, arming robots— with even less discretion than human police officers— would be senseless and lazy, and would predictably lead to innocent people ending up in early graves. Too many innocent San Franciscans have already been killed by SFPD officers.
Police don’t just kill some innocent people and send others to prison. Recent examples from Uvalde, Texas to Colorado Springs, Colorado also reveal how little police actually do to protect public safety. Those particular events revealed police not only failing to take meaningful action in the face of a threat to public safety, but then counterproductively impeding and even detaining civilians who did the right thing.
On the one hand, police officers failing to intervene when innocent lives are threatened might seem like a reason to arm robots and deploy them instead. Policies to reduce risks to police officers in the face of potential violence are worth considering in the abstract, but applying that reasoning in any given instance requires ignoring that police are paid generous salaries—starting at over $100,000 in San Francisco—precisely because their jobs involve risk.
In fact, policing is far less risky than many other less-compensated lines of work, including construction, logging, delivery driving, mining, and agriculture. Government statistics also prove that policing has grown safer in recent years. If anyone is going to consider arming robots to reduce risks to human police officers, they might consider starting instead with robots to reduce the more substantial risk to laborers—many of whom are from precisely the communities most exposed to arbitrary profiling by police.
Arming drones makes even less sense when considering the costs, particularly to justice, the presumption of innocence, and communities of color long subjected to arbitrary police violence and profiling. It would be one thing if police didn’t already kill over 1,000 people every year and falsely arrest many more, reflecting massive racial disparities and biases. That stark reality, however, suggests that tomorrow’s proposal offers policymakers a chance to leap from a frying pan into a proverbial fire.
Facial recognition technology was presented to the public (by a domestic-facing military industrial complex) as a way to protect public safety. But the profound invasiveness of allowing the government to track everyone’s movements in public space threatens the public more than it could protect us.
That’s why the SF Board of Supervisors adopted the first policy in the country to prohibit the use of facial recognition technology by municipal agencies, including police, in 2019. The Supervisors adopted that policy for many good reasons, some of which I explained to the Board before their historic vote.
Proponents of tech-in-policing presume that public safety challenges are driven by inefficiency in enforcement, as distinct from more intrinsic problems like poverty, social crisis, and desperation. Proponents of tech-in-policing also ignore how technology effectively launders documented racial biases while pioneering new ways to violate fundamental rights. The unique threats posed by facial recognition were among the reasons the Board voted to restrain it in 2019.
…Why would elected leaders in a supposedly “progressive” city even consider a proposal so authoritarian as to subject residents to arbitrary execution without charge or trial?
Every story about the supposed “crime wave” sweeping San Francisco has reflected editorial decisions to privilege the perspectives of police and property owners over those of, for instance, San Franciscans subjected to arbitrary policing. This is not an abstract concern for the few Black residents who remain in our city after the vicious gentrification of the past generation (in which police played a key role beyond that of the housing market) shrank our city’s Black population to roughly a third of its previous peak.
The Board of Supervisors already voted earlier this year to water down its historic 2019 decision that curtailed police surveillance and expanded civilian oversight, opening the door for more reforms favoring the SFPD over justice and our communities.

If I still lived up there, I’d be calling my member of the board STAT to remind him or her that allowing police department robots to use lethal force:

  1. is unapologetically authoritarian, and profoundly out of step with the city’s commitments to civil liberties.

  2. indulges concerns about police safety while denigrating pressing concerns about community safety.

  3. surrenders the Board’s previous leadership on issues of public safety and civil liberties.

  4. invites an escalation of tech-washing that has long papered over (or entirely obscured) documented racial biases in law enforcement.

  5. compounds an already problematic history of our city marginalizing communities of color.

  6. repeats the errors of a federal drone policy that has already exacerbated international conflict while enabling hundreds (if not thousands) of preventable civilian deaths.


UPDATE:


The Board of Supervisors approved the deadly robot plan in an 8-3 vote. Oakland shelved the same proposal because of a strong public backlash. The Sacramento Bee reported that “Elizabeth Joh, a UC Davis School of Law professor and an expert in policing, privacy, and technology, said those critics are right to be concerned… She asked in what other situations police would seek permission to use lethal force robots. While the robotic technology in the hands of police today relies on slow-moving track treads, Joh wondered what will happen when law enforcement has the capability to use drones or four-legged robots to apply lethal force.”


A few months ago, a Russian chess robot (unarmed, thankfully) grabbed the finger of its seven-year-old opponent and broke it. “Played by humans, chess is a game of strategic thinking, calm concentration and patient intellectual endeavour. Violence does not usually come into it. The same, it seems, cannot always be said of machines. Last week, according to Russian media outlets, a chess-playing robot, apparently unsettled by the quick responses of a seven-year-old boy, unceremoniously grabbed and broke his finger during a match at the Moscow Open…

A Russian grandmaster, Sergey Karjakin, said the incident was no doubt due to ‘some kind of software error or something,’ adding: ‘This has never happened before. There are such accidents. I wish the boy good health.’

Christopher may have been lucky. While robots are becoming more and more sophisticated, with the most modern models capable not just of interacting but actively cooperating with humans, most simply repeat the same basic actions— grab, move, put down— and neither know nor care if people get in the way. According to one 2015 study, one person is killed each year by an industrial robot in the US alone. Indeed, according to the US occupational safety administration, most occupational accidents since 2000 involving robots have been fatalities.”


Call me crazy, but I hope the 8 San Francisco supervisors who voted to allow armed robots to kill people are run over by driverless Uber cars.


