Residents of cities like Detroit are getting fed up with police surveillance technology, and don't much care whether it decreases crime. Several digital rights advocacy groups are also weighing in, calling for bans on government use of facial recognition technology. There is a rejection of consequentialist or utilitarian arguments going on here: even if you argue "but crime is decreasing because of surveillance technology," and even if that claim turns out to be true, those opposed to its use often object on a deeper moral level.

But there are also powerful “policy objective” arguments against such technology, two of which have been cited by advocates of the California bill to ban police use of the tech: The first concern is that “facial recognition isn’t reliable enough to use without flagging high rates of innocent people for questioning or arrest.” The second is that “adding facial recognition to body cameras creates a surveillance tool out of a technology that was supposed to create more accountability and trust in police departments.”

Both of these arguments strike me as inarguably true. You can improve the technology, but it will always produce some false positives, and those false positives can do terrible damage to people's lives. And this widespread spying on people, far beyond even targeted surveillance in particular investigations, does not build trust between police and communities.
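To make the false positive point concrete, here is a rough back-of-the-envelope sketch. Every number in it (scan volume, error rates, the count of actual suspects in the crowd) is an assumption invented purely for illustration, not a figure from any real deployment; the point is only that when nearly everyone scanned is innocent, even a small error rate flags far more innocent people than suspects.

```python
# Illustrative sketch: base rates and face matching.
# All numbers below are hypothetical, chosen only to show the shape of the problem.

scans_per_day = 100_000        # assumed faces scanned by a city-wide system each day
true_suspects = 10             # assumed suspects actually present among those scanned
false_positive_rate = 0.001    # assumed: 0.1% of innocent faces wrongly matched
true_positive_rate = 0.95      # assumed: 95% of real suspects correctly matched

innocent_scans = scans_per_day - true_suspects
false_alarms = innocent_scans * false_positive_rate   # innocent people flagged
true_hits = true_suspects * true_positive_rate        # real matches

precision = true_hits / (true_hits + false_alarms)
print(f"Innocent people flagged per day: {false_alarms:.0f}")
print(f"Share of flags that are actually suspects: {precision:.1%}")
```

Under those made-up numbers, roughly a hundred innocent people get flagged every day, and fewer than one in ten flags points at an actual suspect.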

But I also think we should be careful to distinguish what we ask for, in this case a "ban" on the use of this technology, from what we are likely to get even if California and other states pass such laws. We will have to audit police procedures (and in the case of the police, that will mean community review) to ensure there is no surreptitious, illegal use of the tech, and that police do not simply procure results of the tech from other entities.

That seems obvious, but I'm not sure everyone gets it. "Imagine if we could go back in time and prevent governments around the world from ever building nuclear or biological weapons. That's the moment in history we're in right now with facial recognition," said Evan Greer, deputy director of Fight for the Future, in a statement. But of course, a ban on biological and nuclear weapons can't prevent their production altogether. Likewise, you can (and probably should) regulate, restrict, monitor, and ban police procedures and use of technology, but people and entities will still develop surveillance technology. Governments and bad-acting private entities will use it if they can get away with it, and "getting away with it" takes interesting forms in a world of high corruption.

Of course, that's not an argument against banning police use of the tech, but an argument for doing more: at a minimum, for also improving the conversation about technology and trust.

That conversation goes both ways, in that it will sometimes affirm a new technology even though it's imperfect, and reject another for perhaps doing its job too well. As far as saving lives goes, autonomous vehicles are probably more helpful than surveillance technology. They will certainly save millions of lives worldwide, although we can debate how many. Nevertheless, the media, and not just the media, focus on the crashes that may occur. It's easy, and correct, to respond as philosophy professor and essayist Ryan Muldoon does in The Conversation: "autonomous cars will have been a wild technology success even if they are in millions of crashes every year, so long as they improve on the 6.5 million crashes and 1.9 million people who were seriously injured in a car crash in 2017."

But that’s not always how people see it; there’s an intersubjective element to risk assessment, and understanding how people’s minds work is part of understanding how to apply data. That’s why 71 percent of Americans still don’t “trust” autonomous vehicles even in 2019. Learning more about risk is important, but taking democratic, deliberative control of risk management—including against an overly enthusiastic surveillance state—would be even better.