Facial recognition technology takes advantage of modern computers’ ultra-high speeds to filter through a database of images and help identify a person. It is commonly used on a small scale to let employees into their buildings, but it can also operate on a much larger scale, as in the movies when the audience waits in suspense while the CIA narrows its search for a suspect before dramatically revealing a match. That potential for large-scale surveillance, among other concerns, has made the technology deeply controversial, which is exactly why the newest generation of facial recognition has been put on the back burner, for now.
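To make that concrete, here is a minimal sketch of the one-to-many search such a system performs: a probe face is reduced to a numeric embedding and compared against every embedding in a gallery of enrolled identities, and the closest entry is reported as a match if it clears a similarity threshold. The gallery size, 128-dimensional embeddings, and threshold below are hypothetical placeholders, and the embeddings are random numbers standing in for the output of a trained face-recognition model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery: 10,000 enrolled identities, each represented by a
# 128-dimensional embedding (random placeholders for real model output).
gallery_ids = [f"person_{i}" for i in range(10_000)]
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit-normalize rows

def identify(probe_embedding, threshold=0.6):
    """Return (identity, score) for the best gallery match, or (None, score)
    if nothing clears the similarity threshold."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = gallery @ probe                 # cosine similarity against every entry
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return gallery_ids[best], float(scores[best])
    return None, float(scores[best])

# A random probe embedding stands in for the face captured by a camera.
match, score = identify(rng.normal(size=128))
print(match, round(score, 3))
```

The speed comes from the comparison being a single matrix-vector product, which scales to millions of entries; everything else about an operational system, from the embedding model to the choice of threshold, is far more involved than this sketch suggests.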

Major tech companies including IBM, Amazon, and Microsoft have agreed to stop selling facial recognition systems to law enforcement until legislation is passed governing how the technology can be used going forward. The debate centers squarely on human rights, a topic that is front and center amid the ongoing protests across the nation sparked by the murders of George Floyd, Breonna Taylor, Rayshard Brooks, and many other Black Americans at the hands of the police. For that reason, tech companies are insisting on well-defined regulations spelling out how and when police will be able to use facial recognition in the future.

Microsoft president Brad Smith commented: “We need Congress to act, not just tech companies alone.” It’s not the first time Smith has spoken out about the potential for misuse of the technology. Two years ago, in another blog post, he explained that “unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”

There are two main reasons these companies pulled back on plans to put the technology in the hands of police departments while insisting on legislative action. The first is the current focus on racial injustice and police brutality: the companies are taking a moral stance against both and trying to protect the public from them.

The second reason is that myriad reports have shown that commercial facial recognition algorithms are less accurate on darker skin tones, frequently misidentifying minorities and people of color. Since the goal of the technology is to make police work easier, not to generate false matches or reinforce inaccurate stereotypes, that’s a pretty big problem. Having individual privacy and discrimination protections in place not only safeguards the public, but insulates the companies providing the technology, too.
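One way to see why this matters: such disparities are typically quantified by computing the false match rate, the share of non-matching pairs the system incorrectly accepts, separately for each demographic group at the same decision threshold. The sketch below does exactly that with a handful of made-up similarity scores; the group labels, scores, and threshold are invented purely for illustration and do not come from any real evaluation.

```python
from collections import defaultdict

# Each record: (demographic group label, similarity score of a NON-matching pair).
# These values are fabricated for illustration only.
non_match_scores = [
    ("group_a", 0.41), ("group_a", 0.58), ("group_a", 0.33), ("group_a", 0.47),
    ("group_b", 0.63), ("group_b", 0.70), ("group_b", 0.52), ("group_b", 0.61),
]
THRESHOLD = 0.6  # same decision threshold applied to everyone

false_matches = defaultdict(int)
totals = defaultdict(int)
for group, score in non_match_scores:
    totals[group] += 1
    if score >= THRESHOLD:  # a non-matching pair accepted = a false match
        false_matches[group] += 1

for group in sorted(totals):
    fmr = false_matches[group] / totals[group]
    print(f"{group}: false match rate = {fmr:.2f}")
```

If one group’s false match rate comes out several times higher at the same threshold, members of that group bear a disproportionate risk of being wrongly flagged, which is precisely the disparity those reports describe.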

The reality is that, in general, technological innovation is far outpacing the legislation meant to govern it. For example, Amazon’s wildly popular Ring doorbells can also provide facial recognition information, and the company already partners with around 1,300 police departments across the nation. Although it’s ready to launch new products, Amazon announced this month that it would put a one-year hold on police use of its facial recognition technology in order to give lawmakers time to implement protections.

Also this month, IBM announced that it was no longer investing in the research and development of facial recognition technology. Instead, CEO Arvind Krishna publicly expressed an interest in working with lawmakers to advance social justice and police reform, stating, “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” He added, “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values.”