Microsoft ($MSFT) has banned U.S. police departments from using its enterprise AI tool for facial recognition

Microsoft has reaffirmed its ban on U.S. police departments using generative AI for facial recognition through Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI's technology.

Language added to Azure OpenAI Service's terms of service now explicitly prohibits integrations with the service from being used "by or for" police departments for facial recognition in the U.S. This covers integrations with OpenAI's current and any future image-analyzing models.

Another new addition to the terms covers "any law enforcement globally" and explicitly prohibits the use of "real-time facial recognition technology" on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments.

These policy changes come shortly after Axon, a company that manufactures technology and weapons for military and law enforcement, announced a new product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras. Critics raised concerns about potential pitfalls, such as hallucinations (even the best generative AI models today invent facts) and racial biases introduced by the training data (especially concerning given that people of color are more likely to be stopped by police than white individuals).

It is unclear if Axon was using GPT-4 via Azure OpenAI Service, and if so, whether the updated policy was in response to Axon's product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs.

The new terms still allow Microsoft some flexibility. The complete ban on Azure OpenAI Service usage applies only to U.S. police, not international law enforcement. Additionally, the mobile-camera restriction does not cover facial recognition performed with stationary cameras in controlled environments, such as a back office, though the terms prohibit any use of facial recognition by U.S. police.

This aligns with Microsoft's and OpenAI's recent approach to AI-related law enforcement and defense contracts. In January, it was reported that OpenAI is working with the Pentagon on various projects, including cybersecurity capabilities, marking a departure from the startup's earlier ban on providing its AI to militaries. Microsoft has also proposed using OpenAI's image generation tool, DALL-E, to help the Department of Defense (DoD) build software for military operations.