Microsoft bans police use of facial recognition technology.

Microsoft Implements Stricter Regulations for AI Use in Law Enforcement

Microsoft recently announced a ban on the use of its artificial intelligence service for facial recognition by police departments in the United States. The decision came as part of an update to the code of conduct for its Azure OpenAI Service. The company specified that integrations with the service must not be used for real-time facial recognition by any law enforcement agency globally.

Scope of the Ban

The ban extends to the use of facial recognition technology on various devices, including mobile cameras and dash-mounted cameras, for the purpose of identifying individuals in uncontrolled environments or matching them against a database of suspects or prior inmates. Microsoft’s move reflects a growing concern over the ethical implications and potential risks associated with the use of AI in law enforcement.

Azure OpenAI Service

The Azure OpenAI Service provides enterprise customers with access to OpenAI’s large language models (LLMs). Microsoft manages the service and limits its usage to customers with existing partnerships that focus on lower-risk applications. By imposing these restrictions, Microsoft aims to ensure responsible and ethical use of AI technologies in various domains.
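For context on what an Azure OpenAI integration looks like in practice, here is a minimal, hedged sketch using the official openai Python package (v1.x) against an Azure deployment. The endpoint, API key, API version, and deployment name are placeholders, not values taken from the article or from Microsoft's code of conduct.

```python
# Illustrative only: calling an Azure OpenAI chat deployment via the official
# "openai" Python package (v1.x). All credentials and names are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder endpoint
    api_key="<your-api-key>",                                   # placeholder key
    api_version="2024-02-01",                                   # example API version
)

response = client.chat.completions.create(
    model="gpt-4",  # the name of *your* Azure deployment, not the base model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the acceptable-use terms for this service."},
    ],
)

print(response.choices[0].message.content)
```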

Controversy Surrounding AI-Powered Police Reports

Last week, Axon, a company specializing in technology for law enforcement, unveiled an AI-powered software program called Draft One. The product uses OpenAI's GPT-4 to generate police report narratives from audio recorded by police body cameras. While Axon touts Draft One as a groundbreaking tool for law enforcement, critics have expressed concerns about its potential consequences.
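To make the automation concrete, here is a minimal sketch of the general transcribe-then-draft pattern described above, written with OpenAI's public Python SDK. It is not Axon's Draft One implementation; the file name, prompt wording, and model identifiers are assumptions made for the example.

```python
# Illustrative sketch of a "transcribe, then draft" pipeline.
# NOT Axon's Draft One code; all inputs below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe body-camera audio with a speech-to-text model.
with open("bodycam_audio.mp3", "rb") as audio_file:  # placeholder file name
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Ask GPT-4 to turn the transcript into a draft incident narrative.
draft = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Draft a factual incident report narrative from the "
                       "following transcript. Flag anything uncertain for "
                       "human review.",
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(draft.choices[0].message.content)
```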

Some experts warn that AI's tendency to generate false or nonsensical information could lead to inaccurate reporting and legal complications. Dave Maass from the Electronic Frontier Foundation raised doubts about the effectiveness of AI tools in the hands of untrained law enforcement personnel, emphasizing the need for adequate training and understanding of AI technologies to prevent misuse and misinterpretation.

Implications of Microsoft’s Code of Conduct

While the timing of Microsoft’s updated code of conduct coincides with the release of Axon’s AI-powered software, it is essential to note that the company’s decision may have broader implications for the future of AI ethics in law enforcement. By setting clear guidelines and restrictions on the use of AI technology in sensitive areas like facial recognition, Microsoft is taking a proactive stance towards ensuring responsible AI deployment.
