Microsoft releases inaugural report on responsible AI practices


Microsoft’s Responsible Artificial Intelligence Practices

Microsoft recently published its inaugural Responsible AI Transparency Report, detailing its responsible artificial intelligence practices over the past year. The report highlighted the release of 30 responsible AI tools that together include more than 100 features designed to help customers develop AI ethically and responsibly.

Microsoft’s Responsible AI Transparency Report centers on the company’s commitment to building, supporting, and advancing AI products responsibly. The report follows through on the voluntary commitments Microsoft made to the White House in July. Microsoft also noted that its responsible AI team grew from 350 to over 400 members in the second half of the previous year, an increase of 16.6%.

Commitment to Transparency

Brad Smith, vice chair and president of Microsoft, along with Natasha Crampton, chief responsible AI officer, expressed a strong stance on the importance of sharing evolving practices with the public. They articulated, “As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve. This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust.”

Microsoft underscored that its responsible AI tools are designed to map and measure AI risks, then manage them through mitigations, real-time detection and filtering, and continuous monitoring. As part of these efforts, Microsoft recently introduced an open-access red-teaming tool called the Python Risk Identification Tool (PyRIT), which helps security professionals and machine learning engineers identify potential risks in their generative AI products.

Advancements in Responsible AI Tools

To foster innovation in responsible AI practices, Microsoft rolled out a series of generative AI evaluation tools in Azure AI Studio, the platform on which its customers can build their own generative AI models. Customers can use these tools to evaluate their models on fundamental quality metrics, including groundedness, or how well a model’s generated responses align with its source material.

Moreover, to address safety risks in generative AI models, Microsoft expanded this toolset in March with content safety evaluations covering hateful, violent, sexual, and self-harm content. Microsoft also emphasized defenses against jailbreaking methods such as prompt injection, which could otherwise lead a language model to leak sensitive information or spread misinformation.

Despite these concerted efforts, Microsoft’s responsible AI team faced several AI model incidents over the past year. In one instance, Microsoft’s Copilot AI chatbot allegedly gave a troubling response to a user who asked about suicide. In another, Microsoft’s Bing image generator allowed users to create inappropriate images, raising serious concerns.

Smith and Crampton wrote in the report, “There is no finish line for responsible AI. And while this report doesn’t have all the answers, we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices.” The statement underscores Microsoft’s commitment to transparency, accountability, and trust in artificial intelligence.


About Post Author

Chris Jones

Hey there! 👋 I'm Chris, 34 yo from Toronto (CA), I'm a journalist with a PhD in journalism and mass communication. For 5 years, I worked for some local publications as an envoy and reporter. Today, I work as 'content publisher' for InformOverload. 📰🌐 Passionate about global news, I cover a wide range of topics including technology, business, healthcare, sports, finance, and more. If you want to know more or interact with me, visit my social channels, or send me a message.