In May 2024, we launched our inaugural Responsible AI Transparency Report. We're grateful for the feedback we received from our stakeholders around the world. Their insights have informed this second annual Responsible AI Transparency Report, which underscores our continued commitment to building AI technologies that people trust. Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve.
The past year has seen a wave of AI adoption by organizations of all sizes, prompting a renewed focus on effective AI governance in practice. Our customers and partners are eager to learn how we have scaled our program at Microsoft and developed tools and practices that operationalize high-level norms.
Like us, they've found that building trustworthy AI is good for business, and that good governance unlocks AI opportunities. According to IDC's Microsoft Responsible AI Survey, which gathered insights on organizational attitudes and the state of responsible AI, more than 30% of respondents cited the lack of governance and risk management solutions as the top barrier to adopting and scaling AI. Conversely, more than 75% of respondents who use responsible AI tools for risk management said those tools have helped with data privacy, customer experience, confident business decisions, brand reputation, and trust.
We've also seen new regulatory efforts and laws emerge over the past year. Because we've invested in operationalizing responsible AI practices at Microsoft for nearly a decade, we're well prepared to comply with these regulations and to empower our customers to do the same. Our work here is not done, however. As we detail in the report, efficient and effective regulation and implementation practices that support the adoption of AI technology across borders are still being defined. We remain focused on contributing our practical insights to standard- and norm-setting efforts around the world.
Across all these facets of governance, it's essential to remain nimble in our approach, applying learnings from our real-world deployments, updating our practices to reflect advances in the state of the art, and ensuring that we're responsive to feedback from our stakeholders. Learnings from our principled and iterative approach are reflected in the pages of this report. As our governance practices continue to evolve, we'll proactively share fresh insights with our stakeholders, both in future annual transparency reports and in other public settings.
Key takeaways from our 2025 Transparency Report
In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.
- We improved our responsible AI tooling to provide expanded risk measurement and mitigation coverage for modalities beyond text (such as images, audio, and video) and additional support for agentic systems, the semi-autonomous systems that we expect will represent a significant area of AI investment and innovation in 2025 and beyond.
- We took a proactive, layered approach to compliance with new regulatory requirements, including the European Union's AI Act, and provided our customers with resources and materials that empower them to innovate in line with relevant regulations. Our early investments in building a comprehensive and industry-leading responsible AI program positioned us well to shift our AI regulatory readiness efforts into high gear in 2024.
- We continued to apply a consistent risk management approach across releases through our pre-deployment review and red teaming efforts. This included oversight and review of high-impact and higher-risk uses of AI and generative AI releases, including every flagship model added to the Azure OpenAI Service and every Phi model release. To further support responsible AI documentation as part of these reviews, we launched an internal workflow tool designed to centralize the various responsible AI requirements outlined in the Responsible AI Standard.
- We continued to provide hands-on counseling for high-impact and higher-risk uses of AI through our Sensitive Uses and Emerging Technologies team. Generative AI applications, especially in fields like healthcare and the sciences, were notable growth areas in 2024. By gleaning insights across cases and engaging researchers, the team provided early guidance for novel risks and emerging AI capabilities, enabling innovation and incubating new internal policies and guidelines.
- We continued to draw on insights from research to inform our understanding of sociotechnical issues related to the latest developments in AI. We established the AI Frontiers Lab to invest in the core technologies that push the frontier of what AI systems can do in terms of capability, efficiency, and safety.
- We worked with stakeholders around the world to make progress toward building coherent governance approaches that help accelerate adoption and allow organizations of all kinds to innovate and use AI across borders. This included publishing a book exploring governance across various domains and helping to advance cohesive standards for testing AI systems.
Looking ahead to the second half of 2025 and beyond
As AI innovation and adoption continue to advance, our core objective remains the same: earning the trust that we see as foundational to fostering broad and beneficial AI adoption around the world. As we continue that journey over the next year, we'll focus on three areas to advance our steadfast commitment to AI governance while ensuring that our efforts are responsive to an ever-evolving landscape:
- Developing more flexible and agile risk management tools and practices, while fostering skills development to anticipate and adapt to advances in AI. To ensure people and organizations around the world can leverage the transformative potential of AI, our ability to anticipate and manage the risks of AI must keep pace with AI innovation. This requires us to build tools and practices that can quickly adapt to advances in AI capabilities and to the growing variety of deployment scenarios, each with its own risk profile. To do this, we will make greater investments in our systems of risk management to provide tools and practices for the most common risks across deployment scenarios, and to enable the sharing of test sets, mitigations, and other best practices across teams at Microsoft.
- Supporting effective governance across the AI supply chain. Building, earning, and keeping trust in AI is a collaborative endeavor that requires model developers, app builders, and system users to each contribute to trustworthy design, development, and operations. AI regulations, including the EU AI Act, reflect this need for information to flow across supply chain actors. While we embrace this concept of shared responsibility at Microsoft, we also recognize that pinning down how responsibilities fit together is complicated, especially in a fast-changing AI ecosystem. To help advance shared understanding of how this can work in practice, we're deepening our work internally and externally to clarify roles and expectations.
- Advancing a vibrant ecosystem through shared norms and effective tools, particularly for AI risk measurement and evaluation. The science of AI risk measurement and evaluation is a growing but still nascent field. We're committed to supporting the maturation of this field by continuing to invest within Microsoft, including in research that pushes the frontiers of AI risk measurement and evaluation and in the tooling needed to operationalize it at scale. We remain committed to sharing our latest advances in tooling and best practices with the broader ecosystem to support the development of shared norms and standards for AI risk measurement and evaluation.
We look forward to hearing your feedback on the progress we have made and on opportunities to collaborate on all that is still left to do. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead.
Explore the 2025 Responsible AI Transparency Report.