By Dale Jose

ARTIFICIAL INTELLIGENCE (AI) is one of the most rapidly growing and most talked-about technologies, and its momentum shows no signs of slowing down. With recent rapid developments, AI is on the path to becoming the most definitive and transformative technology of this generation. It is crucial for all sectors to accelerate the work of managing it.

In Microsoft’s report, Governing AI: A Blueprint for the Future, we offer some ideas and suggestions. These suggestions build on the lessons we’ve been learning from the work we’ve been doing for several years. We’ve defined, published, and implemented ethical principles to guide our work. We’ve built out, and constantly improved, engineering and governance systems to put these principles into practice. Today, we have about 350 people working on responsible AI at Microsoft, helping us implement best practices for building safe, secure, and transparent AI systems designed to benefit society.

NEW OPPORTUNITIES TO IMPROVE THE HUMAN CONDITION

The resulting advances in our approach have given us the capability and confidence to see ever-expanding ways for AI to improve people’s lives. We’ve seen AI help push progress in multiple fields and sectors. Other AI innovations are fending off cyberattacks and helping to protect fundamental human rights, even in nations afflicted by foreign invasion or civil war.

AI offers even more potential for the good of humanity than any invention that has preceded it. In fact, AI offers a new tool to genuinely help advance human learning and thought.

THE GUARDRAILS FOR THE FUTURE

While AI opens a world of new possibilities to help improve people’s lives, we must also be ready for the challenges that lie ahead with this new and evolving technology.

As technology moves forward, it’s just as important to ensure proper control over AI as it is to pursue its benefits. We are committed to developing and deploying AI in a safe and responsible way. We also recognize, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.

When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else: accountability. This is the fundamental need: to ensure that machines remain subject to effective oversight by people, and that the people who design and operate machines remain accountable to everyone else. In short, we must always ensure that AI remains under human control. This must be a first-order priority for technology companies and governments alike.

People who design and operate AI systems cannot be accountable unless their decisions and actions are subject to the rule of law. In many ways, this is at the heart of the unfolding AI policy and regulatory debate. How do governments best ensure that AI is subject to the rule of law? In short, what form should new law, regulation, and policy take?

A FIVE-POINT BLUEPRINT FOR THE PUBLIC GOVERNANCE OF AI

In Section One of Governing AI: A Blueprint for the Future, we offer a five-point framework to address several current and emerging AI issues through public policy, law, and regulation. We offer this recognizing that every part of this blueprint will benefit from broader discussion and require deeper development, but we hope it can contribute constructively to the work ahead.

First, implement and build upon new government-led AI safety frameworks. The best way to succeed is often to build on the successes and good ideas of others. In this instance, there is an important opportunity to build on work completed just four months ago by the US National Institute of Standards and Technology (NIST), which launched a new AI Risk Management Framework. We offer four concrete suggestions for implementing and building upon this framework. We also believe the administration and other governments can accelerate momentum through procurement rules based on it.

Second, require effective safety brakes for AI systems that control critical infrastructure. This blueprint proposes new safety requirements that would create safety brakes for AI systems that control the operation of designated critical infrastructure. These fail-safe systems would be part of a comprehensive approach to system safety that would keep effective human oversight, resilience, and robustness top of mind.

Third, develop a broad legal and regulatory framework based on the technology architecture for AI. We believe the law will need to place various regulatory responsibilities on different actors based on their roles in managing different aspects of AI technology. For this reason, this blueprint includes information about some of the critical pieces that go into building and using new generative AI models.

Fourth, promote transparency and ensure academic and nonprofit access to AI. We believe a critical public goal is to advance transparency and broaden access to AI resources, particularly for academic research and the nonprofit community.

Fifth, pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs. It is crucial to develop concrete initiatives and bring organizations from all sectors together to advance them. We offer some initial ideas in this report, and we look forward to doing much more moving forward.

GOVERNING AI WITHIN MICROSOFT

Every organization that creates or uses advanced AI systems will need to develop and implement its own governance systems. In Section Two of Governing AI: A Blueprint for the Future, we describe the AI governance system within Microsoft: where we began, where we are today, and how we are moving into the future.

When it comes to AI, we first developed ethical principles and then had to translate these into more specific corporate policies. We’re now on the second version of the corporate standard that embodies these principles and defines more precise practices for our engineering teams to follow. We’ve implemented the standard through training, tooling, and testing systems that continue to mature rapidly. This is supported by additional governance processes that include monitoring, auditing, and compliance measures.

We are on a collective journey to forge a responsible future for artificial intelligence. We can all learn from each other. And no matter how good we may think something is today, we will all need to keep getting better. As technological change accelerates, the work to govern AI responsibly must keep pace with it. With the right commitments and investments, we believe it can.

Dale Jose is the National Technology and Security Officer of Microsoft Philippines. Microsoft (Nasdaq: MSFT, @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
