Google has published a set of fuzzy but otherwise admirable “AI principles” explaining the ways it will and won’t deploy its considerable clout in the domain. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” wrote CEO Sundar Pichai.
The principles follow several months of low-level controversy surrounding Project Maven, a contract with the U.S. military that involved image analysis of drone footage. Some employees opposed the work and even quit in protest, but the dispute was really a microcosm of broader anxiety about AI at large and how it can and should be employed.
Consistent with Pichai’s assertion that the principles are binding, Google Cloud CEO Diane Greene confirmed today in another post what was rumored last week, namely that the contract in question will not be renewed or followed with others. Left unaddressed are reports that Google was using Project Maven as a means to achieve the security clearance required for more lucrative and sensitive government contracts.
The principles themselves are as follows, with relevant portions quoted from their descriptions:
- Be socially beneficial: Take into account a broad range of social and economic factors, and proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides…while continuing to respect cultural, social, and legal norms in the countries where we operate.
- Avoid creating or reinforcing unfair bias: Avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
- Be built and tested for safety: Apply strong safety and security practices to avoid unintended results that create risks of harm.
- Be accountable to people: Provide appropriate opportunities for feedback, relevant explanations, and appeal.
- Incorporate privacy design principles: Give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
- Uphold high standards of scientific excellence: Work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches…responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
- Be made available for uses that accord with these principles: Limit potentially harmful or abusive applications. (Scale, uniqueness, primary purpose, and Google’s role to be factors in evaluating this.)
In addition to stating what the company will do, Pichai also outlines what it won’t do. Specifically, Google will not pursue or deploy AI in the following areas:
- Technologies that cause or are likely to cause overall harm. (Subject to risk/benefit analysis.)
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
(No mention of being evil.)
In the seven principles and their descriptions, Google leaves itself considerable leeway with the liberal application of words like “appropriate.” What is an “appropriate” opportunity for feedback? What is “appropriate” human direction and control? How about “appropriate” safety constraints?
One could argue that it is too much to expect hard rules along these lines on such short notice, but this is not in fact short notice: Google has been a leader in AI for years and has had a great deal of time to establish more than principles.
For instance, its promise to “respect cultural, social, and legal norms” has surely been tested in many ways. Where can we see where those norms have been overridden in practice, or where Google policy has bent to accommodate the demands of a government or religious authority?
And in the promise to avoid creating bias and to be accountable to people, surely (based on Google’s existing work here) there is something specific to say? For instance, that any Google-involved system whose outcomes turn on sensitive data or categories will be fully auditable and open to public scrutiny?
The ideas here are praiseworthy, but AI’s applications are not abstract; these systems are being used today to determine deployments of police forces, or choose a rate for home loans, or analyze medical data. Real rules are needed, and if Google really intends to keep its place as a leader in the field, it must establish them, or, if they are already established, publish them prominently.
In the end it may be the shorter list of things Google won’t do that proves more restrictive. Although the use of “appropriate” in the principles gives the company room for interpretation, that same vagueness cuts the other way in its definitions of forbidden pursuits: broad readings by watchdogs of phrases like “likely to cause overall harm” or “internationally accepted norms” may make Google’s own rules unexpectedly prohibitive.
“We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time,” wrote Pichai. We will soon see the extent of that willingness.
from TechCrunch https://ift.tt/2M56pXU