The Information Technology Industry Council (ITI), of which Apple is a member, today shared a new document outlining AI Policy Principles for the tech industry, governments, and public-private partnerships, along with projections of how valuable AI could become in the near future.
The five-page document seeks to “ensure that AI can deliver its greatest positive potential” and contains three main sections: Industry’s Responsibility in Promoting Responsible Development and Use, The Opportunity for Governments to Invest In and Enable the AI Ecosystem, and The Opportunity for Public-Private Partnerships (PPPs).
ITI cites projections that AI will create more than $60 billion in yearly value in the U.S. by 2020, and generate between $7 trillion and $13 trillion globally by 2025.
As it evolves, we take our responsibility seriously to be a catalyst for preparing for an AI world, including seeking solutions to address potential negative externalities and helping to train the workforce of the future.
Here’s what the document lays out for the responsibilities of the tech industry:
Responsible Design and Deployment:
We recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people and society are amazing, AI researchers, subject matter experts, and stakeholders should and do spend a great deal of time working to ensure the responsible design and deployment of AI systems. Highly autonomous AI systems must be designed consistent with international conventions that preserve human dignity, rights, and freedoms. As an industry, it is our responsibility to recognize potentials for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.
Other components for the industry include: Safety and Controllability, Robust and Representative Data, Interpretability, and Liability of AI Systems Due to Autonomy.
As for principles for the government, the document highlights the need for flexibility:
Flexible Regulatory Approach:
We encourage governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI. As applications of AI technologies vary widely, overregulating can inadvertently reduce the number of technologies created and offered in the marketplace, particularly by startups and smaller businesses. We encourage policymakers to recognize the importance of sector-specific approaches as needed; one regulatory approach will not fit all AI applications. We stand ready to work with policymakers and regulators to address legitimate concerns where they occur.
Other topics include Investment in AI Research and Development, Promoting Innovation and the Security of the Internet, Cybersecurity and Privacy, and Global Standards and Best Practices.
The last section addresses the opportunity for public-private partnerships with a focus on education and equal opportunity.
Democratizing Access and Creating Equality of Opportunity:
While AI systems are creating new ways to generate economic value, if the value favors only certain incumbent entities, there is a risk of exacerbating existing wage, income, and wealth gaps. We support diversification and broadening of access to the resources necessary for AI development and use, such as computing resources, education, and training, including opportunities to participate in the development of these technologies.
Public-Private Partnership:
PPPs will make AI deployments an attractive investment for both government and private industry, and promote innovation, scalability, and sustainability. By leveraging PPPs – especially between industry partners, academic institutions, and governments – we can expedite AI R&D and prepare our workforce for the jobs of the future.
This section also details STEM Education and Workforce opportunities. The full AI Policy Principles can be found here.
Recent initiatives by Apple have made its AI efforts more public. The company launched its machine learning journal back in July, after first announcing that it would allow researchers to share findings at the end of last year. As it happens, Apple won a prestigious award for the first white paper it published. Last week, an entry in its machine learning journal shared a fascinating look at how ‘Hey Siri’ works.