The US, UK, and 16 other countries have signed an agreement pledging to take steps to make AI “secure by design.”
Although the agreement is acknowledged to be a basic statement of principles, the US Cybersecurity and Infrastructure Security Agency (CISA) has said that it's an important first step.
Reuters reports:
The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design.”
In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
CISA director Jen Easterly said that it was important that countries recognize that AI development needs a safety-first approach, and encouraged other countries to sign up.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”
The other countries to sign up so far are Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
Europe has a head start in this area, with an attempt to create specific laws governing the development and release of new AI systems – including a legal requirement for companies to carry out regular security testing to identify potential vulnerabilities. However, progress has been slow, leading France, Germany, and Italy to proceed with an interim agreement of their own.
The White House has urged Congress to develop AI regulation in the US, but little progress has been made to date. President Biden last month signed an executive order requiring AI companies to conduct safety tests, mostly geared to protecting systems from hackers.
Apple’s use of AI
Apple has incorporated AI features into its products for many years, most notably in the area of iPhone photography. The company has developed its own chatbot – dubbed Apple GPT – but is so far only using it within the company, likely because it wants to take advantage of generative AI features for software development without compromising product security.
Given the company’s typically cautious approach to new tech, it’s likely to be some time before Apple releases anything like this to its customers.
9to5Mac’s Take
Creating laws intended to ensure the safety and security of new AI systems is incredibly difficult.
The very nature of AI systems – which develop their own capabilities, rather than being specifically programmed to do or not do certain things – means that even researchers working on a project may not be fully aware of what a new AI model can achieve until it is already complete.
Researchers also commonly disagree about what those capabilities are, and what they might mean for the future.
This 20-page agreement is extremely basic, more a statement of general principles than a blueprint, but given the challenges involved, it is at least a reasonable starting point. It establishes that companies developing AI systems have an obligation to specifically look for security vulnerabilities.
However, it’s important to note that the initiative is solely concerned with how hackers might take advantage of AI systems. It does not address the much broader – and bigger – question of how AI systems might themselves pose a threat to humanity.