There’s growing recognition of the threats we may face as companies push further and faster with artificial intelligence. Responding to this, the US Justice Department has appointed its first-ever federal law enforcement officer focused on AI …
US Justice Department appoints AI officer
Reuters reports the appointment of a Princeton University professor to the role.
Jonathan Mayer, a professor at Princeton University who researches technology and law, will serve as chief science and technology adviser and chief AI officer, the department said.
“The Justice Department must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe and protect civil rights,” Attorney General Merrick Garland said in a statement.
The risk of using AI to assist in criminal activities like fraud, and even engineering dangerous biochemicals, was recently highlighted by the White House. Apple was one of more than 200 companies and other organisations to respond to a presidential executive order to help address the problem.
The DoJ says that Mayer will be tasked with both sides of AI: its use by criminals, but also its use by the US to counter threats from terrorists and nation states.
Deputy Attorney General Lisa Monaco said the technology could help the United States detect and disrupt terror plots and hostile actions from US adversaries. But she said the department is also concerned about its potential to amplify existing biases, tamper with elections and create new opportunities for cyber criminals.
“Every new technology is a double-edged sword, but AI may be the sharpest blade yet,” Monaco said.
Call for action on deepfakes
Separately, Mashable reports that hundreds of academics, politicians, and tech leaders have signed an open letter expressing concern about the risks of so-called deepfakes: AI-generated fake videos which can make anyone appear to say anything.
In addition to the obvious scam risks, there is concern about the use of deepfakes to influence elections by making candidates appear to make damaging statements or take offensive actions. Additionally, there are concerns that even completely fictional AI-generated CSAM can normalize the abuse of real children.
The open letter – which anyone can sign – calls for legislation to address the issues.
New laws should:
- Fully criminalize deepfake child pornography, even when only fictional children are depicted;
- Establish criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes; and
- Require software developers and distributors to prevent their audio and visual products from creating harmful deepfakes, and to be held liable if their preventive measures are too easily circumvented.
Photo by Kanchanara on Unsplash