AC Global Risk, a startup which has been described as selling ‘dangerous junk science,’ claims Apple is one of its clients …
The Intercept reports that the firm claims that simply analysing someone’s voice as they give yes/no answers to questions can, in ten minutes, determine their level of risk – whether that’s as an employee or an asylum seeker.
The California-based company offers an automated screening system known as a Remote Risk Assessment, or RRA. Here’s how it works: Clients of AC Global Risk help develop automated, yes-or-no interview questions. The group of people selected for a given screening then answer these simple questions in their native language during a 10-minute interview that can be conducted over the phone. The RRA then measures the characteristics of their voice to produce an evaluation report that scores each individual on a spectrum from low to high risk. CEO Alex Martin has said that the company’s proprietary risk analysis can “forever change for the better how human risk is measured.”
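To make the shape of such a claim concrete, here is a minimal sketch in Python of what a voice-based risk scorer of this general kind could look like. Everything in it – the acoustic features, the weights, the cut-offs – is invented purely for illustration; AC Global Risk’s actual method is proprietary and undisclosed, so this is not its algorithm, just the bare skeleton of the category.

```python
# Hypothetical sketch of a voice-feature "risk" scorer of the general shape
# the article describes. All features, weights, and thresholds below are
# invented for illustration; nothing here reflects AC Global Risk's actual,
# undisclosed method.

import numpy as np


def acoustic_features(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return a crude feature vector: RMS energy, zero-crossing rate,
    and frame-to-frame energy variability (a rough proxy for the
    'dynamic changes in the voice' experts mention)."""
    rms = np.sqrt(np.mean(samples ** 2))
    # Approximate zero-crossing rate from sign changes between samples.
    zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2
    # Split into 20 ms frames and measure how much energy fluctuates.
    frame = sample_rate // 50
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    frame_energy = np.sqrt(np.mean(frames ** 2, axis=1))
    variability = np.std(frame_energy) / (np.mean(frame_energy) + 1e-9)
    return np.array([rms, zcr, variability])


def risk_score(answers: list[np.ndarray], sample_rate: int = 8000) -> str:
    """Average the features across all recorded yes/no answers and bucket
    the result. The weights and cut-offs are arbitrary placeholders."""
    feats = np.mean([acoustic_features(a, sample_rate) for a in answers], axis=0)
    weights = np.array([0.2, 0.3, 0.5])  # invented, not validated
    score = float(feats @ weights)
    if score < 0.3:
        return "low risk"
    if score < 0.6:
        return "average risk"
    return "high risk"


# Example run: five ten-second "answers" of synthetic noise standing in
# for recorded speech.
rng = np.random.default_rng(0)
answers = [rng.normal(0, 0.1, 8000 * 10) for _ in range(5)]
print(risk_score(answers))
```

The sketch is the experts’ objection in miniature: the acoustic features are real and measurable, but the step that maps them onto ‘low’ or ‘high’ risk is a design choice, and nothing in the audio itself validates it.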
The company claims that its analyses are ‘highly accurate,’ but academics say that while the concept is in principle plausible, the approach cannot be considered reliable.
Some of the leading experts in vocal analytics, algorithmic bias, and machine learning find the trend toward digital polygraph tests troubling, pointing to the faulty methodology of companies like AC Global Risk. “There is some information in dynamic changes in the voice and they’re detecting it. This is perfectly plausible,” explained Alex Todorov, a Princeton University psychologist who studies the science of social perception and first impressions. “But the question is, How unambiguous is this information at detecting the category of people they’ve defined as risky? There is always ambiguity in these kinds of signals.”
Many experts – including the man described as the leading expert in the field – go much further.
Several leading audiovisual experts who reviewed AC Global Risk’s publicly available materials for The Intercept used the word “bullshit” or “bogus” to describe the company’s claims. “From an ethical point of view, it’s very dubious and shady to give the impression that recognizing deception from only the voice can be done with any accuracy,” said Björn Schuller, a professor at the University of Augsburg who has led the field’s major academic challenge event to advance the state of the art in vocal emotion detection. “Anyone who says they can do this should themselves be seen as a risk.”
AC Global Risk does not specify how Apple uses the technology.