Trump administration wants racist AI for ‘Extreme Vetting Initiative’
It’s becoming increasingly evident the Trump administration doesn’t understand technology, or perhaps fears and hates it. The seemingly imminent abrogation of net neutrality, and the administration’s quest to build AI for its “Extreme Vetting Initiative,” lend credibility to the theory that we’re being led by Luddites.
US Immigration and Customs Enforcement (ICE) this June sent out a letter detailing an initiative to “obtain contractor services to establish an overarching vetting contract that automates, centralizes and streamlines the current manual vetting process while simultaneously making determinations via automation if the data retrieved is actionable.”
According to the letter, the current methodology ICE is forced to use doesn’t provide enough “high-value derogatory information to further investigations or support any prosecution by ICE or US attorneys in immigration or federal courts.”
The humans at ICE, in short, would like someone in the technology sector to create a machine learning system that data-mines for information it can use to prosecute or deny entry to immigrants: the very definition of a biased AI.
This endeavor would probably involve a deep learning network capable of drawing correlations between disparate datasets. To train such a network, there’s a pretty good chance DHS or ICE would set a specific goal – a target number of people flagged from the regions it wishes to subject to “extreme vetting.”
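To see why a quota-driven target is a problem, consider a toy sketch (not any real DHS or ICE system – the countries, features, and numbers below are entirely hypothetical). If the training labels are assigned to hit a per-country quota rather than to reflect behavior, even the simplest model learns nationality instead of anything predictive:

```python
# Toy illustration of how a quota-driven labeling target bakes bias
# into a trained model. All data here is synthetic and hypothetical.
from collections import defaultdict

# Synthetic "vetting" records: (country, has_red_flag) pairs.
# Labels are assigned to meet a quota: every applicant from country "A"
# is labeled actionable, regardless of the red-flag feature.
records = [("A", False), ("A", True), ("A", False), ("A", True),
           ("B", False), ("B", True), ("B", False), ("B", True)]
labels = [country == "A" for country, _ in records]  # quota-driven labels

# "Train" the simplest possible model: per-country rate of positive labels.
pos, total = defaultdict(int), defaultdict(int)
for (country, _), y in zip(records, labels):
    total[country] += 1
    pos[country] += int(y)

def predict(country):
    """Predict 'actionable' when the training-set rate for the country > 0.5."""
    return pos[country] / total[country] > 0.5

# The model has learned nationality, not behavior: applicants with
# identical red-flag profiles get opposite predictions depending
# only on their country of origin.
print(predict("A"), predict("B"))
```

A real deep network trained on such labels would do the same thing with more steps: whatever features correlate with the targeted nationality become proxies for it, and the quota is reproduced as a "prediction."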
A group of 54 distinguished scientists and engineers today sent a different letter to the Department of Homeland Security beseeching it to leave AI out of its plans for immigrant vetting. In the letter the coalition states:
To the best of our knowledge, there’s no machine capable of determining whether a person is likely to commit a crime, nor is there an AI that can determine a human’s intentions through the collection of social media data.
The movie “Minority Report” remains a work of fiction.
In fact, ProPublica’s Pulitzer-finalist investigation “Machine Bias” found that COMPAS, a risk-assessment algorithm used in criminal sentencing, falsely flagged Black defendants as future criminals at nearly twice the rate it did white defendants.
It would then be logical to conclude that an AI built to determine whether a person should be allowed entry into the country isn’t much different from one built to determine whether Black people should get harsher sentences.
We contacted the American Civil Liberties Union; a representative’s response appears at the end of this piece.
In the face of all this information, it seems like a no-brainer that the tech community would be absolutely united against this particular application of AI – and from what we can tell, it is.
The 54 scientists and researchers who signed the letter to DHS are a mix of academics and experts from companies like Google and Microsoft.
IBM, whose representatives attended a June meeting with government officials alongside a group of other companies, was name-checked by Reuters today, but less than a year ago a company rep told The Hill:
We’ve been clear about our values. We oppose discrimination and we wouldn’t do any work to build a registry of Muslim Americans.
And it’s worth noting that IBM CEO Ginni Rometty helped disband Trump’s board of technology leaders. In a letter to employees she wrote:
We have worked with every U.S. president since Woodrow Wilson. We are determinedly non-partisan – we maintain no political action committee. And we have always believed that dialogue is critical to progress; that is why I joined the President’s Forum earlier this year.
But this group can no longer serve the purpose for which it was formed. Earlier today I spoke with other members of the Forum and we agreed to disband the group.
It’s likely a safe bet that IBM isn’t going to get behind this.
We asked an ACLU representative what they’d say to a company considering fulfilling a government contract to create an AI to aid the “Extreme Vetting Initiative,” and they told us:
No technology company should enable a system that experts believe will result in the discrimination of people. It is counter to the very idea of research and advancement that anyone use artificial intelligence in a way that intentionally marginalizes or otherwise violates the civil rights of any human, no matter the color of their skin or country of origin.