The UK government has begun using AI in its immigration screening processes, but the move has drawn criticism for discriminating against certain groups, particularly people of color, according to a new investigation by the Guardian.
The AI tools are being applied to a range of tasks, from benefits allocation to marriage license approvals, according to the Guardian article. The report, however, highlights the often haphazard and uncontrolled deployment of cutting-edge technology across Whitehall.
At least eight Whitehall departments and some police forces have integrated AI into their decision-making procedures, with a focus on welfare, immigration, and criminal justice. The Guardian's investigation identified several instances where AI-driven tools may have produced biased results.
For instance, the Metropolitan Police’s facial recognition tool exhibited a higher error rate in recognizing black faces compared to white ones under certain settings.
In another case, the Department for Work and Pensions (DWP) employed an algorithm believed to have mistakenly removed benefits from dozens of recipients.
The Home Office, meanwhile, used an algorithm to identify potential sham marriages, but it disproportionately flagged individuals of certain nationalities, including Albanian, Greek, Romanian, and Bulgarian.
Despite the effort and the large volumes of data that go into training these models, the resulting tools sometimes behave in ways that even their developers do not understand. And a tool trained on biased data will tend to produce biased results.
Experts have warned that if the data shows traces of discrimination, the results will also likely be discriminatory.
Ironically, UK Prime Minister Rishi Sunak praised AI at London Tech Week in June 2023, citing the technology's benefits within the public sector, “from saving teachers hundreds of hours of time spent lesson planning to helping NHS (National Health Service) patients get quicker diagnoses and more accurate tests.”
However, the challenges of deploying AI across the public sector are not unique to the UK.
For example, the Netherlands faced criticism after using AI to detect potential childcare benefit fraud, leading to erroneous decisions that pushed many families into poverty.
Several experts are concerned the UK could face a scandal similar to the one in the Netherlands, warning that officials are using “poorly understood algorithms to make life-changing decisions” without those affected even knowing it.
The recent dissolution of an independent government advisory board that held public sector bodies accountable for AI use compounds these concerns.
Addressing the risks
Shameem Ahmad, CEO of the Public Law Project, underscores the importance of addressing AI’s potential risks and challenges while acknowledging its potential for social good.
“AI comes with tremendous potential for social good. For instance, we can make things more efficient. But we cannot ignore the serious risks,” said Ahmad.
“Without urgent action, we could sleep-walk into a situation where opaque automated systems are regularly, possibly unlawfully, used in life-altering ways, and where people will not be able to seek redress when those processes go wrong,” she said.
Marion Oswald, a law professor at Northumbria University, draws attention to the lack of transparency and consistency in public sector AI use, particularly in situations where people’s lives are profoundly affected.
“There is a lack of consistency and transparency in the way that AI is being used in the public sector,” said Professor Oswald.
“A lot of these tools will affect many people in their everyday lives, for example, those who claim benefits, but people don’t understand why they are being used and don’t have the opportunity to challenge them,” added the professor.