AI has myriad applications in finance, offering benefits such as fraud detection, robo-advice, algorithmic stock trading, and personalized chatbots.
However, a recent paper by the accountancy and auditing firm Deloitte highlights the potential unintended consequences and ethical pitfalls of using AI in finance. Because AI systems rely heavily on the data they are trained on, incomplete or biased datasets can limit their objectivity and perpetuate discriminatory practices.
The Achilles heel of AI in finance
One of the key issues identified by Deloitte is bias in the input data provided to AI systems. AI models are only as good as the data they are fed.
If the input data contains biases related to gender, race, ideology, or other factors, the AI’s ability to make objective decisions is compromised. Additionally, incomplete or unrepresentative datasets hinder AI’s ability to analyze and predict outcomes accurately, raising concerns about fairness and inclusivity.
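To illustrate how such gaps can be surfaced before a model is ever trained, the sketch below runs a simple representativeness audit: it compares each group's share of a training set against its share of a reference population. The group labels and reference shares are hypothetical, chosen only to make the idea concrete; real audits use many more dimensions than a single group attribute.

```python
from collections import Counter

def representation_gaps(samples, reference_shares):
    """Compare each group's share of the training set with its share
    of the reference population; positive values mean the group is
    underrepresented in the training data."""
    counts = Counter(samples)
    total = len(samples)
    return {g: reference_shares[g] - counts.get(g, 0) / total
            for g in reference_shares}

# Hypothetical training set in which group "B" is underrepresented:
# 90% of samples come from group "A", though "A" is only 70% of the
# reference population.
samples = ["A"] * 90 + ["B"] * 10
gaps = representation_gaps(samples, {"A": 0.7, "B": 0.3})
print(gaps)
```

A positive gap for a group is a warning sign that the model may generalize poorly, or unfairly, for that group.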
Another significant source of bias lies within the development process itself. If the development teams lack diversity or carry subconscious biases, those biases can be inadvertently ingrained in the AI model.
Furthermore, the unpredictability of AI models poses a challenge: their response to evolving market conditions may be difficult to anticipate, potentially leading to unintended consequences at both portfolio and macro levels.
Continuous learning and unforeseen behaviors
According to Deloitte’s paper, AI systems can self-improve and learn from new data. While this feature is promising, it introduces the risk of AI acquiring unintended behaviors over time, potentially leading to discrimination.
For example, an online lending platform could unknowingly reject loan applications from ethnic minorities or women more frequently than applications from other groups. Such outcomes, the paper argues, erode trust between financial institutions, individuals, and machines, magnifying societal issues.
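The kind of disparity described above can be surfaced with a basic audit of rejection rates per group. The Python sketch below uses hypothetical groups, figures, and a hypothetical 5% tolerance; real fairness audits (e.g. demographic parity or equalized-odds tests) are considerably more involved, but the core arithmetic looks like this:

```python
def rejection_rates(decisions):
    """decisions: list of (group, approved) tuples.
    Returns the rejection rate per group."""
    totals, rejected = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        if not approved:
            rejected[group] = rejected.get(group, 0) + 1
    return {g: rejected.get(g, 0) / totals[g] for g in totals}

def disparity_flag(rates, max_gap=0.05):
    """Flag when the gap between the highest and lowest group
    rejection rates exceeds a chosen tolerance."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > max_gap

# Hypothetical audit sample: group "B" is rejected twice as often.
decisions = [("A", True)] * 80 + [("A", False)] * 20 + \
            [("B", True)] * 60 + [("B", False)] * 40
rates = rejection_rates(decisions)
gap, flagged = disparity_flag(rates)
print(rates)         # {'A': 0.2, 'B': 0.4}
print(gap, flagged)  # 0.2 True
```

A flagged gap does not prove discrimination on its own, but it tells auditors exactly where to look.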
The opaqueness of AI solutions adds another layer of complexity. AI’s ability to arrive at decisions based on complex algorithms can make it challenging to establish safeguards or explain the rationale behind those decisions.
Regulators often lack the technical expertise and resources to inspect algorithms thoroughly. Even so, they increasingly recognize the importance of addressing the risks and unintended consequences of AI usage in the financial sector.
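One narrow example of such a safeguard, for the simple case of a linear scoring model, is breaking a decision into per-feature contributions so a reviewer can see exactly which inputs drove it. The sketch below is purely illustrative: the feature names and weights are hypothetical, and modern AI models are rarely this transparent, which is precisely the problem the paper describes.

```python
# Hypothetical weights for a toy linear credit-scoring model.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
BIAS = -0.2

def score(applicant):
    """Linear score: weighted sum of feature values plus a bias term."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Break the score into one contribution per feature, so a reviewer
    can see which inputs pushed the decision up or down."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 1.2, "debt_ratio": 0.6, "years_employed": 2.0}
print(round(score(applicant), 2))  # 0.52
print(explain(applicant))          # per-feature contributions
```

For deep models, no such exact decomposition exists, which is why explainability is an active research area rather than a solved problem.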
Addressing bias and promoting collaboration
Although AI offers profound benefits, addressing biases and promoting stakeholder collaboration is crucial. Financial institutions, regulators, and development teams must work together to identify and mitigate sources of bias in AI decision-making processes. Proactive measures are needed to ensure fairness, inclusivity, and transparency.
According to Rumman Chowdhury, who previously led machine learning ethics, transparency, and accountability at Twitter, the lending sector exemplifies how biases against marginalized communities can emerge in AI systems.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam.
“Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”
The practice of denying loans to Black neighborhoods has been compared to 1930s Chicago and the historic practice of “redlining” districts based on their ethnic make-up.
AI in finance
As AI’s role in financial services continues to expand, the issue of bias in data becomes even more critical. Regulators increasingly demand transparency and a thorough understanding of the underlying mechanisms of AI algorithms.