
The impact of Artificial Intelligence on Financial Institutions

2 September 2024
Jean-Philippe Thirion, Blue Chip Boutique Leader Financial Institutions
Rudi Sneyers, Risk Management & Compliance Practice Leader

Artificial Intelligence (AI) presents significant opportunities for financial institutions, including enhanced revenue generation, streamlined processes, improved customer experiences, and personalized product offerings, all while reducing operational costs. However, the journey towards AI integration is fraught with risks related to data quality, bias, data protection, discrimination, transparency, interpretability, and dependency on third-party providers. These risks require careful assessment and ongoing monitoring.

We believe that the primary risk for financial institutions lies in failing to engage with AI or in halting ongoing AI initiatives. However, regulatory and supervisory authorities must expand their expertise and adapt their frameworks to effectively oversee the use of AI in these institutions.

Jean-Philippe Thirion - Business Unit Leader Financial Institutions

Understanding AI in Financial Services

AI technologies, such as machine learning and predictive analytics, are revolutionizing risk management. Machine learning models identify patterns and anomalies that may indicate emerging risks, allowing financial institutions to address these proactively. As these models learn from new data, they continuously improve, enhancing the precision and efficiency of risk assessments. Predictive analytics leverage historical data to forecast future risks, helping institutions anticipate potential disruptions and implement proactive mitigation strategies, thereby increasing resilience.
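To make this concrete, here is a minimal sketch of how a risk team might use an off-the-shelf anomaly detector to flag unusual transactions for human review. The file name, feature columns, and contamination rate are illustrative assumptions rather than an institution's actual setup, and scikit-learn's IsolationForest stands in for whatever model a financial institution would validate and govern.

```python
# Minimal sketch: flagging anomalous transactions for risk review.
# The input file and feature columns below are hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.read_csv("transactions.csv")
features = transactions[["amount", "merchant_risk_score", "hours_since_last_txn"]]

# Unsupervised anomaly detector; contamination is the expected share of outliers
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
transactions["anomaly"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

# Route flagged records to a human analyst rather than acting on them automatically
review_queue = transactions[transactions["anomaly"] == -1]
print(f"{len(review_queue)} transactions flagged for manual review")
```

The design choice worth noting is the last step: the model only produces a review queue, keeping a human in the loop rather than letting the algorithm decide autonomously.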

Current AI applications in financial services focus on fraud detection, chatbots for customer service, and claims handling. AI-driven predictive analytics offer numerous advantages, such as identifying cross-selling opportunities, speeding up underwriting processes, assigning credit ratings, and developing investment strategies. Potential applications extend to Governance, Risk, and Compliance (GRC) tools, automated report generation, and summarizing communications in call centers.

Sound governance and the role of the Board of Directors

AI adoption is a strategic investment that introduces additional risks. The Board of Directors plays a crucial role in balancing the innovative opportunities presented by AI with diligent risk management. It is essential to establish a robust governance framework that oversees the internal AI processes, ensuring proper identification, classification, and documentation of risks. The Board must also understand how AI can provide a competitive edge in the short and long term, in terms of business opportunities, customer experience, and cost savings through efficiency gains.

To navigate the complexities of AI, the Board's collective skill set must match the heightened need for digital and AI-related competence. This includes challenging senior management on the projected returns on investment, proposed use cases, inherent model risks, and risks associated with implementation.

Risk management should be integrated throughout the AI journey, from use case selection to the deployment phase. Critical areas of focus include:

  1. Data Quality: Ensuring data quality is fundamental, as the output of AI models is only as good as the input data. Financial institutions must address data quality issues to avoid inaccurate or incomplete model outcomes.
  2. Bias and Fairness: AI systems can inadvertently perpetuate biases present in the training data, leading to discriminatory outcomes in areas such as creditworthiness assessments and insurance approvals. Continuous monitoring and adjustment are necessary to mitigate these risks (a minimal monitoring sketch follows this list).
  3. Third-Party Risk Management: As AI systems increasingly rely on third-party data sources, assessing the quality of these providers' internal control frameworks becomes crucial to managing third-party risks.
  4. Reputation Risk: Transparency in AI processes and outcomes is essential to protect fundamental rights and maintain public trust.

Embedding risk management within the broader digital strategy ensures that risk considerations are integrated across all stages of digital transformation projects. AI-driven predictive analytics significantly enhance risk management capabilities by allowing institutions to take proactive measures against risks such as financial and insurance exposures, supply chain disruptions, and cybersecurity threats.

According to a recent Accenture study of US banks, institutions are increasingly looking to technology to handle time-consuming tasks and free up time for revenue-generating commercial work. Early adopters of generative AI could see up to a 30% productivity improvement over the next three years.

Managing Discernment Risk

The transformative potential of artificial intelligence (AI) lies in enhancing decision-making processes. However, the discernment risk associated with AI, whereby algorithms make decisions without human oversight or understanding, poses significant ethical and practical challenges. Immanuel Kant recognized the indispensable role of hypotheses in the scientific process, yet the ideology promoted by Big Data promises only one thing: prediction. Hence the saying: "given enough data, the figures will speak for themselves". The output produced by generative AI, however, is neither thought through nor considered.

The human mind is endowed with a capacity for analyzing phenomena: human discernment. Relying on a machine's calculations to optimize choices is an undeniable achievement in many fields, but overreliance on AI might impair critical thinking, underscoring the need for human oversight in ethical decision-making and sensitive data handling.

The Board of Directors must play an active role in overseeing AI initiatives, ensuring that the institution's strategy, talent management, business model, marketing, operations, and risk management align with AI's capabilities. To this end, having at least one Board member with expertise in AI governance is crucial for providing integrated oversight and ensuring transparency and security in AI applications.

Regulatory and Supervisory Challenges

AI systems in financial institutions are categorized based on risk levels, with different regulatory requirements for each category. High-risk AI applications, such as creditworthiness assessments and insurance pricing, require stringent oversight to protect individuals' rights. The evolving regulatory landscape necessitates collaboration among supervisory authorities to establish a harmonized framework that supports innovation while ensuring fair competition.

We believe that regulatory and supervisory authorities will continue to enhance their knowledge and adapt their supervisory methods to effectively oversee the use of AI in financial institutions. This is also why the European Artificial Intelligence Act is not the end of the road, but the beginning of companies taking charge of AI and implementing it. This approach aims to create a level playing field in supervision, balancing innovation with the competitive advantages offered by AI technologies.

How TriFinance can help

TriFinance has extensive experience assisting financial institutions with transformation processes. Our deep understanding of corporate culture, governance, information systems, products, and processes enables us to support institutions in making strategic AI-related decisions at the executive level. We assist in developing use cases, quantifying and evaluating projected benefits, and deploying AI solutions, ensuring that financial institutions navigate the complexities of AI adoption successfully.