Embracing artificial intelligence in financial services: Data, Models and Governance

The Bank of England and the FCA have published a joint report on the use and future impact of Artificial Intelligence (AI) in financial services. The key takeaways for firms are: have good data, understand your AI models, and wrap it all in a sound governance framework. We consider the report a handy aide-memoire for firms that want to take a sensible approach to the use of AI.

Data

Data is the foundation of any AI model, and its quality and completeness play a pivotal role in the model's effectiveness. In many cases, both the benefits and the risks can be traced back to the data rather than to the AI or the algorithms themselves. As firms develop their data strategies to accommodate AI, there are increasing calls for the development and use of AI-specific data standards.

Even if no specific AI models are on the table at a firm right now, we see no downside in investing in data quality and completeness. Accurate data will only benefit firms, however they choose to use it.

Model Risk

Model risk is not a new concept for financial services firms; what is new is the scale at which AI is used, the speed at which it operates, and the complexity of the underlying models. Complexity comes in the form of:

  • Inputs

  • Relationships between variables

  • Intricacies of models themselves

  • Types of outputs:

    • Actions

    • Algorithms

    • Unstructured data

    • Quantitative data

Matters become even more complicated when several AI models operate within a network.

Being able to explain model output is vital. The focus should not only be on features and parameters but also on consumer engagement and clear communications. Additionally, identifying and managing change in AI models, as well as monitoring and reporting on their performance, is key to ensuring models behave as expected.

Governance

A key characteristic of AI is the capacity for autonomous decision making. This has serious implications regarding how to govern the technology and its outcomes – including effective accountability and responsibility. Where possible firms should leverage existing governance frameworks to manage AI. However, if there are existing issues with the governance frameworks, AI could be used as the catalyst to update the entire framework to ensure governance is pushing the firm forward, not holding it back.

The report suggests firms should create a centralised body that sets AI governance standards, with responsibility falling to a senior manager of the firm. Business areas would be accountable for the outputs, for compliance, and for execution against those governance standards. Firms should ensure an appropriate level of understanding and awareness of AI’s benefits and risks throughout the organisation. One way to achieve this is via formal governance arrangements and a clear role for senior leadership. To be clear, our view is that senior leadership do not need to be in the weeds with an encyclopaedic knowledge of AI. It’s about good governance: questioning, challenging, applying common sense, and using simple language.

Next Steps

The report advises that Regulators could start by providing clarity on how existing regulation and policies apply to AI. There is logic to this. However, our summary at the start was: have good data, understand your AI models, and wrap it all in a sound governance framework. Firms don’t need to wait for Regulators to tell them how existing regulations apply – it makes good business sense to apply these principles now.
