With the Regulation on Artificial Intelligence (the "AI Act") entering into force in due course, following its publication in the Official Journal of the EU, supervisory bodies have increasingly focused on providing initial guidance to regulated financial service providers that use, or intend to use, Artificial Intelligence ("AI") in the provision of financial services.
The European Securities and Markets Authority ("ESMA") has issued a public statement providing initial guidance to firms on their compliance with MiFID II requirements when using AI in the provision of investment services to retail clients.
Furthermore, the Central Bank of Ireland Director of Consumer Protection delivered remarks outlining the Central Bank's expectations of financial services firms considering the use of AI.
On 18 June 2024, the European Commission published a targeted consultation on AI in the financial sector to inform Commission services on the concrete application and impact of AI in financial services. The input received will enable the European Commission to provide guidance to the financial sector on the implementation of the AI Act. The consultation paper is structured in three parts and includes questions on the development of AI, on specific use cases in finance (including securities markets), and on the AI Act as it relates to the financial sector. Stakeholders are invited to respond to the consultation by 13 September 2024.
ESMA Statement on use of AI for investment services to retail clients
On 30 May 2024, ESMA issued a public statement providing initial guidance to firms using AI in the provision of investment services to retail clients, in light of their key obligations under MiFID II.
ESMA noted that potential use cases of AI include: customer service and support; support in the provision of investment advice or portfolio management; compliance; risk management; fraud detection; and operational efficiency.
The ESMA statement addresses both scenarios where a firm specifically develops or officially adopts AI tools and the use by firm staff of third-party AI technologies, with or without the direct knowledge and approval of senior management.
ESMA notes potential risks associated with the use of AI tools, such as:
- Lack of accountability and oversight;
- Lack of transparency and explainability;
- Security/Data privacy concerns; and
- Issues around robustness/reliability of the output, quality of training data, and algorithmic bias.
In order to address these concerns, ESMA expects firms to comply with relevant MiFID II requirements, particularly those pertaining to organisational requirements, conduct of business requirements, and the general obligation to act in the best interest of the client. ESMA notes that its statement is based on investment firms' obligations under MiFID II and is without prejudice to the broader EU framework on digital governance, such as the AI Act and the Digital Operational Resilience Act ("DORA"), which have AI components.
MiFID II Requirements
ESMA's Expectations regarding AI utilisation include:
Client Best Interest and Information to clients
- Transparency on the role of AI in investment decision-making processes related to the provision of investment services, as well as the use of AI for client interactions.
- Provide clients with information on how the firm uses AI tools for the provision of investment services.
- Ensure that such information is presented in a clear, fair and not misleading manner.
Organisational requirements – Governance, Risk Management, Knowledge and competence and Staff training
- Appropriate understanding by the management body regarding how AI technologies are applied and used within their firm.
- Appropriate oversight by the management body of AI technologies to ensure that the AI systems align with the firm's overall strategy, risk tolerance, and compliance framework.
- Implementation of robust governance structures that monitor the performance and impact of AI tools on the firm's services.
- Develop and maintain robust risk management processes and procedures specific to AI implementation and application.
- Implementation of comprehensive testing and monitoring systems, applying the principle of proportionality.
- Ex-ante controls to ensure the accuracy of information supplied to and/or utilised by AI systems, so as to prevent the dissemination of erroneous information to clients or the provision of misleading investment advice.
- Ex-post controls to monitor and evaluate any process that involves the delivery of information, directly or indirectly, through AI-driven mechanisms.
- Clear documentation and reporting mechanisms to ensure transparency and accountability in AI-related risk management practices.
- Ensuring that data used as input for AI systems is relevant, sufficient, and representative, so that the algorithms are trained and validated on accurate, comprehensive, and sufficiently broad datasets.
- Compliance with MiFID II requirements regarding outsourcing of critical and important operational functions, where AI tools are developed by third-party service providers.
- Ensure adequate training programs on the topic of AI for relevant staff, particularly staff in control functions, regarding operational aspects of AI and its potential risks, ethical considerations, and regulatory implications.
Conduct of business requirements
- Implementation of robust controls to ensure AI systems are designed and monitored to meet suitability and product governance requirements.
- Implementation of rigorous quality assurance processes for AI tools, including testing of algorithms and their outcomes for accuracy, fairness, and reliability in various market scenarios.
- Periodic stress testing to evaluate how these AI systems perform under extreme market conditions.
- Strict adherence to data protection regulations to safeguard any sensitive client information collected for the purpose of the provision of investment services.
Record Keeping
- Maintain comprehensive records on AI utilisation for the provision of investment services and on any related clients’ and potential clients’ complaints.
- Records should encompass aspects of AI deployment, including the decision-making processes, data sources used, algorithms implemented, and any modifications made over time.
Central Bank of Ireland Director of Consumer Protection Remarks on AI
On 22 May 2024, the Central Bank of Ireland Director of Consumer Protection delivered
remarks on AI at the OECD/FSB Roundtable on AI in Finance.
The Central Bank's strategy on AI will focus on the following key areas:
- The implementation of the AI Act;
- Setting standards and new requirements for the provision of digital financial services through the review of the Consumer Protection Code; and
- Reinforcing in an AI context the standards set in financial services regulation more generally.
The Central Bank will expect to understand the following from financial services firms that are considering implementing AI tools:
- What business challenge is being addressed and why AI is an appropriate response;
- That firms identify and prepare for any new source of risk to their operational resilience arising from the use of AI; and
- That there is clarity within the firm regarding the accountability for decisions made on the use of AI.