Shaun Hurst, the lead regulatory advisor at Smarsh, provides insights on striking a balance between leveraging AI and meeting regulatory requirements.
For good reason, artificial intelligence (AI) continues to dominate the conversation. Financial institutions (FIs) in particular are putting the technology to use across a wide range of business functions. But given how swiftly AI is being adopted, how can firms make sure they stay compliant?
Shaun Hurst serves as the lead regulatory advisor for EMEA at Smarsh, which helps businesses identify regulatory and reputational risks within their communications data. Hurst has over 20 years of experience helping financial services organizations resolve difficult IT issues. In this piece, he explores how financial institutions must strike a balance between using AI and maintaining compliance.
Much like the introduction of the internet, AI is quickly being recognized as a technology that will profoundly transform many industries. This trend is evident across the financial services sector; indeed, banks have made significant investments in recent years to incorporate AI into their operations.
In particular, banks are starting to use chatbots, or ‘conversational AI’ systems, to give financial advisors answers to client questions. For instance, the investment bank Morgan Stanley recently gave its 16,000 financial advisors access to a chatbot powered by OpenAI technology.
As banks increasingly seek to capitalize on this revolutionary technology, they must remember that compliance is crucial to delivering its many benefits.
Management of Data
At a fundamental level, banks’ adoption of conversational AI has the potential to multiply the volume of communication data they must manage. If a chatbot holds several conversations with each of thousands of financial advisors every day, the communication data produced would be comparable to the bank doubling its headcount. In light of this surge, financial institutions must ensure they can properly capture, store, and monitor these conversations.
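To make the capture-store-monitor point concrete, here is a minimal sketch of what record-keeping for chatbot exchanges could look like. The class and field names (`ConversationArchive`, `ArchivedMessage`, `adv-001`) are hypothetical illustrations, not any vendor's actual API; the idea is simply that every exchange is written to an archive with a timestamp before anything else happens to it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ArchivedMessage:
    """A single chatbot exchange captured for record-keeping."""
    advisor_id: str
    role: str        # "advisor" (the human) or "assistant" (the chatbot)
    text: str
    timestamp: str   # UTC, ISO 8601, set at capture time

class ConversationArchive:
    """Minimal append-only store: every exchange is captured as it occurs."""

    def __init__(self):
        self._records: list[ArchivedMessage] = []

    def capture(self, advisor_id: str, role: str, text: str) -> ArchivedMessage:
        record = ArchivedMessage(
            advisor_id=advisor_id,
            role=role,
            text=text,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self._records.append(record)
        return record

    def by_advisor(self, advisor_id: str) -> list[ArchivedMessage]:
        """Retrieve an advisor's full conversation history for review."""
        return [r for r in self._records if r.advisor_id == advisor_id]

archive = ConversationArchive()
archive.capture("adv-001", "advisor", "What is our view on rate cuts?")
archive.capture("adv-001", "assistant", "Research expects two cuts this year.")
print(len(archive.by_advisor("adv-001")))  # 2
```

In a production setting the archive would of course be a durable, tamper-evident store rather than an in-memory list; the sketch only shows that chatbot output must be treated as communication data from the moment it is generated.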
It is critical to note that banks are adopting conversational AI at a time when financial authorities are intensifying their scrutiny of communication compliance violations. In 2022 alone, Wall Street banks paid a record $1.8 billion in fines for failing to follow proper record-keeping procedures for employee communications, and the trend appears set to continue into 2023. Financial institutions must be aware of this regulatory environment and actively consider communication data management when implementing conversational AI solutions.
Ensuring effective oversight is a crucial part of the legislative initiatives being developed in both Europe and the US to regulate the use of AI. These measures have significant ramifications for compliance officers and senior managers at financial services firms that are starting to integrate AI tools more fully. Key to this effort is banks’ adoption of “explainable AI”: AI models whose decision-making processes are human-understandable and defensible.
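The contrast with a black-box model can be illustrated with a toy example. The sketch below is a hypothetical rules-based assessment (the thresholds and the `assess_application` function are invented for illustration) whose every decision carries a human-readable justification, which is the property "explainable AI" asks for: a reviewer can see and defend exactly why the model reached its conclusion.

```python
def assess_application(income: float, debt: float, years_employed: int):
    """Score an application and return both the decision and the reasons.
    Thresholds are illustrative only."""
    reasons = []
    score = 0

    if income >= 40_000:
        score += 2
        reasons.append(f"income {income:.0f} meets the 40,000 threshold (+2)")
    else:
        reasons.append(f"income {income:.0f} is below the 40,000 threshold (+0)")

    ratio = debt / income
    if ratio <= 0.35:
        score += 2
        reasons.append(f"debt-to-income ratio {ratio:.2f} is within the 0.35 limit (+2)")
    else:
        reasons.append(f"debt-to-income ratio {ratio:.2f} exceeds the 0.35 limit (+0)")

    if years_employed >= 3:
        score += 1
        reasons.append(f"{years_employed} years employed meets the 3-year minimum (+1)")
    else:
        reasons.append(f"{years_employed} years employed is below the 3-year minimum (+0)")

    decision = "approve" if score >= 4 else "refer to human reviewer"
    return decision, reasons

decision, reasons = assess_application(income=55_000, debt=10_000, years_employed=5)
print(decision)  # approve
```

Real explainable-AI work applies interpretability techniques to far more complex models, but the requirement is the same: the reasons list, not just the decision, must be available to the humans overseeing the system.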
One repercussion of AI’s rapid adoption in the financial services sector is the potential for knowledge gaps at senior management levels, which would make it difficult to effectively oversee employees’ use of the technology. To address this, banks should consider implementing training programs for their current managers and, going forward, including AI expertise in their recruitment criteria.
Banks that want to integrate AI into their operations responsibly must strike a balance between introducing this cutting-edge technology and equipping managers to uphold compliance standards.
The enormous internal stores of customer data that banks maintain are one of the factors that best position them to create AI solutions. While this gives them a significant advantage in AI development, using data for such purposes also raises the risk of breaching privacy rules. Indeed, in a 2022 survey by The Economist Intelligence Unit, IT professionals working in banks ranked “security and privacy breaches” as the top risk of implementing AI.
As a first step in upholding data privacy standards, banks should proactively inform their clients how their data will be used. However, financial institutions also need to account for the rapidly evolving legislative environment around the use of personal data for AI purposes.
For instance, the EU’s AI Act is anticipated to pass this year, the Biden Administration has published a blueprint for an “AI Bill of Rights,” and the UK’s Data Protection and Digital Information Bill has been introduced in parliament. The Financial Conduct Authority will also soon release a discussion paper on the use of AI in financial services. In light of this, banks wishing to integrate AI models into their operations must exercise keen regulatory awareness to ensure compliance with future regulations.
Possibility of Bias
Beyond privacy violations, a further hurdle to maximizing the benefits of AI is the potential for bias, with instances of unintentional discrimination potentially resulting in reputational and legal damage. The danger lies in the enormous datasets on which AI algorithms are trained; if these datasets contain historical examples of bias, an AI tool may reproduce that bias, for example by giving investment advice to clients based on non-financial factors such as a client’s gender or race.
The potential for bias is a major obstacle to developing ethical AI, that is, AI considered ethical in both its intended applications and the outcomes it produces. Countering bias effectively requires scrutiny at both the input and output levels: skewed datasets must be corrected before AI algorithms are trained on them, and at the output level banks should use communication monitoring technologies to spot bias in investment advice.
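An output-level check of the kind described above can be sketched simply: compare the rate of a favourable outcome across client groups and flag the model for review when the gap exceeds a tolerance. The function name, field names, and 0.2 threshold below are illustrative assumptions; this is a crude screen, not a full fairness audit.

```python
from collections import defaultdict

def disparity_check(records, group_key, outcome_key, threshold=0.2):
    """Compute the favourable-outcome rate per group and flag the model
    when the largest gap between groups exceeds `threshold`."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if r[outcome_key]:
            favourable[g] += 1
    rates = {g: favourable[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Hypothetical log of advice outcomes, grouped by a protected attribute.
sample = [
    {"group": "A", "advised_equity": True},
    {"group": "A", "advised_equity": True},
    {"group": "A", "advised_equity": False},
    {"group": "B", "advised_equity": True},
    {"group": "B", "advised_equity": False},
    {"group": "B", "advised_equity": False},
]
rates, gap, flagged = disparity_check(sample, "group", "advised_equity")
print(flagged)  # True: rates of 0.67 vs 0.33 differ by more than 0.2
```

A flag from a screen like this is a prompt for human investigation, not proof of discrimination; the appropriate fairness metric and threshold depend on the product and the applicable regulation.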
As banks grow more confident in their use of AI, the technology may transition from an internal tool to a client-facing service. Deutsche Bank and tech giant Nvidia have already partnered to deliver AI-powered interactive avatars to banking customers. In the future, banks’ key sources of financial advice may not be their staff, but their AI technologies.
However, probably more than any other sector, banking depends on trust. Customers must be assured that they can rely on the guidance an AI tool provides. A poor user experience will undermine that trust, so financial services firms should take every necessary precaution before introducing AI tools to their clientele, and should weigh whether the advantages of such a move outweigh the drawbacks.
Ultimately, AI presents banks with a wealth of opportunities. But to realize these benefits, banks must keep the above considerations in mind and prepare for the coming AI revolution.