The financial sector is warily adopting generative AI, but technologies like ChatGPT inspire more confidence when applied to use cases with proven results.
According to David Donovan, executive vice president and head of financial services, North America, at digital transformation consultancy Publicis Sapient, “the use of generative AI is likely to increase in the coming years, even though many banks remain cautious due to the highly regulated nature of the financial services industry.” The best results for financial institutions will probably come from concentrating on use cases where ChatGPT has already produced encouraging results.
According to Fabio Caversan, vice president of digital business and innovation at digital consultant Stefanini Group, “companies in the financial sector can have a degree of confidence when using LLMs for practice areas where these tools have a demonstrated record of success at this point in development.”
These include marketing, human resources, and customer care and support. These are the kinds of jobs where language is used extensively, which ChatGPT and other large language models (LLMs) are adept at handling. But according to Caversan, “I don’t think that LLMs’ numerical and statistical capabilities are at a high enough level to recommend using them for financial forecasting or fraud detection at this point.”
Although Caversan has not yet observed any significant achievements, banks and other financial institutions are cautiously investigating new use cases. To fully grasp LLMs’ potential for success in the financial services industry, additional time will be needed.
Nine applications of LLMs in banking and finance
Experts in the field think ChatGPT-like technologies might benefit the banking sector in a number of ways.
- Summarising complex insights
Priya Iragavarapu, vice president of data and analytics delivery at digital transformation consulting firm AArete, anticipates that ChatGPT and other LLMs will take the lead in conveying complex insights drawn from critical analysis, including gap, trend, and spend analysis, as well as forecasting. LLMs could condense these insights into simple language to create a personalised message for every client.
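As a rough illustration of how such a pipeline might begin, the sketch below turns structured analysis results into a plain-language prompt that could be sent to an LLM. All client names, metrics, and figures here are invented for illustration; a real system would pull them from the institution's analytics layer.

```python
# Hypothetical sketch: convert structured spend-analysis output into a
# plain-language prompt for an LLM. Field names and figures are invented;
# a real pipeline would source them from internal analytics and send the
# resulting prompt to a model API.
def build_client_summary_prompt(client_name: str, insights: dict) -> str:
    lines = [f"- {metric}: {value}" for metric, value in insights.items()]
    return (
        f"Summarise the following findings for {client_name} "
        "in two or three friendly, jargon-free sentences:\n" + "\n".join(lines)
    )

prompt = build_client_summary_prompt(
    "A. Client",
    {
        "monthly spend trend": "up 12% quarter over quarter",
        "largest category": "travel (34% of total)",
        "forecast": "spend likely to exceed budget by June",
    },
)
print(prompt)
```

The model's reply, not the prompt, would be the personalised message delivered to the client.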
- Simplifying the underwriting procedure
Regulations and a rigorous underwriting procedure are involved in the administration of mortgages, auto loans, and personal loans. Iragavarapu thinks that when domain-specific data is included in model training, LLMs can help with this process.
- Improving client interaction and service
Sujatha Rayburn, vice president of information management and analytics at Delta Community Credit Union, the largest credit union in Georgia, said that LLMs might enhance traditional chatbot strategies with their improved natural language and semantic processing skills. This facilitates their ability to carry out difficult, multi-step tasks and supports client decision-making.
- Compliance automation
Rayburn stated that by lowering the time and labour required for document reading and understanding, generative AI may be able to lower compliance expenses. Automated regulatory relevance testing, for instance, might match intricate regulatory requirements to many business characteristics. This could assist in locating possible infractions of anti-money laundering and know-your-customer policies, which call for identification verification, sanctions list checks, and suspicious transaction monitoring.
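A crude version of such regulatory relevance testing can be sketched as keyword matching between requirements and business attributes. A production system would use semantic matching (embeddings or an LLM) rather than literal keywords, and every rule and attribute below is hypothetical:

```python
# Hypothetical sketch of automated regulatory relevance testing: flag which
# requirements touch which business attributes via simple keyword overlap.
# All rule IDs, rule texts, and attributes are invented for illustration.
REQUIREMENTS = {
    "KYC-1": "verify customer identification documents at onboarding",
    "AML-3": "monitor transactions for suspicious activity patterns",
    "SAN-2": "screen counterparties against sanctions lists",
}

BUSINESS_ATTRIBUTES = {
    "onboarding": ["identification", "onboarding"],
    "payments": ["transactions", "counterparties"],
}

def relevant_requirements(attribute: str) -> list[str]:
    """Return IDs of requirements whose text mentions any keyword
    associated with the given business attribute."""
    keywords = BUSINESS_ATTRIBUTES[attribute]
    return sorted(
        req_id
        for req_id, text in REQUIREMENTS.items()
        if any(kw in text for kw in keywords)
    )

print(relevant_requirements("payments"))
```

In practice, an LLM's value here would be handling the full natural-language text of regulations instead of hand-curated keyword lists.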
- Risk evaluation and handling
In the field of risk assessment, Donovan is investigating how LLMs can analyse massive volumes of data to find patterns, trends, and correlations. Financial institutions may be able to make more informed lending decisions and better risk assessments with this capability, which might result in lower default rates and increased profitability.
- Increasing customization
Peter-Jan Van De Venn, vice president of global digital banking at digital consultant Mobiquity, is looking into how LLMs might assist banks in customising the language and content of their digital interactions to clients’ unique requirements, preferences, and behaviour. Customer journeys may become more tailored as a result.
- Using automation to process documents
Iu Ayala, CEO and founder of AI consultancy Gradient Insight, is investigating how ChatGPT might automate the processing of financial documents, including account opening forms, insurance claims, and loan applications. By understanding and extracting relevant information from unstructured data, LLMs can speed up and simplify document processing, increasing productivity and shortening processing times.
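The extraction step can be illustrated without an LLM at all: the sketch below pulls two fields out of a free-text loan application snippet with regular expressions. The snippet and field names are invented; the point of using an LLM is handling far messier, less predictable documents than a regex can.

```python
import re

# Hypothetical sketch: extract structured fields from a free-text loan
# application snippet. The example text and fields are invented; an LLM
# would replace these brittle regexes for real-world documents.
def extract_loan_fields(text: str) -> dict:
    amount = re.search(r"\$([\d,]+)", text)        # e.g. "$250,000"
    term = re.search(r"(\d+)[- ]year", text)       # e.g. "30-year"
    return {
        "amount": int(amount.group(1).replace(",", "")) if amount else None,
        "term_years": int(term.group(1)) if term else None,
    }

fields = extract_loan_fields(
    "Applicant requests a $250,000 loan over a 30-year term."
)
print(fields)
```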
- Converting legalese to everyday language
Jay Jung, president and founder of strategic consulting firm Embarc Advisors, is investigating the potential of ChatGPT to assist in interpreting legal documents, including side letters, purchase agreements, and term sheets. In his experience, most are written in legalese, posing a challenge for non-experts to comprehend. ChatGPT can swiftly translate these documents into clear, concise English.
- Producing executive summaries
Jung has also investigated ChatGPT’s capacity to compile insights and financial information into executive briefs. It is simple to write out important bullet points, but it can take time to organise takeaways into executive or board briefings. “ChatGPT is an excellent tool as it allows me to simply input the bullets and let ChatGPT compose the summary and modify the tone,” stated Jung. “It considerably reduces my time.”
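The workflow Jung describes, pasting bullet points in and asking for a summary in a given tone, can be sketched as a small prompt builder. The prompt wording and tone options below are assumptions for illustration, not a specific product's API:

```python
# Hypothetical sketch of the bullets-to-brief workflow: assemble bullet
# points into a prompt asking an LLM for an executive summary in a chosen
# tone. Prompt wording and example bullets are invented for illustration.
def executive_brief_prompt(bullets: list[str], tone: str = "formal") -> str:
    points = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Compose a one-paragraph executive summary in a {tone} tone "
        "from these bullet points:\n" + points
    )

prompt = executive_brief_prompt(
    ["Q2 revenue up 8%", "churn flat", "new product launch slipped to Q4"],
    tone="concise",
)
print(prompt)
```

Adjusting the `tone` argument and regenerating is what makes this faster than reworking the phrasing by hand.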
ChatGPT’s drawbacks when used in finance
Financial companies should use generative AI with caution as they are often risk averse. Ensuring the safe scaling of ChatGPT and other LLMs requires addressing the following difficulties.
1. Duty of care
Van De Venn is worried about the potential impact of ChatGPT on duty-of-care regulations. While he believes that first-line support for basic product inquiries is appropriate, restrictions protecting consumers from inappropriate advice usually apply when giving financial advice. He declared, “The regulators must move quickly and provide clarity.”
2. Misleading data
Caversan said that because LLMs have a tendency to fabricate misleading information that looks genuine, he makes sure that qualified professionals verify the authenticity of all outcomes and use cases. Rather than concentrating on end-to-end automation, organisations should emphasise AI augmentation to combine the advantages of tools with workers’ critical thinking and creative abilities.
3. Privacy and security of data
When employing LLMs, Rayburn is worried about protecting the confidentiality and security of client data. Organizations must protect sensitive information entered into chat interfaces from unauthorized access or publication.
4. Contextual knowledge
Rayburn claims that the present generation of LLMs is incapable of comprehending a customer’s background in a nuanced way. Models will need further training on voluminous banking data and documents in order to comprehend the jargon and language unique to the banking sector.
5. Fresh pricing models
Rayburn also noted that in order for LLMs to function effectively, a large investment in specialised hardware, software, and cloud services is necessary. Businesses need to develop fresh methods for assessing the total cost of ownership of new LLM services, taking into account the processes necessary for appropriately contextualising and customising new models.
6. Prejudice
According to Ayala, companies need to exercise caution to prevent reinforcing biases seen in model training data, as this could have negative effects on the financial sector. Reducing these hazards involves implementing bias detection, continuous monitoring, and ongoing model improvement.
7. Openness and comprehensibility
Right now, it’s difficult for well-known LLMs like ChatGPT to justify their conclusions or provide references for their assertions. Moreover, the EU’s Digital Services Act, scheduled to take effect in 2024, requires businesses to provide algorithmic transparency for decisions made by online platforms. According to Ayala, finance companies need methods for deciphering and interpreting LLM output in order to guarantee regulatory compliance and give users clarity.