AI like ChatGPT won't work in banking

Updated: Jan 28, 2023

At the time of writing (January 2023) the hype around AI and in particular the 2-month-old ChatGPT is enormous. It's going to transform everything from search to content creation and take people's jobs. But, can AI be truly transformative in a regulated industry like Banking?

I'm not convinced, but I'm willing to be persuaded. So I decided to write this blog post and actually engage an AI assistant to help with the writing. The results are interesting, but not as mind-blowing as the hype would suggest.

There will be uses for AI in many areas of banking. But until someone from the regulatory area of a bank can tell me how they will meet their compliance requirements for content generated by AI, I will remain a skeptic.

Compliance is about visibility and control

A few years ago, I worked with a bank that wanted to deliver a transformational customer experience in the mortgage area. Mortgages are stressful (as anyone with a home loan will emphatically tell you), so the bank wanted to simplify the experience. Make it more 'friendly' and 'conversational'. So they envisaged a 'chat' style experience that would behave kind of like a banker - talk to the borrower, ask questions, skip unnecessary sections etc.

The bank's digital team came up with a design resembling a familiar WhatsApp or Messenger-style experience. There were different paths the applicant could take depending on whether it was Pre-Approval vs. Found A Home, and Single vs. Multiple borrowers. The concept was great.

So we built a prototype and tested it.

We got two interesting bits of feedback. The first from customer testing and the second from the bank's compliance team.

Customer: "This is a mortgage, not a conversation with my teenage kid. This should feel more serious. I'm borrowing a lot of money!" (Ouch)

Compliance: "Show me the application form the customer is being asked to complete."

Digital Team: "We can't, it's not like that. It evolves based on the answers to their questions."

Compliance: "So what are you asking me to sign off on? Show me that."

Digital Team: "We can't. It's not like that. We don't have a form to print out. The engine responds dynamically to the answers to the prior questions and then decides what should come next."

Compliance: "Bye!"

It kind of made sense. The Compliance team needed to know what they were 'approving' and it couldn't be shown to them. So how could they approve it? In the world of AI, that gets even more difficult.
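To make the compliance team's objection concrete, here is a minimal sketch of the kind of rule-driven question engine described above, plus a helper that enumerates every possible question path. This is entirely hypothetical (the step names and questions are mine, not the bank's), but it illustrates one way such an engine could produce an exhaustive artifact for compliance to review:

```python
# Hypothetical sketch: a rule-driven question engine like the mortgage
# prototype described above, plus a helper that enumerates every possible
# question path so compliance has a concrete artifact to review.
# All step names and questions are illustrative, not the bank's actual flow.

FLOW = {
    "start": {
        "question": "Are you applying for pre-approval, or have you found a home?",
        "answers": {"pre-approval": "borrowers", "found-a-home": "property"},
    },
    "property": {
        "question": "What is the purchase price of the property?",
        "answers": {"<answered>": "borrowers"},  # any answer moves on
    },
    "borrowers": {
        "question": "Are you borrowing alone or with someone else?",
        "answers": {"single": "income", "multiple": "income"},
    },
    "income": {
        "question": "What is your gross annual income?",
        "answers": {"<answered>": None},  # None marks the end of the flow
    },
}

def next_step(current, answer):
    """Return the next step id for an answer -- the 'engine decides
    what should come next' behavior that worried compliance."""
    transitions = FLOW[current]["answers"]
    return transitions.get(answer, transitions.get("<answered>"))

def enumerate_paths(step="start", path=None):
    """Walk every branch and return all possible question sequences --
    the kind of exhaustive printout compliance could sign off on."""
    path = (path or []) + [FLOW[step]["question"]]
    results = []
    for nxt in set(FLOW[step]["answers"].values()):
        if nxt is None:
            results.append(path)
        else:
            results.extend(enumerate_paths(nxt, path))
    return results
```

The key point: with a finite rule table, every path can be printed and reviewed. With generative AI producing the questions, there is no such table to enumerate, which is exactly where the approval process breaks down.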

What control do you (really) have over humans?

The argument will inevitably be made that we have no real control over the behavior of the humans we employ in our banks, as demonstrated by the issues Wells Fargo faced from 2013 onward, when bank staff were opening accounts on behalf of unknowing customers to meet sales targets. Compliance controls were in place to prevent that, but they were ignored.

However, in the case of humans, a bank could at least point to the compliance documentation, the training, the rules, etc. to show that it was trying to do the right thing...and a rogue employee broke those rules.

Now, perhaps with AI we will be able to put similar controls in place and I'm just not aware of them yet. But we need to include the compliance teams on our journey to ensure the effective use of AI in banking. So before we get caught up in the hype, let's ask the compliance folks how they can work with AI to ensure the bank continues to be compliant.

I asked AI to write a blog post for me

I decided to test this out, and the result was impressive. I chose to work with Jasper.Ai, a solution built on OpenAI's language models, the same technology behind the much-hyped ChatGPT. There are multiple ways to engage with Jasper, but I chose the simple 'One-Shot Blog Post' where you provide some detail on a topic, a tone of voice, and an intended audience, and it responds with a 1-page post.

Now, I could have tried the more powerful Documents feature in Jasper, which allows me to highlight a sentence or paragraph and ask Jasper to 'write more' about that topic, and in doing so build a substantial post. But as Jasper was writing a post within a post, I wanted to keep it short.

My input is below. I asked Jasper to write about the following:

"The regulatory controls required in banking make it difficult for AI to be successfully adopted. Regulatory compliance teams need to approve the content that is presented to bank clients. If the content is generated by AI - what are they going to approve? If the AI generates something that is incorrect, the bank would be held responsible by the regulators."

You can see the Jasper screen below.

The following text in blue is a copy-paste of exactly what Jasper produced.

Blog Title: How AI and Banking Regulations Can Work Together

Blog Introduction: In recent years, financial services have become increasingly focused on the use of artificial intelligence (AI) to improve customer service and streamline processes, but one major obstacle stands in the way of successful AI adoption – banking regulations. Regulatory compliance teams are responsible for approving all content presented to bank clients and any incorrect information generated by AI could lead to costly fines or other penalties. So, how can banks ensure that their regulatory obligations are met while still taking advantage of the potential benefits of AI?

Blog Body:

Understanding Regulations

The first step in successfully implementing AI is understanding the regulations that apply to your bank. Financial institutions must comply with a variety of laws and regulations at both the state and federal level, such as anti-money laundering laws, consumer protection regulations, and privacy rules. Understanding these regulations will help you identify areas where AI should be used carefully or not at all.

Evaluating Risk

Once you have identified potential areas for using AI, it’s important to evaluate the risk associated with each application. This includes considering factors such as accuracy, scalability, cost savings, customer experience, data security, and compliance requirements. For example, if a particular application requires high accuracy but has a low cost savings potential, then it may not be worth pursuing. On the other hand, if an application offers significant cost savings without sacrificing accuracy or data security then it may be worth exploring further.

Developing Controls

Once you have identified potential applications for AI and evaluated their risks, it’s time to develop controls that will ensure regulatory compliance while still allowing banks to take advantage of the benefits offered by AI technology. These controls may include testing processes that measure accuracy levels before deployment; periodic audits to ensure compliance; transparency regarding how customer data is being used; training programs for employees; and procedures for responding quickly when issues arise.


AI technology has tremendous potential when it comes to improving efficiency in banking operations but regulatory concerns can make adoption difficult. Banks need to understand their regulatory obligations and evaluate risks before they begin leveraging AI technologies. With proper controls in place – such as testing processes and employee training – financial institutions can take full advantage of this cutting-edge technology while still meeting their regulatory requirements. Ultimately, this will enable them to provide better services for customers while avoiding costly penalties from regulators.

This is actually quite impressive, and it got me concerned about AI writing my son's homework assignments. However, does the post contain any 'original thought' on the topic? As mentioned above, I've personally worked with regulatory compliance folks at banks and seen projects stopped as a result of that work. That impacted me emotionally: I was frustrated, but I also empathized with the position of the compliance team. Can AI do that, and then bring that understanding to a discussion like this? What I read in blue above sounds like content the AI ingested from the web and repurposed (perfectly) for this situation.

I asked AI to generate an image for me

As you have probably seen on my blog, I try to use coffee-themed images in my blog posts. I like coffee. Real coffee. Espresso. And not Starbucks. So, I thought it would be great to have an image of a coffee being made by a robot. I asked one of the AI-powered image generation tools out there, Stable Diffusion, to generate an image for me. I typed in "a robot making coffee", hit the Generate Image button, and it delivered 4 images. This was my favorite and is used in the banner image for this blog post. Pretty impressive.

Prove me wrong

I would love to hear from someone tasked with regulatory compliance inside a bank who can explain how they will do their job, knowing that AI is creating content that is presented to customers. If the content is written by a human, reviewed by the compliance team and then the AI selects the best piece(s) of content to present to the client, then I have no concern. That's the intelligent assembly of preapproved content.
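The "intelligent assembly of preapproved content" idea can be sketched in a few lines. This is a hypothetical illustration, not any bank's actual system: the model never writes new text, it only ranks snippets that compliance has already signed off on. The naive keyword-overlap scoring here stands in for whatever ranking model a bank might actually use:

```python
# Hypothetical sketch of 'intelligent assembly of preapproved content':
# the system selects from compliance-approved snippets rather than
# generating text. Snippets and scoring are illustrative only.

APPROVED_SNIPPETS = {
    "rates": "Interest rates are variable and may change. See our current rate sheet.",
    "fees": "An application fee applies. A full fee schedule is available on request.",
    "eligibility": "Lending criteria apply. Approval is subject to a credit assessment.",
}

def score(query, snippet):
    """Naive relevance score: count of shared lowercase words.
    A real system would use a trained ranking model here."""
    return len(set(query.lower().split()) & set(snippet.lower().split()))

def select_approved(query):
    """Return the best-matching pre-approved snippet.
    Crucially, the output is always verbatim approved text."""
    return max(APPROVED_SNIPPETS.values(), key=lambda s: score(query, s))
```

Because every possible output is a verbatim, pre-approved snippet, compliance can review the full set up front. That is the distinction I'm drawing: selection from approved content is auditable; open-ended generation is not.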

But if there's no content that the regulatory compliance team can review, if it's actually generated by AI - how will this work?

Let me know...
