CommBank makes AI abuse detection model freely available to other banks

The Commonwealth Bank of Australia (CBA) has made its AI model for identifying digital payment transactions featuring offensive messages freely available to any bank worldwide.

The bank’s AI model, now available on the source code platform GitHub, is designed to identify digital payment transactions containing harassing, threatening or offensive messages, behaviour the bank refers to as “technology-facilitated abuse”.

CBA first rolled out abuse transaction monitoring in 2020, with around 400,000 transactions blocked annually by an automatic filter that prevents offensive language from being used in transaction descriptions on its app.

CBA group customer advocate Angela MacMillan explained that the bank developed the technology after conducting research which found that one in four Australian adults had experienced financial abuse from a partner.

“Sadly, we see that perpetrators use all kinds of ways to circumvent existing measures such as using the messaging field to send offensive or threatening messages when making a digital transaction,” she said. “By using this model, we can scan unusual transactional activity and identify patterns and instances deemed to be high risk so that the bank can investigate these and take action.”

CBA shared that its model detects around 1,500 cases of abuse it deems high-risk each year.

“By sharing our source code and model with any bank in the world, it will help financial institutions have better visibility of technology-facilitated abuse,” MacMillan said. “This can help to inform action the bank may choose to take to help protect customers.”

In August, CBA launched a police referral pilot designed to set new standards for how banks report tech-facilitated abuse to law enforcement.

At the time, the bank said the move built on its existing use of AI to identify and stop abuse in transaction descriptions, working with police in New South Wales (NSW) to create a new process that allows it to report abuse with the consent of victims.
