
The Green Sheet Online Edition

April 08, 2024 • Issue 24:04:01

AI: A double-edged sword

By Patti Murphy
ProScribes Ink

Here's an old story that some younger readers may not recognize, but what the heck. Willie Sutton, a bank robber from the early 20th century, was famously asked why he robbed banks. His reply: because that's where the money is.

Today, of course, a crook doesn't need to set foot inside a bank building; there are ample ways to steal money from banks and their customers online and off. With more of our financial lives going digital, we face more risk than ever. That is why it is critically important that the card brands, issuers and acquirers protect transaction and cardholder information.

These days, artificial intelligence is a go-to tool for fraud and risk management, just as it has become a go-to technology for crooks. That's what makes AI a double-edged sword.

Mastercard and Visa both recently rolled out several new AI-based services. Mastercard, for example, announced that it is adopting generative AI techniques to boost its network security posture.

Generative AI is a type of artificial intelligence that can produce fake content, including text, imagery and audio, that is hard to detect as artificial. In a common fraud scenario, generative AI is used to produce a polished phishing email—one devoid of typos and poor grammar, both of which are telltale signs most recipients use to weed out obvious phishing expeditions in seconds.

The latest update to Mastercard's real-time decisioning solution can scan an unprecedented 1 trillion data points to predict whether a transaction will go south. Mastercard said this enhancement builds on its existing ability to analyze account, purchase, merchant and device information in real time.

The technology assesses relationships between multiple entities surrounding a transaction to determine its risk in less than 50 milliseconds. In a press release, Mastercard said initial runs suggest fraud detection improvements averaging 20 percent, and at times as high as 300 percent. Not only does the upgrade identify bad transactions more accurately, it also has been shown to reduce the number of false positives by better than 85 percent, Mastercard said.
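To make the general idea concrete, here is a deliberately simplified Python sketch of real-time transaction risk scoring. It is not Mastercard's algorithm: the features, weights and example values are invented for illustration, and they only hint at how signals about the account, merchant and device surrounding a transaction can be combined into a single score in milliseconds.

    # Hypothetical illustration only -- not Mastercard's model. A toy score built
    # from a few invented signals about the entities around a transaction.
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        amount: float                      # purchase amount in dollars
        account_age_days: int              # how long the cardholder account has existed
        merchant_chargeback_rate: float    # merchant's historical share of disputed sales
        device_seen_with_account: bool     # has this device been used with this account before?

    def risk_score(tx: Transaction) -> float:
        """Return a 0-1 score; higher means riskier. Weights are purely illustrative."""
        score = 0.0
        if tx.amount > 500:
            score += 0.3
        if tx.account_age_days < 30:
            score += 0.3
        score += min(tx.merchant_chargeback_rate * 2, 0.2)
        if not tx.device_seen_with_account:
            score += 0.2
        return min(score, 1.0)

    tx = Transaction(amount=750.0, account_age_days=12,
                     merchant_chargeback_rate=0.04, device_seen_with_account=False)
    print(round(risk_score(tx), 2))        # 0.88 -> likely flagged for review

Production systems replace hand-set rules like these with machine-learned models trained on billions of transactions, but the shape of the problem, turning entity relationships into a single real-time score, is the same.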

"With generative AI we are transforming the speed and accuracy of our anti-fraud solutions, deflecting the efforts of criminals," said Ajay Bhalla, president of cyber and intelligence at Mastercard. "Supercharging our algorithm will improve our ability to anticipate the next potential fraudulent event, instilling trust into every interaction."

For its part, Visa introduced an AI product designed to combat token fraud. It's a value-added service that uses machine learning tools to rate the likelihood of a fraudulent request for token provisioning. Tokenization is a fraud-fighting technology that helps protect sensitive account information from fraudsters by replacing it with unique codes. However, tokens can be illegitimately provisioned. Visa pegs losses to such frauds at $450 million in 2022 alone.
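For readers unfamiliar with the mechanics, the following minimal Python sketch shows the basic idea behind tokenization: a vault hands out a random surrogate value in place of the real card number, so a stolen token is useless on its own. This illustrates the concept only; it is not Visa's token service, which also involves issuer approval, device binding and per-transaction safeguards.

    # Toy token vault -- illustrates the concept, not any card network's implementation.
    import secrets

    class TokenVault:
        def __init__(self):
            self._token_to_pan = {}   # token -> real card number, held only by the vault

        def tokenize(self, pan: str) -> str:
            """Issue a random surrogate that stands in for the real PAN."""
            token = "tok_" + secrets.token_hex(8)
            self._token_to_pan[token] = pan
            return token

        def detokenize(self, token: str) -> str:
            """Only the vault can map a token back to the real card number."""
            return self._token_to_pan[token]

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    print(token)                    # e.g. tok_9f2c4e1ab37d5a60 -- worthless if stolen
    print(vault.detokenize(token))  # the real PAN, recoverable only through the vault

The fraud Visa is targeting happens a step earlier, at provisioning: a criminal armed with stolen card credentials requests a token for a device they control, which is why the new service scores the provisioning request itself.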

Visa also now offers real-time account-to-account protection, powered by deep learning AI models.

AI-generated fraud on the rise

But are these tools enough? After all, there are plenty of Willie Sutton wannabes in the digital world. The latest boom, one targeted by Mastercard, is in fake identities created using AI. Fraud cases involving AI-generated identities rose 17 percent between 2021 and 2023, and just over three quarters (76 percent) of financial professionals are fairly certain their companies have approved customers who presented synthetic IDs, according to research by the fraud prevention company Deduce.

The Deduce research also found that fraud and risk professionals expect the problem to worsen before it can be contained with an effective solution. "Synthetic identity fraud has long been a significant challenge for the financial industry, but the advent of AI technology has accelerated the problem," said Ari Jacoby, Deduce's CEO. "Fraudsters are now able to create identities at an unprecedented pace, allowing them to play the long game with those personas."

Once a fraudster creates a synthetic identity in the credit reporting apparatus, they start taking out loans and credit lines they have no intention of repaying. Cost estimates vary, but better than one third of risk professionals surveyed put the average cost of a synthetic fraud incident at between $25,000 and $100,000. Nearly a fourth (23 percent) put the cost north of $100,000.

Perhaps one of the most sobering findings was that despite all the time, money and effort the industry is putting into developing fraud defenses, just over half (52 percent) of risk professionals surveyed feel fraudsters are adapting faster than financial organizations.

Consumer concerns run deep

Consumers are especially concerned about AI-based fraud attacks. A 2023 consumer survey fielded by Prove Identity, a digital identity platform, found 72 percent knew what AI-based attacks were and 84 percent were concerned about frauds perpetrated using AI. These concerns are not misplaced. Consider these additional findings reported by Prove:

  • 51 percent of surveyed consumers have been victims of identity fraud or know someone who has been.
  • 23 percent have been victims of SIM swap attacks, through which fraudsters take over phone numbers by having them "ported" onto new SIM cards.
  • Better than a third (35 percent) have been victims of social engineering attacks.

Federal agencies are voicing concerns. The Federal Trade Commission stated in a 2023 blog post that it is "keeping a close watch on the marketplace and company conduct as more AI products emerge." It added, "[W]e aim to prevent harms consumers and markets may face as AI becomes more ubiquitous."

More recently, the Commodity Futures Trading Commission warned that fraudsters are claiming "huge returns" from AI-assisted technologies like bots. "When it comes to AI, this advisory is telling investors, 'Be very wary of the hype,'" said Melanie Devoe, director of the agency's customer education and outreach office.

Other federal officials have also expressed concerns about AI, AI fraud and the impact of AI fraud on the financial stability of individuals and companies. "AI has spread to every corner of the economy, and regulators need to stay ahead of its growth," said Rohit Chopra, director of the Consumer Financial Protection Bureau.

"This is an all-hands-on-deck moment," said Kristen Clarke of the Justice Department's Civil Rights Division. Among other things, the DOJ is working with the CFPB and FTC to ensure AI is not used to support discrimination in lending and housing. end of article

Patti Murphy, self-described payments maven of the fourth estate, is senior editor at the Green Sheet. She also co-hosts the Merchant Sales Podcast, and is president of ProScribes Ink. You can reach her at patti@proscribes.net.


