The Green Sheet Online Edition

January 1, 2026 • 26:01:02

Agentic commerce, Part 1: Our biggest integration so far

Payment system architects and engineers describe the evolution from simple POS devices to agentic commerce as a journey shaped by technological breakthroughs and costly break-ins. Each paytech innovation has expanded convenience while introducing new threats.

Experts believe agentic commerce may follow the same pattern. In this first article in a series on agentic commerce, payments industry leaders assess the risks of integrating agentic AI into existing payment platforms.

Goran Bosankić, co-founder and chief revenue officer at Field39, a global paytech company, has witnessed numerous groundbreaking changes over several decades.

"EMV, contactless, Apple Pay, 3D Secure, tokenization and many other technologies introduced new risks and opportunities and each time, the pattern of balancing consumer protection with ease of use was almost always the same," he said. "Today we face a similar challenge with the rising trend of agentic commerce."

Deep fakes, automated attacks

Agentic commerce, Bosankić noted, delivers a nearly frictionless user experience but introduces risks and security challenges that must be addressed before mass adoption. For example, a Gartner study published in June 2025 revealed that AI agents can create new attack surfaces when embedded into enterprise applications.

"When Microsoft 365 Copilot was released, the massive adoption and ease of use created a new attack surface for the existing risks of oversharing due to insufficient SharePoint data access management," Gartner researchers wrote. "Similarly, accessible automations through embedded features, or by no-code agent development platforms (for example, AutoGen, CrewAI), will create new attack surfaces and trigger increased use."

Allen Kopelman, co-founder and CEO of Nationwide Payment Systems, mentioned that bad bots and deep fakes have been plaguing risk managers and underwriters. "I've seen companies with fake Facebook pages, websites and Google files look convincing enough to fool a bank," he said.

"It's not like the old days, when it was easy to spot a fake; today we use AI tools to detect bogus bank statements and driver's licenses."

Fraudsters will open a bank account at a fintech or neobank, Kopelman explained, then put a fake business name on the bank letter.

He's also heard underwriters raise concerns about phony applications and bust outs, in which fraudsters run massive numbers of stolen credit cards as soon as their merchant account is activated. Then they disappear.

LLM hallucinations

Thomas Müller, co-founder and CEO at Rivero, a European fintech, advocated the use of rules-based AI in commerce and banking applications, stating that these technologies deliver a more consistent and reliable customer experience than large language models (LLMs), which can hallucinate, confidently presenting fabricated information as fact.

"There's a lot of excitement about AI, but people who see beyond the hype recognize the need to map problems to solutions in a secure, predictable way," he said. "I would urge banks and service companies not to expose customers to a large language model or work with a fintech that uses LLM for customer service."

Air Canada learned this the hard way, Müller noted, when its chatbot invented a refund rule that didn't exist. When the consumer went to claim that refund, the airline was held liable for the chatbot's actions and had to refund the flight. This highlights why LLMs are not ideal for building virtual agents and apps, he said.

Regulators are also taking a hard look at AI, Müller pointed out, citing the European Union's Artificial Intelligence Act, enacted in March 2024 and designed to be phased into law over a two-year period.

The regulatory framework makes it more difficult for banks and B2B fintechs to build products on top of machine learning models that are not deterministic or explainable, he said, adding that he agrees with and supports these protections.

Multilayered protections

Troy Leach, chief strategy officer at Cloud Security Alliance, a not-for-profit organization specializing in cloud computing security best practices, highlighted the need to balance agentic AI's speed and convenience with a proportionately robust security framework.

Leach said the key is "to understand the risk and limitations associated with the technology and to understand this requires a layered approach, not only because the security is still being defined as AI advances but because we have taken the most complex part of the transaction equation, the consumer, and added the complexity of a non-human that could drift into non-deterministic decisions."

Agents that act on behalf of humans need to be carefully designed, Leach noted, to prevent legal violations, such as ignoring age restrictions or enabling purchases prohibited in certain jurisdictions. Step-up authorizations for pricing thresholds, purchase frequency or delivery locations, along with robust monitoring and "circuit breakers" that intervene when agents make errors or are manipulated, can help mitigate these risks, he added.
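The safeguards Leach describes can be pictured in code. The sketch below is purely illustrative, not any network's actual implementation; every threshold, region code and return string is a hypothetical example of step-up triggers and a circuit breaker for a purchasing agent.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPurchaseGuard:
    """Illustrative guardrails for an AI purchasing agent (all values hypothetical)."""
    max_amount: float = 200.0           # step-up authorization above this price
    max_purchases_per_day: int = 5      # purchase-frequency threshold
    allowed_regions: set = field(default_factory=lambda: {"US", "CA"})
    purchases_today: int = 0
    tripped: bool = False               # circuit-breaker state

    def check(self, amount: float, region: str) -> str:
        if self.tripped:
            return "blocked: circuit breaker open"
        if region not in self.allowed_regions:
            self.tripped = True         # trip the breaker on a jurisdiction violation
            return "blocked: prohibited jurisdiction"
        if self.purchases_today >= self.max_purchases_per_day:
            return "step-up: frequency threshold reached"
        if amount > self.max_amount:
            return "step-up: price threshold exceeded"
        self.purchases_today += 1
        return "approved"
```

Once tripped, the breaker blocks everything until a human intervenes, which is the point: an agent that has drifted or been manipulated should fail closed, not keep transacting.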

Rather than becoming complacent or assuming existing controls are sufficient, Leach stated, businesses should take AI agents seriously because they have changed the consumer variable.

Service providers need to re-evaluate every aspect of security and fraud prevention as we evolve beyond static, human-operated transactions, he added.

Non-human authentication

Leach recommended leveraging existing security frameworks and advanced tokenization, such as Visa's Trusted Agent Protocol and Mastercard's Agent Pay, to verify and authenticate AI agents. "We need to think of agents as another type of non-human identity (NHI)," he said.

"Bolting additional trust layers onto agentic AI communication that are not inherent, such as encrypted authentication, will help distinguish 'good agents' from 'bad bots.'"

Stressing the need to properly identify AI agents that make autonomous decisions, Leach pointed out that credential tokens and cryptographic authentication will help establish trust.

In fact, he noted, there are ways to verify and register AI agents that are similar to applying for a new credit card.

These methods are already in place with Visa, Mastercard and OpenAI's Instant Checkout feature linked to Stripe merchants.
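The register-then-verify pattern Leach outlines can be sketched with generic keyed hashing. This is not Visa's Trusted Agent Protocol or Mastercard's Agent Pay, whose mechanics are proprietary; it simply shows, under assumed names (`AGENT_KEYS`, `agent-001`), how a merchant might distinguish a registered non-human identity from an unknown bot.

```python
import hashlib
import hmac
import json

# Hypothetical credential registry, populated when an agent is registered
# with the network (akin to applying for a new credit card, per Leach).
AGENT_KEYS = {"agent-001": b"secret-issued-at-registration"}

def sign_request(payload: dict, key: bytes) -> str:
    """Agent side: sign the request body so its origin can be verified."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, payload: dict, signature: str) -> bool:
    """Merchant side: accept only registered 'good agents' with valid signatures."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unregistered non-human identity
    expected = sign_request(payload, key)
    return hmac.compare_digest(expected, signature)
```

A tampered payload or an unregistered agent ID fails verification, which is the "additional trust layer" bolted onto agent communication that Leach describes.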

Early-day trial and error

Broader controls for human and AI intercommunication are also needed, Leach said, pointing to Model Context Protocol (MCP), an open-source tool developed by Anthropic. MCP enables agents to communicate with both external systems and peer agents and is already being adopted by key frontier models.

He cautioned, however, that MCP's current form may resemble early email protocols such as POP3 and IMAP, technologies developed decades ago that remain widely used despite inherent vulnerabilities and the layers of security required to mitigate modern risk.
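MCP messages are framed as JSON-RPC 2.0, so a minimal request looks like the sketch below. The `tools/call` method is part of the MCP specification; the `lookup_price` tool and its arguments are hypothetical examples.

```python
import json

def mcp_request(req_id: int, method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 message of the kind MCP transports exchange."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# An agent asking a server to invoke a (hypothetical) pricing tool:
msg = mcp_request(
    1, "tools/call", {"name": "lookup_price", "arguments": {"sku": "42"}}
)
```

Like POP3 and IMAP in Leach's analogy, nothing in the message itself proves who sent it; authentication and authorization must be layered on by the transport and the host application.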

Leach likened today's agentic AI initiatives to the early days of the internet, when browsers first introduced communication and security cues such as Netscape's iconic gold lock.

Twenty-five years ago, we saw a rise in fraudulent activity, he said, adding that a similar pattern is likely to emerge unless authentication and guardrails, such as spending limits, restricted purchasing categories and other controls, are broadly understood and effectively managed.

"Agentic AI is truly groundbreaking," Leach concluded. "It will do many great things for consumers and will become part of daily life faster than the internet did. The key is understanding the risks and limitations that come with the technology."

Part 2 of this series will explore the transformative potential of AI-powered commerce.

Dale S. Laszig, content strategy director at The Green Sheet and founder and CEO at DSL Direct, is a payments industry journalist, creator and consultant. Connect via email at dale@dsldirectllc.com and LinkedIn at www.linkedin.com/in/dalelaszig.

Notice to readers: These are archived articles. Contact information, links and other details may be out of date. We regret any inconvenience.
