Tuesday, October 1, 2024
What are some common myths or misconceptions about identity fraud prevention that you've encountered in the industry?
Generative AI is already enabling mass production of near-perfect fakes in response to a clever prompt. As fraud becomes increasingly digital, the ability of humans to detect manipulations declines -- especially where professional fraud is concerned. Agent Smith in "The Matrix" was prophetically right: "Never send a human to do a machine's job."
Fraud-enabling AI is outpacing fraud-preventing AI at an unprecedented pace because the technology is evolving from enabling manipulation to enabling wholesale self-generation. Anyone not employing traffic-level detection in conjunction with case-level detection is protecting only against amateur fraud, not professional fraud.
Most fraudsters are motivated by ease and anonymity. Relying on personal data verification by itself is becoming ever more precarious.
But who says that fraudsters will continue with the modes of operation we see today? The next threat might come not from credentials but from communications. Deepfaked CEOs have already enabled withdrawals in the millions. What is to stop fraudsters from deepfaking you and conducting a video call requesting a reset of your access credentials?
We are already seeing randomization introduced into deepfaked document and face images. Images, as opposed to large language model (LLM) texts and voices, do not have to "make sense" when responding to questions or instructions. And the quality of images and videos is rapidly becoming so convincing that the good old telltales -- blur areas, distortions and the like -- are becoming ever rarer.
Since the target is moving, it makes sense to adopt the strategies used in cyber risk detection and apply them to Gen-AI impersonation detection. Cybersecurity long ago developed beyond the templating of known viruses and attacks and ventured into the search for anomalies that match no pre-existing template. Gen-AI is produced by particular engines, each doing what it does differently. That "algorithmic fingerprint" is one of the promising detection methodologies yet to be perfected.
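To make the idea concrete, here is a minimal Python sketch of one fingerprint-style check, not a production detector: it compares an image's azimuthally averaged power spectrum against a reference profile built offline from genuine camera captures, on the assumption that generative upsampling leaves excess periodic energy at high spatial frequencies. The file names and threshold are hypothetical.

```python
# Minimal sketch of a spectral "algorithmic fingerprint" screen.
# Assumes a reference profile built offline from known-genuine captures;
# the threshold and file names below are illustrative, not tuned values.
import numpy as np
from PIL import Image

def radial_power_spectrum(path: str, size: int = 256) -> np.ndarray:
    """Azimuthally averaged log-power spectrum of an image."""
    img = np.asarray(Image.open(path).convert("L").resize((size, size)), float)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - size // 2, x - size // 2).astype(int)
    # Mean power in each integer-radius frequency band
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    return np.log1p(radial[: size // 2])

def fingerprint_distance(path: str, reference: np.ndarray) -> float:
    """Deviation from the genuine-capture profile; generative engines'
    upsampling stages tend to add periodic high-frequency energy."""
    return float(np.linalg.norm(radial_power_spectrum(path) - reference))

# Usage (hypothetical artifacts): flag for review above a tuned threshold.
# reference = np.load("genuine_profile.npy")   # same length as the spectrum
# if fingerprint_distance("selfie.png", reference) > THRESHOLD:
#     escalate_to_manual_review()
```

In practice each generative engine would get its own learned profile rather than a single distance threshold; the point is that the signature lives in the signal statistics, not in anything a human reviewer can see.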
You mentioned that features like holograms and microprints are often seen as proof of an ID's authenticity. What are the limitations of relying on these features, and how do fraudsters bypass them?
These features are effective for verifying the authenticity of identification documents; however, their practical application is limited, particularly outside of controlled environments like airports equipped with document readers. Both holograms and microprints, along with various other security measures, were designed to be detected using professional scanners that utilize advanced illumination techniques and coaxial lighting.
In contrast, the typical scenario today involves customers capturing their ID and selfies under diverse and often suboptimal conditions. Experience indicates that in the vast majority of cases, the quality of these images falls short of enabling reliable detection, making these security features easy for fraudsters to fake.
Deepfakes are a growing concern in identity fraud. What makes spotting deepfakes more difficult than simply looking for inconsistent reflections or jerky head movements?
Five years ago, this may have been feasible. However, today, unless the fraudster has used a very inexpensive tool, identifying deepfakes is incredibly challenging. And the technology is getting much more powerful; deepfakes will soon be undetectable by human observation.
Certain indicators, such as "jerky head movements," may have helped identify earlier versions of real-time deepfakes -- and may still, depending on the quality of the technology employed. However, the likelihood of customers being asked to engage in such detection methods is minimal.
Politically exposed persons and sanctions checks are often cited as key in preventing money laundering. Can you explain why these alone may not be sufficient to stop identity fraud, and what more robust measures should be in place?
Screening for politically exposed persons (PEPs) and sanctions is indeed a valuable tool for flagging risk based on verified data. However, a critical question arises regarding the extent to which the available data encompasses all potentially relevant risk cases.
Currently, this coverage is far from comprehensive. If financial institutions, law enforcement agencies, and government entities were to make their knowledge bases accessible for screening purposes -- obviously in a controlled manner that preserves privacy -- the efficiency of risk assessment would increase significantly. In summary, while PEP and sanctions screening is undoubtedly robust, it remains only partially effective due to incomplete data availability.
What role does technology, like AI and machine learning, play in debunking these myths and helping to accurately detect and prevent ID fraud?
AI, along with the machine learning algorithms behind it, plays an increasing role in identity fraud prevention. It does so primarily in the diagnostics of photos and biometrics. AI's big plus over human examination is its ability to detect manipulations that are not visible to the human eye -- what we call "digital manipulations and generative artifacts." AI's big plus over AML screening and data verification is its ability to "connect the dots" between flags that haven't been pre-identified as related.
AI is a very effective anomaly and relationship discovery tool, so it reduces dependence on the genius who may or may not discover them. AI also helps beef up fraud discovery by adding collateral factors such as device flags and digital/social footprints. Just to set the record straight, AI as a discovery or detection tool will not always be accurate, since it still relies on learning (hence, "machine learning"), and a "complete" representative sample of reality is never available.
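To illustrate the "connect the dots" point, here is a hedged sketch that links onboarding applications sharing identifiers and surfaces clusters too dense to be coincidence. The field names and the cluster-size threshold are illustrative assumptions.

```python
# Sketch of relationship discovery across fraud flags: applications that
# share identifiers (device, phone) are linked through common attribute
# nodes, and unusually large connected clusters are surfaced for review.
import networkx as nx

applications = [
    {"id": "A1", "device": "dev-9", "phone": "555-0101"},
    {"id": "A2", "device": "dev-9", "phone": "555-0102"},
    {"id": "A3", "device": "dev-3", "phone": "555-0102"},
    {"id": "A4", "device": "dev-7", "phone": "555-0199"},
]

G = nx.Graph()
for app in applications:
    G.add_node(app["id"])
    # A shared attribute becomes a node linking every application using it
    for key in ("device", "phone"):
        G.add_edge(app["id"], f"{key}:{app[key]}")

for component in nx.connected_components(G):
    linked = sorted(n for n in component if n.startswith("A"))
    if len(linked) >= 3:  # cluster-size threshold is a tunable assumption
        print("possible fraud ring:", linked)  # -> ['A1', 'A2', 'A3']
```

No single application here looks suspicious on its own; the signal only appears once the previously unrelated flags are connected.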
But who says that AI will always learn from samples? Isn't AI about artificial intelligence, and isn't intelligence about figuring things out?
For businesses looking to protect themselves against identity fraud, what are the most effective prevention methods that go beyond the traditional myths?
Organizations stand to gain significantly by approaching identity verification and authentication -- specifically onboarding and access management -- analogously to their strategies for cyber defense. While early AI manipulations may be detectable through visual or auditory means, the advancement of AI, particularly generative AI, necessitates the implementation of robust automation.
It is essential to accept that reliance on human senses for detection is increasingly unreliable; therefore, detection must transition to a digital framework. This shift requires organizations to adopt a dual-layered AI attack detection strategy, encompassing both case-level and traffic-level analyses, and to critically evaluate their detection methodologies.
Relying solely on AI trained to distinguish large datasets of fake images from real ones will not provide a long-term solution. Organizations can draw valuable lessons from the evolution of cyber attack detection to address generative AI-powered impersonation attacks more efficiently.
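By way of illustration, a dual-layered decision might combine a per-case detector score with a traffic-level check for the same signature recurring across the submission stream. Everything in this sketch -- the scores, window size and thresholds -- is an assumed placeholder, not a recommended setting.

```python
# Sketch of dual-layer detection: case-level score per submission plus a
# traffic-level count of how often the same signature recurs recently.
# Thresholds, window size and the fingerprint scheme are assumptions.
from collections import Counter, deque

window = deque(maxlen=1000)   # recent signatures (traffic level)
seen = Counter()

def assess(case_score: float, fingerprint: str) -> str:
    """case_score: 0..1 from a per-image detector (case level).
    fingerprint: a coarse signature of the submission, e.g. a quantized
    spectral profile or perceptual hash (hypothetical here)."""
    if len(window) == window.maxlen:      # slide the window
        seen[window.popleft()] -= 1
    window.append(fingerprint)
    seen[fingerprint] += 1

    if case_score > 0.9:
        return "reject"                   # the case alone is decisive
    if case_score > 0.5 and seen[fingerprint] > 5:
        return "review"                   # weak case signal, but the same
                                          # signature keeps recurring
    return "accept"
```

A mass-produced attack that slips past the case-level check as a string of individually plausible submissions still betrays itself at the traffic level through its repetition.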