
Deepfakes and Check Fraud — What is the Connection?

  • "Deepfakes" are emerging as a threat
  • Detection of deepfake material is getting harder and harder
  • "It's more important than ever to have a multi-pronged anti-fraud strategy."

As artificial intelligence (AI) technology continues to advance, a new type of media manipulation known as "deepfakes" poses significant challenges for businesses, governments, and individuals. Deepfakes use machine learning techniques to create or modify images, audio, and videos in a way that is incredibly difficult to distinguish from the genuine article.

Unit 21 takes a look at the rapid advancement of deepfake technology, and the growing threat it poses to businesses, governments, and individuals alike. Deepfakes - media manipulated by artificial intelligence to depict someone as doing or saying something they did not - are becoming increasingly difficult to detect, opening the door to a wide range of malicious uses.

As explained on the page:

A deepfake is a piece of media created or manipulated by artificial intelligence to make a person depicted by the media seem as if they are someone else.


This is achieved through a "generative adversarial network" (GAN) where two AI models - a "generator" and a "discriminator" - work against each other to create increasingly realistic forgeries.

This process allows the generator to eventually create or manipulate media so accurately that neither artificial intelligence nor human intelligence can easily tell the difference between a deepfake and the genuine media it's based on.
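For readers curious about how that adversarial back-and-forth works in practice, below is a minimal, hypothetical GAN training loop written in PyTorch. The toy dimensions, network sizes, and the use of random vectors in place of real images are illustrative assumptions only; real deepfake systems train far larger image and audio models on large datasets.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce
# samples that a discriminator cannot distinguish from "real" ones.
import torch
import torch.nn as nn

noise_dim, data_dim = 16, 64  # assumed toy dimensions

generator = nn.Sequential(
    nn.Linear(noise_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in "real" media; in practice these would be genuine images or audio.
    real = torch.randn(32, data_dim)
    fake = generator(torch.randn(32, noise_dim))

    # Discriminator learns to label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to make the discriminator label its output as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key point is the competition: every improvement in the discriminator forces the generator to produce more convincing output, which is exactly why mature deepfakes are so hard to spot.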

Fighting Back

The potential for abuse is significant, as deepfakes can be used for fraud, impersonation, and the spread of misinformation. As the webpage notes, "Deepfake technology has garnered plenty of controversy for its ability to facilitate potentially abusive – and even illegal – activities."


For example, if a criminal steals a person's sensitive information and gets good enough samples of what they look and sound like, they can use deepfakes to create phony ID credentials that are very difficult to identify as counterfeit.

Combating the threat of deepfakes will require a multi-pronged approach, including advancements in detection technology and heightened vigilance from businesses, governments, and individuals. As the webpage states, "This highlights why it's more important than ever to have a multi-pronged anti-fraud strategy."

A fraud team needs to be able to monitor for many different types of suspicious signs that abuse or financial crime could be happening at a financial institution (FI) or marketplace.

Deepfakes and Check Fraud -- What's the Connection?

You may be wondering why we are covering deepfakes; on the surface, there would not appear to be any connection with check fraud.

Well, that's where you are wrong. There are several different connections between the two.

First, deepfakes can be utilized to help create new IDs that can be used to open new accounts -- aka drop accounts. This includes driver's license photos and passport photos.

Second, the AI generation utilized by fraudsters to develop new photos and images can also be used to create new checks in an instant. With stolen bank information and a few check images, the technology can easily create convincing new fake checks. These checks can then be printed on a standard printer and deposited via channels like mobile remote deposit capture (mRDC).

Lastly, the same technique can be utilized to alter checks. By uploading the image of a stolen check, AI can easily modify the image with a few simple prompts to change the payee and amount. This bypasses the major flaw of check washing, where the item shows discoloration or disfigurement that is noticeable to the human eye -- important, as some financial institutions invest in resources to have every item in their mRDC channel reviewed by a human.

 Mobile RDC to deposit checks is an essential offering for banking apps.

Detection of deepfakes requires a multi-pronged approach, similar to the one fraud teams use for check fraud detection. When it comes to analyzing images created by AI, fraud teams deploy similar AI technologies that interrogate the images to detect flaws or inconsistencies. For checks, this means leveraging technologies like Anywhere Fraud with OrbNet Forensic AI to analyze the check stock, handwriting styles, and signatures for any inconsistencies with previously cleared items. Combine these technologies with others like behavioral analytics, consortium data, and dark web monitoring, and you have a strong defense against check fraud -- even AI-generated check images.
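As a rough illustration of the "compare against previously cleared items" idea, here is a hypothetical Python sketch that reduces check images to normalized grayscale signatures and routes a deposit for manual review when it does not resemble anything the customer has cleared before. The helper names (image_signature, review_deposit), the similarity threshold, and the file paths are assumptions for illustration only; production tools such as OrbNet Forensic AI use far more sophisticated models.

```python
# Toy illustration (not the OrbNet Forensic AI product): flag a deposited
# check image whose appearance differs sharply from previously cleared items.
import numpy as np
from PIL import Image

def image_signature(path, size=(128, 64)):
    """Reduce a check image to a small, normalized grayscale array."""
    img = Image.open(path).convert("L").resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    # Normalize for lighting and contrast so only structure is compared.
    return (arr - arr.mean()) / (arr.std() + 1e-8)

def similarity(sig_a, sig_b):
    """Correlation between two signatures; values near 1.0 mean similar check stock."""
    return float(np.mean(sig_a * sig_b))

def review_deposit(new_check, cleared_checks, threshold=0.5):
    """Hypothetical rule: route for manual review if the new item does not
    resemble any of the customer's previously cleared checks."""
    new_sig = image_signature(new_check)
    best = max(similarity(new_sig, image_signature(c)) for c in cleared_checks)
    return "manual_review" if best < threshold else "auto_accept"

# Example usage with placeholder file paths:
# decision = review_deposit("deposit_1234.png", ["cleared_001.png", "cleared_002.png"])
```

In a real deployment, this simple pixel-level comparison would be replaced by models trained on check stock, handwriting, and signature features, and combined with the behavioral and consortium signals mentioned above.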
