AI Will Be the Next Tool for Fraudsters
- AI is a startlingly effective tool
- Fraudsters have already discovered AI's utility
- Corporations and governments alike are warning of AI's impending misuse
Not long ago we discussed how generative AI is gaining popularity, embraced for its ability to create images, audio, and text quickly and inexpensively. That low cost is the key factor: with very little investment of time or learning effort, anyone can become proficient at using generative AI.
Microsoft’s chief economist, Michael Schwarz, has a warning for all of us. While serving on a World Economic Forum panel in Geneva on Wednesday (May 3), Mr. Schwarz said, as reported by Bloomberg News via PYMNTS.com: “I am confident AI will be used by bad actors, and yes, it will cause real damage.”
“Once we see real harm, we have to ask ourselves the simple question: ‘Can we regulate that in a way where the good things that will be prevented by this regulation are less important?’ The principles should be, the benefits from the regulation to our society should be greater than the cost to our society.”
Many Are Issuing Warnings About AI
Mr. Schwarz is not alone in calling for greater scrutiny of artificial intelligence (AI) tools, whose use has become more and more common since the arrival of ChatGPT. Recently, Geoffrey Hinton, widely known as the "godfather of AI," left Google and warned of the technology's dangers.
As PYMNTS reported last month, the rapid development of AI capabilities, coupled with attractive, industry-agnostic use cases, has challenged regulators and lawmakers around the globe as they race to respond.
In recent weeks, U.S. Senate Majority Leader Chuck Schumer (D-N.Y.) introduced a framework of rules designed to chart a path for the U.S. to regulate the AI industry, while the Biden administration issued a formal request for comment to shape U.S. AI policy.
Additionally, at a congressional hearing on April 18, 2023, Federal Trade Commission Chair Lina Khan warned that AI technology like ChatGPT could "turbocharge" fraud.
“AI presents a whole set of opportunities, but also presents a whole set of risks,” Khan told House lawmakers. “And I think we’ve already seen ways in which it could be used to turbocharge fraud and scams. We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action.”
AI and Fraud: The Next "Must-Have" Tool
Fraudsters are notoriously early adopters of any tech that can aid their enterprise, and AI is no exception.
“People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs,” Gerhard Oosthuizen, chief technology officer of Entersekt, told PYMNTS earlier this year.
He went a step further, adding that scammers could even ask a generative AI tool, “How would I defraud a customer?” and have the engine produce a list. They could also simply ask for “10 ways to run a phishing campaign” and get back several fully explained, actionable strategies.
One example of generative AI being used in a scam comes from an FTC consumer warning: fraudsters "clone" the voice of a loved one, then call the victim feigning an emergency and asking for money.
The fraudulent applications for generative AI are nearly unlimited. From "cloning" voices to generating images of checks, fraudsters will take every avenue and use every trick in the book to scam victims. Banks will be the last line of defense for their customers, and having technologies in place to stop scammers will be key: identifying "drop accounts," preventing account takeovers, and deploying AI for check fraud detection, as sketched below.
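To make the "drop account" idea concrete, here is a minimal, hypothetical sketch of one kind of rule a bank might layer into its monitoring: an account that receives a sizable deposit and moves most of it back out within a short window is a candidate for review. The `Txn` record, field names, and thresholds below are illustrative assumptions, not any specific bank's or vendor's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical transaction record; real core-banking schemas will differ.
@dataclass
class Txn:
    timestamp: datetime
    amount: float   # positive = deposit, negative = outbound transfer
    channel: str    # e.g., "check", "wire", "ach"

def looks_like_drop_account(txns: list[Txn],
                            window: timedelta = timedelta(hours=48),
                            min_inflow: float = 1000.0,
                            passthrough_ratio: float = 0.9) -> bool:
    """Flag accounts where most of a large inflow leaves again quickly.

    Thresholds are illustrative placeholders, not production values.
    """
    for dep in (t for t in txns if t.amount > 0):
        if dep.amount < min_inflow:
            continue
        # Sum the outflows that occur shortly after this deposit.
        outflow = sum(-t.amount for t in txns
                      if t.amount < 0
                      and dep.timestamp <= t.timestamp <= dep.timestamp + window)
        if outflow >= passthrough_ratio * dep.amount:
            return True  # candidate for manual review, not an automatic verdict
    return False

# Example: a $5,000 check deposit drained within a day.
history = [
    Txn(datetime(2023, 5, 1, 9, 0), 5000.0, "check"),
    Txn(datetime(2023, 5, 1, 15, 30), -2400.0, "wire"),
    Txn(datetime(2023, 5, 2, 8, 0), -2500.0, "ach"),
]
print(looks_like_drop_account(history))  # True -> escalate for review
```

In practice, banks combine many signals like this with machine-learning models scored across accounts and channels; a single pass-through rule is only a starting point for illustration.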