Generative AI Turns Online Business Into Chaos

The Growing Threat of AI-Driven Scams

Last year, Ian Lamont found himself in a troubling situation when his inbox filled with inquiries about a job listing he hadn’t posted. Upon checking LinkedIn, he discovered a fake job posting for a “Data Entry Clerk” using his company’s name and logo. This was just the beginning of a larger issue involving AI-generated scams that have become increasingly common.

Lamont soon realized that someone had created a fake profile of his company’s manager, complete with an AI-generated face and only a dozen connections. He spent days warning visitors to his website about the scam and convincing LinkedIn to remove the fake listing. By then, over twenty people had reached out to him directly, and he suspected many more had applied.

The potential of generative AI (GenAI) to enhance business operations is immense. According to a 2023 estimate from McKinsey, GenAI could add more value to the global economy annually than the entire GDP of the United Kingdom. However, this same technology also poses significant risks. Since the launch of ChatGPT in 2022, online businesses have had to contend with a rapidly growing deepfake economy, where it’s increasingly hard to distinguish between real and fake content.

Chainabuse, a scam reporting platform, reported that GenAI-enabled scams have quadrupled in the past year. A Nationwide insurance survey of small business owners last fall found that a quarter had faced at least one AI scam in the previous year. Microsoft claims it now blocks nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, a researcher at Georgetown University, refers to the GenAI boom as the “industrial revolution for scams,” highlighting how it automates fraud, lowers entry barriers, reduces costs, and increases access to targets.

Real-World Consequences

The consequences of falling for an AI-driven scam can be severe. Last year, a finance clerk at the engineering firm Arup joined a video call with what he believed were his colleagues. It turned out that each attendee was a deepfake recreation of a real coworker, including the organization’s chief financial officer. The fraudsters asked the clerk to approve overseas transfers amounting to more than $25 million, and he complied, assuming the request came through the CFO.

Business Insider spoke with professionals across various industries—recruitment, graphic design, publishing, and healthcare—who are working to protect themselves and their customers from AI’s evolving threats. Many feel like they’re playing an endless game of whack-a-mole, with the challenges increasing daily.

Case Studies and Cybersecurity Measures

In another incident, fraudsters used AI to create a French-language replica of Oishya, a Japanese knife store, and sent automated scam offers to its Instagram followers. Nearly 100 people fell for the scheme, believing they had won a free knife and needed only to pay a small shipping fee. Kamila Hankiewicz, who runs Oishya, learned about the scam only after several victims contacted her. She has since strengthened her cybersecurity measures and launched campaigns to educate customers on spotting fake communications.

Rob Duncan, VP of strategy at Netcraft, notes that GenAI tools allow even novices to clone brand images and craft convincing scam messages within minutes. With cheap tools, attackers can easily impersonate employees, fool customers, or mimic partners across multiple channels.

The Evolving Landscape of AI Scams

Text is just one front in the battle against malicious AI use. With the latest tools, a solo adversary can create a convincing fake job candidate in under an hour. Tatiana Becker, a tech recruiter in New York, says deepfake job candidates have become an “epidemic.” She now asks for ID and poses open-ended questions to detect fakery.

Nicole Yelland, a PR executive, encountered a similar situation when a scammer impersonating a startup recruiter sent her an email with a detailed slide deck outlining a role. During the interview, the "hiring manager" refused to speak aloud and asked her to type responses in the chat. Alarm bells went off when the interviewer requested private documents, including her driver's license.

Identity Verification and Platform Challenges

Videoconferencing platforms like Teams and Zoom are improving at detecting AI-generated accounts, but some experts warn that this may lead to an arms race. Jasson Casey, CEO of Beyond Identity, believes the focus should be on authenticating a person’s identity. His company offers tools that verify meeting participants through biometrics and location data.

Fake versions of individuals are also becoming a problem. In late 2024, scammers ran ads on Facebook featuring a deepfaked version of Jonathan Shaw, the deputy director of the Baker Heart and Diabetes Institute. The fake video falsely claimed that metformin was dangerous and recommended an unproven supplement. Several patients reached out asking how to get the supplement, causing confusion and concern.

The Rise of AI Slop

Another challenge is the proliferation of low-quality, mass-produced AI-generated content known as “AI slop.” Social platforms’ recommendation engines have promoted such content, leading users to fall for fake rental properties, appliances, and other items. On Pinterest, AI-generated “inspo” posts have led to orders for cakes that don’t exist. Nima Etemadi, cofounder of Cake Life Shop, says most customers are receptive to learning about real cake possibilities after being informed.

Similarly, AI-generated books have flooded Amazon, cutting into publishers' sales. Pauline Frommer, president of Frommer Media, says AI-generated guidebooks are climbing bestseller lists on the strength of fake reviews. These practices make it difficult for new, legitimate brands to compete.

Regulatory and Labeling Efforts

While the FTC now considers fake AI-generated product reviews illegal, official policies have not yet caught up with AI-generated content itself. Platforms like Pinterest and Google have started to watermark and label AI-generated posts, but concerns remain about potential unintended consequences, such as “label fatigue.”

For now, small business owners must stay vigilant. Robin Pugh, executive director of Intelligence for Good, advises validating interactions with actual humans and ensuring money is sent to the correct destination.

Etemadi acknowledges that while AI can help his business, scammers will also use the same tools to become more efficient. “Doing business online gets more necessary and high risk every year,” he says. “AI is just part of that.”
