Inside the shadow market for AI training accounts
AI training firms Scale AI, Mercor, Surge AI, and Handshake have raised billions this year.
Getty Images; Tyler Le/BI
A shadow market for AI training accounts has emerged as demand for data labeling surges.
Opportunists are offering to buy and sell “verified” accounts to access remote AI training gigs.
Business Insider found 100 Facebook groups containing posts promoting this shadow market.
There’s a new black market in the world of AI.
Companies like Scale AI, Surge AI, and Mercor are hiring thousands of contractors around the world to help train AI chatbots by evaluating their responses for Big Tech clients — and that’s given rise to a shadow economy in which workers’ accounts are being illicitly offered for sale.
Business Insider’s investigation found at least 100 groups on Facebook illicitly selling access to real and fake AI training accounts, which contractors require to work for data-labeling companies. After we flagged the phenomenon, Meta removed about 40 groups and pieces of content for violating its policies and is continuing to investigate, a spokesperson said.
The prominent AI training companies ban account reselling and say they have safeguards in place to prevent the practice. But what are purported to be “verified” accounts continue to be up for sale on Facebook, WhatsApp, and Telegram.
Internal documents show Scale AI, which received a $14 billion investment from Meta in June, has been battling fraudulent and duplicate accounts and VPN misuse for years, and the company has barred users from some countries from projects to curb cheating.
These AI training companies have raised billions this year as tech giants race to secure the data needed to train their AI models and hire contractors, who are often referred to as “ghost workers” for their behind-the-scenes role in AI development.
Business Insider’s findings show how AI training, also known as data labeling, is attracting scammers and shortcut seekers, similar to the account sharing that happened for food delivery and ride-hailing apps. It raises concerns for account buyers and sellers, who could be scammed and have their private information leaked, and for the clients paying the contracting companies.
Sara Saab, vice president of product at Prolific, a UK-based data-labeling startup, said the company’s research has shown that no single fraud ring is behind the trade, and that the underground industry has reached a level of sophistication seen in bank fraud or concert ticket scalping.
“Technologies that are helping data labeling companies are also helping people with bad intentions, fraudsters and scammers,” she said.
When the work dries up, opportunists step in
Before contractors can receive tasks such as labeling images or suggesting alternative chatbot responses, they have to create accounts and complete screening tests on platforms like Scale AI-operated Outlier or Surge AI-operated DataAnnotation.tech. The work is remote, asynchronous, and pay can hit over $100 an hour.
Platforms like Outlier offer task-based projects that can last anywhere from a few hours to several months, often in regions where language translation or data annotation work is needed, or where pay rates are lower. When projects dry up, so does contractors’ pay. That’s where opportunists step in.
They target would-be trainers by selling “verified” Outlier accounts that belong to people in countries such as the US, where projects are still active, according to two former Outlier contractors in Kenya. Some may be real accounts, and some may be fake — but either way, reselling accounts is barred by the companies.
The two Kenyan contractors, whose identities are known to Business Insider, said they personally know people who had purchased accounts. They added that some also bought accounts if they were unable to pass the screening tests.
Account buyers use tools like a VPN or a “shadow proxy” to mask their true location by routing their internet connection through another person’s device in the target country, said the two former contractors in Kenya.
Plenty of social media profiles claim they can teach account buyers how to get around data contracting companies’ rules. Business Insider reviewed YouTube channels and Telegram groups that sell would-be taskers guides to “bypass geo-restrictions,” or answers to Outlier’s onboarding tests or specific projects.
Those who “rent” out their accounts to taskers charge an upfront fee, a percentage of future earnings, or both, according to four contractors who have been approached by people looking to buy their accounts.
Business Insider found one paid ad on WhatsApp that was seeking to purchase and resell verified accounts for two of the biggest AI training platforms, Outlier and Mercor.
Playground for scammers
Both sides are wary of scams, two contractors based in the US told Business Insider. Buyers worry that a “seller” may take payment and disappear. Sellers worry that their buyer won’t offer them a percentage of future earnings once they hand over account login details.
In several posts seen by Business Insider, Facebook users said they had been scammed. Some said they paid for an AI training account, only for the scammer to block them and disappear with their money. Others said they were given an email address and password that wasn’t actually registered with an AI training firm.
The two US contractors said that they often received direct messages on Reddit from people seeking to purchase their accounts after they passed the screening tests. Would-be buyers offered them a “fair payment” plus an unspecified cut of future earnings made via the account, according to a chat seen by Business Insider. One contractor said they ignore the requests because their accounts could be banned if they were caught. Another concern: They would be on the hook for income taxes for any paid work performed under their accounts.
Representatives for the biggest data companies, including Scale AI, Mercor, Prolific, and Surge AI, said buying and selling accounts is banned and that they have a variety of mechanisms, from monitoring Facebook groups to account-level pattern analysis, to detect fraud.
Facebook posts advertising the sale of such accounts also violate Meta’s community standards and fraud and scams policy, a spokesperson confirmed.
“We use device, IP, and behavioral safeguards to identify and remove suspicious accounts before they can access any customer work,” said a Scale AI spokesperson.
Scale AI confronts ‘cheaters’
Internal Scale AI documents reviewed by Business Insider show that the company, founded in 2016, has been dealing with scammers for at least two years. On a project for Google in 2024, thousands of taskers were flagged in a document as “suspected spammers” or “cheaters.” In a 2023 spreadsheet titled “Good and Bad Folks” and another called “suspicious non-US taskers,” which Scale accidentally left public to anyone with the link, the company collected details of potential fraudsters.
Another spreadsheet from late 2023 shows rampant use of VPNs and multiple accounts belonging to the same person. The document contains a list of 490 contractors that were removed: 48 for using a VPN and a digital payments app that allows users to withdraw money in US dollars; 70 for accounts registered under the same name; and 11 for having two accounts. Another 21 users were removed for being “low quality” taskers.
The documents also show that a project with an unnamed client was plagued by quality issues. In one project progress tracking document, Scale AI managers discussed various strategies to “be ahead of the spammers,” including banning certain accounts from Egypt, Kenya, Pakistan, and other countries from the project for using ChatGPT, and blocking the copying and pasting of content. The Kenya-based contractors Business Insider spoke to said that AI training project opportunities in the region have sharply declined since late 2024.
The black market for Outlier accounts has spurred another cottage industry: hijacking real ones. Contractors have received fake job promotion emails asking for their login credentials. Scale AI banned one user this summer for harvesting workers’ contact info and spamming them, according to an email seen by Business Insider.
Prolific’s Saab described the targeting of AI training platforms as an “accelerating arms race” between them and fraudsters — and said it requires proactive action from companies to stay ahead.
Have a tip? Contact Shubhangi Goel via email at sgoel@businessinsider.com or Signal at shuby.85. Use a personal email address and a nonwork device; here’s our guide to sharing information securely.
Read the original article on Business Insider.