By Marché Arends and Kathryn Cleary

This investigation was supported by the Pulitzer Center’s Artificial Intelligence Accountability Network 

On the surface, the South African job market looks busy: scrolling through LinkedIn reveals hundreds, sometimes thousands, of postings for “AI Tutors” or “AI Trainers” from the micro-tasking platform Mindrift.

Large language models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini have become household names around the world, as people use these AI tools for day-to-day tasks like writing emails, generating images or researching topics. According to a July report by DataReportal and Meltwater, Kenya topped the global charts: 42.1% of Kenyan internet users aged 16 and above had used ChatGPT in the past month.

As Big Tech companies race to build ever more powerful artificial intelligence technologies, the effort isn’t confined to sprawling data centers or stockpiles of advanced chips. It also depends on a growing workforce of digital workers with diverse skillsets. 

These workers, now called AI tutors or trainers, are in some cases specialists who hold graduate degrees in fields ranging from physics to comparative literature. They are tasked with refining the answers of models like ChatGPT and guiding them toward more sophisticated behaviour. Their job is to correct errors, shape responses and, at least seemingly, “teach” the models how to perform.

Despite the name “tutor” or “trainer”, AI models like ChatGPT don’t actually “think”; they depend on carefully curated training data that requires human insight and oversight at every step of the development process. As the technology shifts from generating simple text to tackling what developers call “reasoning”, companies have come to require increasingly technical and precise data.

South African digital worker Tasneem* has been with Mindrift for over a year. Her days usually start in the same way: she opens her laptop, logs onto Discord and hopes there is work for her to do. Much of her day is spent in what feels like an endless loop – waiting, refreshing her screen, and waiting again. 

“There was a project called ‘Elephant’, they made an announcement and everyone had to leave elephant emojis on the post to show their interest,” says Tasneem. “There were so many elephant emojis, but if you’re not fast enough you don’t land the project.”

Digital workers like Tasneem exist within a labour system consisting of subsidiary companies like Mindrift, which offer micro-tasking work, sugar-coated as AI training or tutoring. Micro-tasking is a process where large projects are broken down into smaller tasks and collectively, these tasks contribute to forming large training datasets for different types of AI models, most notably LLMs like ChatGPT.

Over the past year, we have investigated the mass recruitment strategies behind popular AI micro-tasking companies. We collected and analysed thousands of job postings from Mindrift, a subsidiary of the formerly Russian-owned tech company Toloka. In 2024, Toloka was exposed by The Bureau of Investigative Journalism as having provided training data for the Russian government’s surveillance tech. Since then, Toloka has become part of the Netherlands-based Nebius Group, which was spun off from Toloka’s previous Russian parent, Yandex.

Hundreds of thousands of job postings for highly-skilled AI tutors and trainers could signal a shift in the data work industry away from so-called ‘digital sweatshops’, but in reality, little has changed. Our investigation reveals that despite thousands of job postings, these companies offer no promise of work, but instead leverage digital workers as collateral to win Big Tech contracts. Experts say this exploitative strategy is fuelled by Big Tech’s insatiable drive towards something that may never exist: super-intelligent AI. 

From mid-March through mid-July of this year, Mindrift advertised over 5,770 jobs on Workable, a prominent third-party applicant tracking system that Mindrift uses to post jobs to LinkedIn. According to Mindrift, however, these aren’t even jobs; they’re “gigs”.

We built a dataset of Mindrift’s job postings through systematic monitoring of publicly available postings on Workable from March through July. To ensure accuracy in our analysis, the data referred to in this story covers 16 March through 19 July 2025.
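The monitoring approach described above can be sketched roughly as follows. This is a minimal illustration, not our actual pipeline; the posting IDs, titles and the `snapshots` structure are hypothetical stand-ins for whatever a scraper of public listings would return on a given day.

```python
from datetime import date

# Each snapshot is a hypothetical day's scrape of public postings:
# a mapping of posting ID -> job title.
snapshots = {
    date(2025, 3, 16): {"j1": "Freelance English Editor - AI Tutor"},
    date(2025, 3, 17): {"j1": "Freelance English Editor - AI Tutor",
                        "j2": "Freelance Physicist - AI Tutor"},
}

def cumulative_postings(snapshots):
    """Union of all postings ever observed across daily snapshots.

    Postings that disappear from later snapshots are kept, so the
    total reflects everything advertised over the whole window.
    """
    seen = {}
    for day in sorted(snapshots):
        seen.update(snapshots[day])
    return seen

all_postings = cumulative_postings(snapshots)
print(len(all_postings))  # 2 distinct postings observed over the window
```

Counting the union over the window, rather than any single day’s listings, is what lets a figure like “over 5,770 jobs” accumulate even as individual ads expire.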

“Help shape the future of AI,” Mindrift’s job posts read. “Get paid for your expertise,” and “Take part in a part-time, remote, freelance project.” But what the numerous ads omit is that there is no promise of work; instead, as Tasneem says, it’s often one big waiting game.

Many workers inside the company say that after onboarding they sit for weeks, even months, with empty dashboards and no tasks to complete. Mindrift workers are not paid hourly but per accurately completed task. No tasks means no chance to earn.

“I see the same few names [getting tasks] most of the time,” adds Tasneem.

According to a Mindrift spokesperson, “task availability on the platform can fluctuate. While some contributors begin work immediately after onboarding, others are activated as new projects arise based on demand.” 

On the inside, the recruitment drive was paying off. On 7 July, workers were greeted with a celebratory announcement on Discord: “10,000 MINDS!” read the post, “The Mindrift server has just hit an incredible milestone — 10,000 members in our community!” 

Mindrift announcement of 10,000 community members

When Tasneem saw this, she knew this meant her chance of making money had plummeted. “There is definitely not enough work for 10,000 people,” she says. 

The strategy behind flooding the job market

Kenyan digital worker Ephantus Kanyugi has worked for at least five different micro-tasking platforms, specifically as a trainer to onboard new recruits.

“Just in Kenya, we had over 100 trainers, and each person was required to have 100 people in their boot camps every week. That’s 10,000 people every week,” he says, reflecting on past experiences with other micro-tasking companies.

The majority of workers, Ephantus says, are not paid for any of the work completed during their first month with the company. “You find that a lot of people get frustrated and leave, maybe out of 100, 10 or so remain, and then you are requested to recruit another 100,” he says.

While companies like Mindrift post jobs to platforms like Workable and LinkedIn, digital workers within the company are also able to recruit people within their networks, at times using personal referral links. Mindrift confirmed the existence of a referral programme that allows active AI Tutors to recommend qualified individuals from their professional networks to join Mindrift. “We value these internal recommendations as one of several methods we use to find high-quality, skilled contributors,” explains Mindrift’s spokesperson. 

While this poster alleges that Mindrift will treat applications ‘specially’, Mindrift states that the assessment and selection process is consistent for all applicants, and that selection decisions are not based on referral status.
Clicking on a personal referral link reveals a unique sign-up form that differs from Mindrift’s normal application process, where applicants are required to submit their CV and state their level of language proficiency.

“A referral fee is provided to the referrer after the person they recommended has passed all required quality-assurance checks and onboarding, joined a project, and successfully delivered tasks,” they add.

Of the over 5,770 jobs posted by Mindrift, there are only 238 unique job titles, meaning the same titles were reused over and over across hundreds of postings. 

For example, listings titled “Freelance English Editor – AI Tutor”, “Freelance English Proofreader – AI Tutor” and “Freelance English Proofreader / AI Tutor” sound and look slightly different, but all have identical job descriptions and requirements. The result is an inflated appearance of opportunity, where the same role may be listed dozens of times across different countries with minimal variation.
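The way thousands of postings collapse into a few hundred titles can be illustrated with a small sketch. The titles below are the examples quoted above; the normalisation rule (lower-casing and collapsing punctuation) is our own assumption for the illustration, chosen so that near-identical variants group together.

```python
import re
from collections import Counter

titles = [
    "Freelance English Editor – AI Tutor",
    "Freelance English Proofreader – AI Tutor",
    "Freelance English Proofreader / AI Tutor",
    "Freelance English Proofreader / AI Tutor",  # reposted verbatim
]

def normalise(title):
    # Lower-case and replace punctuation and dash variants with spaces,
    # so "Proofreader – AI Tutor" and "Proofreader / AI Tutor" collapse
    # into the same key.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

counts = Counter(normalise(t) for t in titles)
print(len(counts))  # 2 distinct normalised titles from 4 postings
```

Applied across a full dataset, the ratio of postings to distinct titles (here 4 to 2; in our data, over 5,770 to 238) is a simple measure of how heavily titles are reused.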

LinkedIn was approached for comment about how Mindrift’s job posting behaviour aligns with the platform’s job posting policies, but did not respond.

When asked why they post so many jobs, Mindrift said, “Because these opportunities are available to freelancers worldwide, we post roles across regions so qualified candidates from different locations can apply.”

Mindrift isn’t the only company running mass recruitment campaigns on LinkedIn. Platforms like Scale AI’s Outlier and Surge AI’s DataAnnotation have also turned to bulk posting on LinkedIn to secure digital workers. On 2 May, Outlier had over 42,000 jobs advertised on LinkedIn, and on 24 July, DataAnnotation advertised over 5,000. 

On 2 May, Outlier had over 42,000 jobs advertised on LinkedIn.
On 24 July, DataAnnotation had over 5,000 jobs advertised on LinkedIn.

For Ephantus, this mass recruitment is part of a broader strategy: it’s easier to keep casting a wide net than to maintain a stable workforce. “[These companies] invest in recruiting, because it’s cheaper to do recruitment than it is to pay people for actual work,” he says.

Hype the intelligence, hide the humans

Mindrift’s mass recruitment efforts might have grown its internal server, but reports of empty dashboards and a lack of work suggest a deeper motivation for collecting thousands of AI tutors and trainers. Experts describe a practice called labour hedging – a tactic where companies hire large pools of workers to signal scalability.

Mindrift does this through what they call a talent pool. “Since contributors typically work only a few hours per week, we need a robust and diverse talent pool to meet project needs at scale,” adds Mindrift’s spokesperson.

“This ensures that if, for example, Company A reaches out asking for 50 cybersecurity experts, we’ve already found, vetted, and prepared the right candidates for them,” states Mindrift’s website. 

Tech giants like OpenAI, Anthropic and Google DeepMind are locked in a race to build Artificial General Intelligence (AGI) – systems that can match or surpass human cognitive abilities. Some experts, however, argue it may never exist at all.

Still, investors are pouring money into the pursuit of AGI at an unprecedented rate. In August 2024, OpenAI’s valuation exceeded $100 billion despite a projected loss of $5 billion, while Google recently announced its plan to spend $85 billion on AI and cloud infrastructure in 2025 alone.

“Look, this is about bubbles, lots of bubbles, and these people make lots of money out of this. The whole digital industry, so to speak, is basically a structure of one bubble after another,” says Edemilson Paraná, a professor of social science at LUT University. “AI is the new face of this very same structure, at least since the 2008 crisis.”

Paraná contextualises the AI bubble through the lens of speculative capitalism, where value isn’t driven by delivery, it’s projected through future potential. “Your stocks are going up because of the expectations that you’re going to monopolise the whole market,” he says. “People are betting on the fact that this will be the only game in town, because this is a ‘winner takes all’ kind of model.”

“Investors are on LinkedIn too, they see this [mass recruitment], it is a signal for them,” says Antonio Casilli, a professor at The Polytechnic Institute of Paris. “This looks more like a communications operation.”

“[These companies] know that LinkedIn is not only the place where they would actually find the talents, find the workers, but they will also be visible. It’s a matter of optics, in this case, to investors,” he adds.

For Adio Dinika, a researcher at the Distributed AI Research Institute, there is a singular motive behind what Casilli calls “a communications operation”: attract Big Tech. And in the attempt to manufacture scale, size matters. While Mindrift might have between 6,000 and 10,000 members, competitors like Scale AI’s Outlier platform boast over 100,000. 

“Some workers have tried to form their own sort of cooperatives, where they label their own data [and] try to attract clients,” says Dinika, “and they suffer, they fail to attract clients because they have a very small pool.”

“A more immediate kind of collateral, as in the case of AI, is the nature of being able to leverage in a very low cost and precarious manner, a massive amount of work. It’s a very important asset for any AI business and to the industry as a whole,” explains Paraná.

For this kind of infrastructure to remain in place, there needs to be a significant amount of hype, he says, adding that there needs to be a pervasive sentiment, in the media and in public discourse, that if companies aren’t actively rushing feverishly towards AGI, they will eventually fall behind.

“It’s an illusion game. You see Sam Altman on TV all the time saying, you know, AGI will be there in one year, AGI will be there in two years,” Paraná says. “And then you keep mobilising people, and then you keep getting money.” 

“AI companies tend to follow a predictable playbook: hype the intelligence, hide the humans,” says Casilli. “For years, that meant downplaying the amount of data work that makes these systems possible. But now the playbook has evolved. Today, the amount is still kept hidden, but the ‘quality’ of this same labour is inflated as part of AI mystique.”

“They’re inflating a discourse to exploit people,” adds Paraná. “This is the crude reality.” 

The workers being used as collateral

Early AI models were trained on relatively simple tasks, like drawing boxes around all the dogs or traffic lights in a selection of images or videos. According to Kenyan digital worker Joan Kinyua, tasks have become more complex. Joan would know – she’s worked for over 20 different micro-tasking companies since 2017.

“Do you know what a cuboid is?” she asks, drawing shapes in the air with her hands. “We started doing 3D tasking now, which was something totally new.”

While the requirements for Mindrift jobs suggest the company is looking for higher-skilled workers, this doesn’t signal a complete shift away from an industry known for low pay, long hours and poor working conditions for data labellers and annotators in the Global South, as media outlets around the world have previously reported. “You can have a PhD, but you’re doing annotation, so you get paid like everybody else,” says Joan.

For Casilli, there’s been no real shift in the data work industry: the models have not changed to such an extent that only expert ‘trainers’ are needed; rather, he says, companies have always recruited these types of people. Over more than 10 years of research, Casilli and his colleagues have encountered data workers with Master’s and Doctoral degrees, and expertise in a range of subjects, who were hired to train very low-level models or image recognition systems.

“I do not think that they are looking for these highly specialised persons because [for instance], the LLMs have changed entirely.”

The perceived shift in the industry also doesn’t account for the fact that lower-skilled labeller and annotator positions, targeting workers in the Global South, are still in high demand. For example, in June, Mindrift’s sister platform, Toloka Annotators, advertised roles for workers in South Africa and India on Workable and LinkedIn.

A Workable link from June shows a Freelance Data Annotator role posted by Toloka Annotators for several cities in India.
A screenshot from 9 June shows a Data Annotator role posted by Toloka Annotators for Cape Town, South Africa.

A location analysis of the 5,770 jobs posted by Mindrift reveals that roles were advertised across 62 countries. The United States makes up 10 percent of the postings, and South Africa is the third-highest posting location. While the data suggests the locations Mindrift targets, it does not mean that digital workers in those countries are the ones who apply.
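The share calculation behind a figure like “the United States makes up 10 percent” is straightforward. In the sketch below, only the 5,770 total and the US share reflect our data; the other per-country counts are invented for illustration.

```python
from collections import Counter

TOTAL = 5770  # postings in the dataset

# Hypothetical per-country posting counts. Only the US figure (~10%)
# reflects the reported share; the rest are illustrative.
country_counts = Counter({
    "United States": 577,
    "United Kingdom": 410,
    "South Africa": 390,
    "India": 350,
})

def share(country):
    """Percentage of all postings advertised for a given country."""
    return round(100 * country_counts[country] / TOTAL, 1)

print(share("United States"))  # 10.0
```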

“It’s mainly a majority world phenomenon,” says Casilli.

Even though many people work for micro-tasking and data work platforms in the United States, Germany or France, Casilli says the majority, for demographic and economic reasons, are in the Global South.

“The majority of the world population is in the Global South. And, the cost of living is in some cases incredibly low when compared to places like San Francisco,” he clarifies. 

Mindrift’s website states that they pay digital workers unique rates based on their location, level and field of expertise, and language skills. “Payment rates are based on cost of living indexes for each region and are constantly being reviewed and updated.” 

“It is a matter of very cynical and low-level financial calculations of ‘how low can I go’ with wages, which are not wages as such,” says Casilli. “This is the biggest waste of social capital in human history. These people would be, should be, destined for the best jobs, because they are probably the best and the brightest of their generation.”

Worker testimonies about how they are treated on micro-tasking platforms suggest that these companies don’t hold the same view. Dinika, who co-leads the Data Workers’ Inquiry (DWI) project documenting these conditions, mentions speaking to a worker who was told: “We employ the unemployable so be grateful you have a job.”

It may seem strange that highly qualified people are so eager to take on piecemeal work, but many African workers we spoke to expressed feeling grateful despite their working conditions. As the number of recruits continues to rise, competition for work intensifies.

“The more people that are in the field, the worse the pay gets. Even if you’re the most skilled, you get paid less because of the number of people,” says Joan.

Joan counts herself lucky for having made it into the system at all. “When you start as a group of 100 [trainees] and only 30 of you get the chance to work, you feel fortunate,” she says. 

Nigerian digital worker Blessing* joined Mindrift in January 2024 and lasted only six months before her account was blocked without notice. Despite this, she echoes Joan’s sentiment: “I was one of the lucky ones who was chosen.”

She spent nearly three months completing tests and assessments and waited yet another month before receiving any work. “I contacted support and said I’ve done everything you asked for, why am I not getting any tasks?”

After contacting support several times, Blessing was told that her account had been blocked because she had violated a rule. “I asked them to tell me which rule I violated, but I never heard from them again.” She had around $140 left in her account, which she has never received.

Ephantus says digital workers, highly qualified or not, are made to feel disposable: “It’s like, hey, we have a lot of people that are waiting in the pool. So if you cannot play along, or if you don’t feel like you want to work anymore for the platform, just leave.”

Workers unite as co-creators of the AI future

“From a very young age, I’ve always wanted to be a woman in tech,” says Joan, who is formally trained in business administration and management. In August 2017, she worked as an administrator at a school but found the job boring. When the opportunity to apply to work as a data labeller arose, she jumped at it.

Like most Kenyan data labellers, Joan’s starting point was Samasource, now Sama, an outsourcing firm with physical offices providing data labelling and content moderation services to Meta. 

Kenyans working for Samasource were exposed to an onslaught of explicit content for hours on end, all for the sake of making Facebook a safer place for users. Many of these workers developed mental health issues, including post-traumatic stress disorder and depression, as a result of the content they were exposed to. In late 2023, a group of Kenyan digital workers filed a lawsuit against Samasource and Meta, citing Kenyan laws against forced labour, unfair labour practices, intentional infliction of mental health harm and other grievances.

After Samasource, Joan moved to online platforms like Toloka and Cloud Factory. “I was hopeful things were going to change, that’s why I kept going.” 

But moving online was more of the same. “You’re forever in fear. You’re forever mentally prepared to get fired. You cannot express yourself, you cannot do anything against the organisation. You cannot afford not to hit the targets. You cannot afford not to follow the rules.”

More and more, workers like Joan are rejecting the narrative that they are ‘invisible’ or ‘hidden’, drawing agency from recognition. “When people speak about artificial intelligence, they’re usually speaking about the business aspect, the technological aspect. But nobody is talking about the workers,” she says.

Eight years after starting her first data labelling role, Joan is preparing for her first event as President of the Data Labelers Association (DLA). “We have the venue, we have the Zoom link, so now things are just easing slowly,” she says excitedly. “We are new to the activism field so every process is a learning process for us.”

Members of the Data Labelers Association (DLA), whose mission is to give data labelers and annotators a collective voice.

The DLA was founded in early 2025 by Joan, Ephantus and other Kenyan digital workers, with a mission to empower colleagues in the digital work industry and advocate for fair and equitable treatment of data labellers globally. “We know what our rights are,” says Joan, “and we are starting to fight for what we feel we deserve.”

Some things the DLA is fighting for may seem like a given, but to Joan and her colleagues they are still out of reach. “An ideal labour model would begin with transparency and fairness: clear contracts, pay that matches the value and difficulty of the work, opt-in mental health support, grievance mechanisms, and representation in decision-making,” says Joan.

Amidst the AGI frenzy looms the question of whether humans will be made irrelevant. But Joan and the DLA refuse to entertain this idea.

“We don’t fear obsolescence. What we fear is being discarded while the system we sustain grows wealthier,” says Joan. “The conversation should not be about whether AI will replace us, it should be about whether we will finally be recognised as co-creators of the AI future.”

*Names have been changed to protect sources

Eric Mugendi

Eric Mugendi is the Senior Editor at Africa Uncensored. He has a background in journalism, editing, and fact-checking, with a focus on technology and public finance. He previously managed PesaCheck, a fact-checking initiative by Code for Africa, where he commissioned and edited content on the veracity and accuracy of statements by public figures.
