Are freelancers and AI agents really vying for clients’ trust in the same way? In a lot of ways, AI functions as an intern. Despite a theoretical understanding of processes and workflows, artificial intelligence lacks any real expertise. Just like an intern, AI can recite rules and advanced concepts, but its actual application of them is robotic and often problematic. In short, like an intern who’s just starting out, artificial intelligence suffers from a cruel lack of reliability.
Since 2021, CEOs and freelance clients have been racing to integrate AI into their work environments. The goal is clearly stated: boost productivity, replace workers and save money. The prospect of economic gains also happens to be the main selling point of firms like OpenAI and Anthropic.
And yet, as of now, artificial intelligence is not ready. The AI firms that promise a technological revolution are the first to admit it. That does not stop them from sinking staggering fortunes into marketing to convince the rest of the world that AI can already replace workers. And investors echo that message to justify their own mind-boggling investments, as well as the colossal losses they incur.
How is it possible that a tool this unreliable already enjoys such widespread adoption? And above all, how can we explain that bosses and freelance clients consider replacing professional expertise with the approximate know-how of the perpetual intern that is AI? Are freelancers and AI agents competing against each other?
Relying on AI: a mind-boggling proposition
AI can be a remarkable tool in the professional world. The various models currently deployed have demonstrated their capabilities again and again. However, not one among them can boast perfect reliability. Not in the use cases for which they are often marketed anyway.
OpenAI, Anthropic, Perplexity and Google never fail to add this disclaimer: “AI can make mistakes”. Machine learning experts consider this an intrinsic failing of these models, a point that a recent study out of Salesforce AI Research supports.
In the professional world, the only workers whose reliability is routinely doubted are interns. They are allowed and expected to make mistakes because they come to learn. And over time, it is accepted that this fallibility will give way to expertise.
With AI, it’s not quite the same. Mistakes are allowed, tolerated and ignored. Experience is not accumulated, and expertise depends on nebulous model updates. Even the glaring lack of reliability is rebranded as “hallucinations”, the better to sell the “humanity” of the model.
If an intern ever required as much oversight after a year on the job as on their first day, they would be called a failure, at best. They would be seen as a drain on the company’s resources. And yet, when it comes to AI, employers seem far less discerning. They seem content with the idea that AI sprinkles their work, their products and their brand image with imperfections. Just like a perpetual intern would. A situation that highlights the difference between freelancers and AI agents.
AI’s blunders are well documented. A day does not pass without reports of new catastrophic artificial intelligence mistakes. Databases deleted without warning. Computers reset for no apparent reason. Critical analytics data with no grounding in reality. Or scientific sources and quotes pulled out of thin air.
If an intern made such mistakes, they would certainly be unceremoniously dismissed. Which is why no company ever gives that kind of power to an intern. Yet multinationals like Amazon, airlines and renowned firms like Deloitte jump in at the deep end… and risk their capital in doing so. This is what earned Microsoft its new moniker of Microslop.
The perpetual intern: speed over expertise?
AI is just a tool. Sure, the technology is a real leap forward for machine learning. But, despite its name, it is not an “intelligent technology”.
LLMs, no matter how sophisticated, are simply statistical models. They must be understood as predictive algorithms rooted in the statistical mean of their training data.
As such, AI is not properly equipped to handle scenarios that deviate from the ordinary. When it comes to accelerating common tasks, however, artificial intelligence can be remarkably effective. Just like an intern who has perfectly mastered a specific workflow after having it drilled into them. As long as the tasks fit that workflow, the AI agent is as efficient as a company veteran.
Thus, AI should be used to boost workers’ productivity whenever processes show little to no deviation. It cannot yet be used to replace them. It should be seen as a tool, just like the office suite or the company’s accounting software.
Above all, this means that AI’s output must always be vetted by human professionals. Repetitive tasks with clearly defined processes can be delegated to the perpetual interns that AI agents are. Humans are then free to bring their creative thinking and elevate the quality of the output when handling scenarios that deviate from the norm.
Unfortunately, at the moment, the professional world is largely operating the opposite way: AI produces the output, and human workers assist it. It’s a trend we’re also noticing in freelancing. Over the last few years, freelancers and AI agents have been pitted against one another.
For instance, clients have been turning to vibe-coding or letting AI churn through huge volumes of material to translate. And without fail, they find themselves back on freelancing platforms looking for developers and linguists to refine the work, or simply make it acceptable.
We went from a world where expertise was the norm to one where deployment speed trumps product quality. As a matter of fact, an AI label is perceived negatively by audiences. They see AI as a sign of cut corners and amateurism.
Freelancers and AI agents: professional baby-sitting
AI promoters promised to change the way we work. They did keep their promise, but certainly not in the way anyone expected.
Historically, internships served three essential goals. First, the intern was meant to serve as reinforcement. Second, internships were an opportunity to spot talent. And finally, interns could become employees, ensuring an uninterrupted transmission of knowledge so that the company’s know-how was maintained internally.
With AI, this learning curve disappears. The intern used to be an investment and an asset for the company. But as a perpetual intern, AI does not become more reliable or more discerning over time. Every output still needs to be checked and approved.
That verification process is often time-consuming and turns human experts into baby-sitters. And that is to say nothing of the skill atrophy they suffer. In the end, freelancers and AI agents process the same tasks one after the other. This is a deeply inefficient situation.
Nowadays, embedding AI into professional processes amounts to becoming, and staying, reliant on external infrastructure that does not get cheaper over time. In short, it amounts to subcontracting. The main difference is that the cost is a subscription to ChatGPT, Claude, Deepseek or any other model, rather than a freelancer’s rate.
Did artificial intelligence emancipate clients from freelancers?
From a company’s perspective, freelancers and AI agents occupy the same lane. Both are external contractors entrusted with one-off or occasional tasks for a fee.
That being said, upfront, AI costs less than a freelancer and has the added advantage of being immediately available. Artificial intelligence does not rest and does not tire. It does not require combing through dozens of freelance cover letters and applications whenever a new project requires external expertise. Even better, a single AI model can be multi-disciplinary and virtually omniscient.
In that case, what could ever bring anyone to hire freelancers instead of subscribing to AI? After more than four years, the answers are now obvious.
- For freelancers, a lack of reliability or professional mistakes have actual consequences. It’s the complete opposite of an AI that is entirely shielded by liability disclaimers, even when it destroys years of work. The only consequences fall on the AI’s user.
- A freelancer is expected to understand and process the client’s requests. AI also promises to correctly interpret those requests. But hasn’t every user already lost hours of productivity wrestling with prompts before finally capitulating and, out of frustration, settling for subpar outputs?
- A freelancer can edit their output to take new parameters into account, or take over a task started by someone else. With AI, each new modification request can change the output completely. And anyone who has tried modifying old outputs with new models has been introduced to a whole new world of frustrations.
- More often than not, the purported savings from entrusting tasks to AI are lost because experts still have to be hired to check, refine or simply make the outputs usable.
Conclusion
The AI label is now an indicator of approximate work. Despite its capacity to produce satisfying outputs, AI has also demonstrated the scope of the catastrophic blunders it can make. Let us remember that this is not AI’s fault. Artificial intelligence is only a tool and cannot be held responsible in any way that matters. The responsibility always lies with the professionals who establish AI processes without the appropriate safeguards.
After all, no one would entrust final production work to an unsupervised intern. That being said, artificial intelligence differs from other tools in that even its most experienced users do not understand all its aspects. In short, relying on AI amounts to accepting the perpetual risk of absurdities slipping into the finished product.
The illustrations on this page were provided by GetIllustrations.
