- AI does not yet fully understand its user's intent
- Achieving full "intent extraction" is the next frontier in AI development
- Human workers are still more reliable and better attuned to their employer's intent
AI lacks intent. Despite having profoundly transformed the way we work, artificial intelligence is not yet capable of replacing us. To be fair, AI's performance has grown by leaps and bounds over the last few years, to the point where new models like Mythos are delayed out of fear of their disruptive potential.
That being said, matching human performance is not, by itself, justification for replacing the entire human workforce. Because, despite its mind-boggling potential, AI lacks intent.

Artificial Intelligence lacks intent
Given their nature, AI models are subordinate to the will of their users. After all, the dialog box is the primary way to interact with Claude, ChatGPT or Gemini, a channel that emulates, and is inspired by, the way employers interact with their employees.
Thus, one gives orders to AI and it carries them out, much as one might direct an employee or entrust a freelancer with one-off assignments. The difference lies in the gaps between the command and the expected outcome, and specifically in how those gaps are bridged.

Let’s take the example of a freelance SEO writer or a freelance data analyst. Can artificial intelligence really fully replace them? Some professionals and companies seem convinced that this is the case.
And yet, AI's lack of intent makes it less reliable than human workers. For example, one can ask AI to process the data from a sales funnel in order to make adjustments.
Provided that the generative AI model makes no mistakes and doesn’t hallucinate or invent new data, its analysis will always stay within the parameters set by the prompt. On the other hand, a freelance data analyst may infer that the data is not conclusive enough to satisfy their employer’s expectations. And above all else, they can investigate and identify external variables that provide a fuller picture.
Likewise, a freelance SEO writer could understand that a specific keyword should target navigational intent instead of informational intent. But an artificial intelligence model may default to commercial intent if given no specific instructions, or simply because its user misjudged their own needs. In such cases, can AI course-correct by departing from its initial instructions?
You don’t know what you don’t know
When using AI, one needs to be mindful of the numerous gaps that arise between the needs assessment, the user's intent, the quality of the instructions, the AI's interpretation and the expected outcomes. These gaps are often enough to derail a marketing campaign, a product design or any other project.

To bridge the gaps between the quality of the instructions and the AI's interpretation, prompt engineering training has gained popularity. But even this training cannot close every gap; the largest ones concern expertise.
Expertise is covered when artificial intelligence is used in a field the user has already mastered: they can spot mistakes and diagnose issues in how their prompts are processed.
However, using AI in fields one hasn't mastered is always a risky proposition. After all, the user has no means to assess the outputs or to catch mistakes and hallucinations. Above all, it's impossible to know whether the AI's output is aligned with the expectations behind the initial prompt. It's no surprise that "intent extraction" is the new frontier of AI development.
In the end, the current trend is to have experts handle AI models. More and more, project directors are recruiting developers to get the most out of Claude Code. In the same way, demand for freelance linguistic experts capable of refining AI translations has grown on UpWork. The same goes for graphic designers, who are being asked to design visual brand identities and usable mock-ups with AI's assistance.

In a lot of ways, AI functions like a perpetual intern. Sure, it boasts encyclopedic knowledge. But as long as it lacks intent, it will never truly be considered an expert.
Conclusion
With the exception of routine tasks with perfectly structured processes, AI cannot replace human experts. That is the conclusion the professional world is reaching. Notwithstanding its performance, artificial intelligence is only a tool. And, like any tool, its effective use depends on the intent of its user.