AI agent evaluation replaces data labeling as the critical path to production deployment
As large language models (LLMs) improve, some industry observers have suggested that the need for standalone data labeling tools will shrink. HumanSignal, the commercial leader behind the open-source Label Studio project, takes the opposite view. The company recently acquired Erud AI and launched its physical Frontier Data Labs for novel data collection, but it argues that creating data is only part of the challenge: enterprises now have to validate whether the AI systems trained on that data perform as intended. Co-founder and CEO Michael Malyuk says they need robust evaluation mechanisms to confirm that AI agents can carry out complex tasks involving reasoning, tool use and multi-modal outputs.
The shift from data labeling to comprehensive agent evaluation marks a new phase for the market. Enterprises no longer just want to know whether a model classifies images or text accurately; they need to assess how AI agents make decisions across intricate, multi-step tasks. That calls for human judgment, and often specialist judgment, particularly in high-stakes sectors such as healthcare and legal services where the cost of errors is extraordinarily high. The new capabilities in HumanSignal's Label Studio Enterprise address this need with structured interfaces for human assessment, multi-reviewer consensus and domain expertise at scale.
Competition in the data labeling and evaluation market is intensifying: Labelbox is making a similar pivot toward agent evaluation, and Meta's investment in Scale AI has disrupted the market in ways HumanSignal has moved to exploit. For enterprises building production AI systems, the convergence of labeling and evaluation infrastructure carries strategic implications: prioritize high-quality labeled datasets, invest in dedicated evaluation infrastructure, and recognize that the same tools used for data labeling can serve evaluation needs. As the focus shifts from building models to validating them, the ability to systematically prove that AI systems meet the quality requirements of specific domains becomes critical to deploying AI at scale.
As LLMs have continued to improve, there has been some discussion in the industry about whether standalone data labeling tools are still needed, given that LLMs are increasingly able to work with all types of data.
HumanSignal, the lead commercial vendor behind the open-source Label Studio project, has a different view. Rather than seeing less demand for data labeling, the company is seeing more.
Earlier this month, HumanSignal acquired Erud AI and launched its physical Frontier Data Labs for novel data collection. But creating data is only half the challenge. Today, the company is tackling what comes next: proving the AI systems trained on that data actually work. The new multi-modal agent evaluation capabilities let enterprises validate complex AI agents generating applications, images, code, and video.
“If you focus on the enterprise segments, then all of the AI solutions that they’re building still need to be evaluated, which is just another word for data labeling by humans and even more so by experts,” HumanSignal co-founder and CEO Michael Malyuk told VentureBeat in an exclusive interview.
The intersection of data labeling and agentic AI evaluation
Having the right data matters, but it's not the end goal for an enterprise. Modern data labeling is headed toward evaluation.
It’s a fundamental shift in what enterprises need validated: not whether their model correctly classified an image, but whether their AI agent made good decisions across a complex, multi-step task involving reasoning, tool usage and code generation.
If evaluation is just data labeling for AI outputs, then the shift from models to agents represents a step change in what needs to be labeled. Where traditional data labeling might involve marking images or categorizing text, agent evaluation requires judging multi-step reasoning chains, tool selection decisions and multi-modal outputs — all within a single interaction.
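To make that concrete, here is a minimal sketch of what a single evaluation unit might look like once the thing being judged is an agent trace rather than a lone prediction. The field names and scoring scales below are illustrative assumptions, not HumanSignal's schema.

```python
# A minimal sketch of an "evaluation unit" for an agent, not a model.
# All field names and scales here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    tool_name: str          # e.g. "sql_query" or "web_search"
    arguments: dict         # the inputs the agent chose to pass
    result_summary: str     # what came back, as seen by the evaluator


@dataclass
class AgentTrace:
    user_request: str
    reasoning_steps: list[str]       # the chain the agent followed
    tool_calls: list[ToolCall]       # which tools it picked, in what order
    outputs: dict = field(default_factory=dict)  # e.g. {"code": "...", "image": "s3://..."}


@dataclass
class ExpertJudgment:
    trace_id: str
    reviewer: str
    reasoning_quality: int     # 1-5: was each step sound?
    tool_selection_ok: bool    # did it pick the right tool at the right time?
    output_quality: int        # 1-5: is the final artifact usable?
    notes: str = ""
```

Compared with a classic labeling record (an image and a class), the unit of work here carries the whole interaction, and the judgment covers process as well as output.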
“There is this very strong need for not just human in the loop anymore, but expert in the loop,” Malyuk said. He pointed to high-stakes applications like healthcare and legal advice as examples where the cost of errors remains prohibitively high.
The connection between data labeling and AI evaluation runs deeper than semantics. Both activities require the same fundamental capabilities:
Structured interfaces for human judgment: Whether reviewers are labeling images for training data or assessing whether an agent correctly orchestrated multiple tools, they need purpose-built interfaces to capture their assessments systematically.
Multi-reviewer consensus: High-quality training datasets require multiple labelers who reconcile disagreements. High-quality evaluation requires the same, with multiple experts assessing outputs and resolving differences in judgment (see the sketch after this list).
Domain expertise at scale: Training modern AI systems requires subject matter experts, not just crowd workers clicking buttons. Evaluating production AI outputs requires the same depth of expertise.
Feedback loops into AI systems: Labeled training data feeds model development. Evaluation data feeds continuous improvement, fine-tuning and benchmarking.
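The consensus point is the easiest to picture in code. Below is a rough sketch of a majority-vote-with-escalation step, written as plain Python rather than any vendor API; the agreement threshold and verdict labels are assumptions for illustration.

```python
# Several experts score the same agent output, agreement is measured, and
# low-agreement items are routed to adjudication. Threshold is an assumption.
from collections import Counter


def consensus(judgments: list[str], min_agreement: float = 0.75):
    """Return (label, needs_adjudication) for one evaluated item.

    `judgments` is each reviewer's verdict for the same agent trace,
    e.g. ["pass", "pass", "fail"].
    """
    counts = Counter(judgments)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(judgments)
    # Below the agreement bar, a senior reviewer reconciles the disagreement
    # rather than letting the majority label silently win.
    return label, agreement < min_agreement


# Example: three clinicians reviewing one healthcare agent response.
label, escalate = consensus(["pass", "fail", "pass"])
print(label, escalate)  # -> "pass", True (2/3 agreement is below 0.75, so escalate)
```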
Evaluating the full agent trace
The challenge with evaluating agents isn't just the volume of data; it's the complexity of what needs to be assessed. Agents don't produce simple text outputs; they generate reasoning chains, make tool selections, and produce artifacts across multiple modalities.
The new capabilities in Label Studio Enterprise address agent validation requirements:
Multi-modal trace inspection: The platform provides unified interfaces for reviewing complete agent execution traces, including reasoning steps, tool calls, and outputs across modalities. This addresses a common pain point where teams must parse separate log streams.
Interactive multi-turn evaluation: Evaluators assess conversational flows where agents maintain state across multiple turns, validating context tracking and intent interpretation throughout the interaction sequence.
Agent Arena: Comparative evaluation framework for testing different agent configurations (base models, prompt templates, guardrail implementations) under identical conditions.
Flexible evaluation rubrics: Teams define domain-specific evaluation criteria programmatically rather than using pre-defined metrics, supporting requirements like comprehension accuracy, response appropriateness or output quality for specific use cases (see the sketch below).
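The last two items lend themselves to a short sketch: a hand-rolled programmatic rubric and an arena-style comparison of two agent configurations over identical test traces. This is plain Python rather than the Label Studio Enterprise API, and every criterion name, weight and score below is an assumption made up for illustration.

```python
# A programmatic rubric plus an arena-style comparison. Criterion names,
# weights, and the two "configurations" are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Criterion:
    name: str
    weight: float
    score: Callable[[dict], float]   # maps an agent trace to a 0-1 score


# A domain-specific rubric: what "good" means for, say, a support agent.
rubric = [
    Criterion("comprehension", 0.4, lambda t: t["intent_match"]),
    Criterion("appropriateness", 0.4, lambda t: 1.0 if t["policy_ok"] else 0.0),
    Criterion("output_quality", 0.2, lambda t: t["expert_score"] / 5),
]


def grade(trace: dict) -> float:
    """Weighted rubric score for one agent trace."""
    return sum(c.weight * c.score(trace) for c in rubric)


# Arena-style comparison: two agent configurations, identical test conditions.
traces_config_a = [{"intent_match": 0.9, "policy_ok": True, "expert_score": 4}]
traces_config_b = [{"intent_match": 0.7, "policy_ok": False, "expert_score": 5}]

score_a = sum(map(grade, traces_config_a)) / len(traces_config_a)
score_b = sum(map(grade, traces_config_b)) / len(traces_config_b)
print(f"config A: {score_a:.2f}  config B: {score_b:.2f}")
```

The point of defining the rubric in code rather than picking from fixed metrics is that the same comparison harness can be reused while the definition of quality changes per domain.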
Agent evaluation is the new battleground for data labeling vendors
HumanSignal isn’t alone in recognizing that agent evaluation represents the next phase of the data labeling market. Competitors are making similar pivots as the industry responds to both technological shifts and market disruption.
Labelbox launched its Evaluation Studio in August 2025, focused on rubric-based evaluations. Like HumanSignal, the company is expanding beyond traditional data labeling into production AI validation.
The overall competitive landscape for data labeling shifted dramatically in June when Meta invested $14.3 billion for a 49% stake in Scale AI, the market’s previous leader. The deal triggered an exodus of some of Scale’s largest customers. HumanSignal capitalized on the disruption, with Malyuk claiming that his company was able to win multiple competitive deals last quarter. Malyuk cites platform maturity, configuration flexibility, and customer support as differentiators, though competitors make similar claims.
What this means for AI builders
For enterprises building production AI systems, the convergence of data labeling and evaluation infrastructure has several strategic implications:
Start with ground truth.
Investment in creating high-quality labeled datasets with multiple expert reviewers who resolve disagreements pays dividends throughout the AI development lifecycle — from initial training through continuous production improvement.
Observability proves necessary but insufficient.
While monitoring what AI systems do remains important, observability tools measure activity, not quality. Enterprises require dedicated evaluation infrastructure to assess outputs and drive improvement. These are distinct problems requiring different capabilities.
Training data infrastructure doubles as evaluation infrastructure.
Organizations that have invested in data labeling platforms for model development can extend that same infrastructure to production evaluation. These aren’t separate problems requiring separate tools — they’re the same fundamental workflow applied at different lifecycle stages.
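As a minimal illustration of that reuse, the sketch below attaches one review schema first to a training example and then to a production agent trace. The schema fields and helper function are hypothetical, not any specific platform's API.

```python
# One review schema, two lifecycle stages. Schema fields are hypothetical.
REVIEW_SCHEMA = {
    "verdict": ["accept", "reject"],
    "quality": "1-5",
    "notes": "free text",
}


def queue_for_review(item: dict, stage: str) -> dict:
    """Wrap a training example or a production trace in the same review task."""
    return {"stage": stage, "payload": item, "schema": REVIEW_SCHEMA}


# Pre-deployment: label a training example.
task_1 = queue_for_review({"text": "claim denial letter", "label": None}, stage="training")
# Post-deployment: evaluate a live agent trace through the same interface.
task_2 = queue_for_review({"trace_id": "abc123", "outputs": {"code": "..."}}, stage="evaluation")
```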
For enterprises deploying AI at scale, the bottleneck has shifted from building models to validating them. Organizations that recognize this shift early gain advantages in shipping production AI systems.
The critical question for enterprises has evolved: not whether AI systems are sophisticated enough, but whether organizations can systematically prove they meet the quality requirements of specific high-stakes domains.