
AI agent evaluation replaces data labeling as the critical path to production deployment

By Eric | November 25, 2025

As artificial intelligence continues to evolve, particularly with the advancement of large language models (LLMs), the role of data labeling tools is coming under scrutiny. However, HumanSignal, a leading commercial vendor of the open-source Label Studio program, argues that the demand for data labeling is actually increasing. This perspective comes in light of their recent acquisition of Erud AI and the launch of their physical Frontier Data Labs, which are focused on novel data collection methods. HumanSignal’s co-founder and CEO, Michael Malyuk, emphasizes that while creating data is crucial, the next challenge lies in validating the performance of AI systems trained on that data. The company is now introducing multi-modal agent evaluation capabilities that allow enterprises to assess complex AI agents responsible for generating applications, images, code, and video.

The shift from traditional data labeling to agent evaluation represents a significant evolution in how enterprises approach AI development. Instead of merely assessing whether a model can correctly classify an image, the focus has turned to evaluating whether AI agents can make sound decisions across complex tasks that require reasoning and tool usage. This new paradigm necessitates expert involvement, particularly in high-stakes fields like healthcare and legal advice, where the consequences of errors can be severe. Malyuk notes that the evaluation process shares many similarities with data labeling, requiring structured interfaces, consensus among multiple reviewers, and domain expertise to ensure high-quality assessments. HumanSignal’s latest features in Label Studio Enterprise address these needs, providing unified interfaces for reviewing agent execution traces, interactive evaluation of conversational flows, and customizable evaluation rubrics tailored to specific use cases.

The competitive landscape for data labeling is rapidly changing, with other companies like Labelbox also pivoting towards agent evaluation. This shift is driven by technological advancements and market disruptions, such as Meta’s substantial investment in Scale AI, which has altered the dynamics within the industry. For enterprises looking to build robust AI systems, this convergence of data labeling and evaluation infrastructure presents strategic implications. Investing in high-quality labeled datasets with expert reviewers not only enhances the initial training of AI models but also facilitates continuous improvement in production. As organizations recognize that the bottleneck has shifted from model development to validation, the ability to systematically prove the quality of AI outputs in high-stakes domains becomes paramount. This evolution in focus underscores a critical question for enterprises: can they demonstrate that their AI systems meet the rigorous quality standards required for their specific applications?

As LLMs have continued to improve, there has been some discussion in the industry about the continued need for standalone data labeling tools, as LLMs are increasingly able to work with all types of data.

HumanSignal, the leading commercial vendor behind the open-source Label Studio program, has a different view. Rather than seeing less demand for data labeling, the company is seeing more.
Earlier this month, HumanSignal acquired Erud AI and launched its physical Frontier Data Labs for novel data collection. But creating data is only half the challenge. Today, the company is tackling what comes next: proving the AI systems trained on that data actually work. The new multi-modal agent evaluation capabilities let enterprises validate complex AI agents generating applications, images, code, and video.
“If you focus on the enterprise segments, then all of the AI solutions that they’re building still need to be evaluated, which is just another word for data labeling by humans and even more so by experts,” HumanSignal co-founder and CEO Michael Malyuk told VentureBeat in an exclusive interview.
The intersection of data labeling and agentic AI evaluation
Having the right data is great, but that’s not the end goal for an enterprise. Where modern data labeling is headed is evaluation.
It’s a fundamental shift in what enterprises need validated: not whether their model correctly classified an image, but whether their AI agent made good decisions across a complex, multi-step task involving reasoning, tool usage and code generation.
If evaluation is just data labeling for AI outputs, then the shift from models to agents represents a step change in what needs to be labeled. Where traditional data labeling might involve marking images or categorizing text, agent evaluation requires judging multi-step reasoning chains, tool selection decisions and multi-modal outputs — all within a single interaction.
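To make that shift concrete, here is a small, hypothetical sketch in Python of what a reviewer is handed in each case. The field names and example values are assumptions chosen for illustration, not HumanSignal's or Label Studio's schema.

```python
# Illustrative only: hypothetical record shapes, not any vendor's schema.

# Traditional data labeling: one input, one human judgment.
image_label_task = {
    "input": {"image_url": "https://example.com/scan_001.png"},
    "label": "pneumonia",            # the single field a reviewer fills in
    "labeler_id": "radiologist_17",
}

# Agent evaluation: the reviewer judges every step of a multi-step run.
agent_eval_task = {
    "input": {"user_request": "Summarize this contract and flag risky clauses."},
    "trace": [
        {"step": 1, "type": "reasoning", "content": "Need the full document first."},
        {"step": 2, "type": "tool_call", "tool": "document_loader", "args": {"doc_id": "c-482"}},
        {"step": 3, "type": "tool_call", "tool": "clause_classifier", "args": {"sections": "all"}},
        {"step": 4, "type": "output", "modality": "text", "content": "Summary with 3 flagged clauses..."},
    ],
    # One judgment per step plus an overall verdict: far more surface area than one label.
    "judgments": {"step_2": "correct_tool", "step_3": "correct_tool", "overall": "acceptable"},
    "reviewer_id": "contracts_attorney_04",
}
```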
“There is this very strong need for not just human in the loop anymore, but expert in the loop,” Malyuk said. He pointed to high-stakes applications like healthcare and legal advice as examples where the cost of errors remains prohibitively high.
The connection between data labeling and AI evaluation runs deeper than semantics. Both activities require the same fundamental capabilities:
Structured interfaces for human judgment: Whether reviewers are labeling images for training data or assessing whether an agent correctly orchestrated multiple tools, they need purpose-built interfaces to capture their assessments systematically.
Multi-reviewer consensus: High-quality training datasets require multiple labelers who reconcile disagreements. High-quality evaluation requires the same: multiple experts assessing outputs and resolving differences in judgment (a minimal reconciliation sketch follows this list).
Domain expertise at scale: Training modern AI systems requires subject matter experts, not just crowd workers clicking buttons. Evaluating production AI outputs requires the same depth of expertise.
Feedback loops into AI systems: Labeled training data feeds model development. Evaluation data feeds continuous improvement, fine-tuning and benchmarking.
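As a rough illustration of the consensus point above, the sketch below (plain Python, not tied to HumanSignal's implementation or any vendor API) accepts a majority verdict when expert agreement is high and routes disagreements to adjudication. The verdict labels and agreement threshold are assumptions.

```python
from collections import Counter

# Illustrative consensus logic; not HumanSignal's implementation.
def reconcile(verdicts: list[str], min_agreement: float = 0.75) -> dict:
    """Majority-vote over expert verdicts; flag low-agreement items for adjudication."""
    if not verdicts:
        return {"status": "needs_review", "reason": "no verdicts"}
    counts = Counter(verdicts)
    top_label, top_count = counts.most_common(1)[0]
    agreement = top_count / len(verdicts)
    if agreement >= min_agreement:
        return {"status": "resolved", "label": top_label, "agreement": agreement}
    # Experts disagree: escalate to a senior reviewer instead of silently picking a winner.
    return {"status": "adjudicate", "candidates": dict(counts), "agreement": agreement}

# Three experts assess the same agent response.
print(reconcile(["acceptable", "acceptable", "unsafe"]))      # agreement 0.67 -> adjudicate
print(reconcile(["acceptable", "acceptable", "acceptable"]))  # agreement 1.0  -> resolved
```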
Evaluating the full agent trace
The challenge with evaluating agents isn't just the volume of data; it's the complexity of what needs to be assessed. Agents don't produce simple text outputs; they generate reasoning chains, make tool selections, and produce artifacts across multiple modalities.
The new capabilities in Label Studio Enterprise address agent validation requirements: 
Multi-modal trace inspection: The platform provides unified interfaces for reviewing complete agent execution traces (reasoning steps, tool calls, and outputs across modalities). This addresses a common pain point where teams must parse separate log streams.
Interactive multi-turn evaluation: Evaluators assess conversational flows where agents maintain state across multiple turns, validating context tracking and intent interpretation throughout the interaction sequence.
Agent Arena: A comparative evaluation framework for testing different agent configurations (base models, prompt templates, guardrail implementations) under identical conditions.
Flexible evaluation rubrics: Teams define domain-specific evaluation criteria programmatically rather than using pre-defined metrics, supporting requirements like comprehension accuracy, response appropriateness or output quality for specific use cases. (A sketch of what a programmatic rubric can look like follows this list.)
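To show what defining evaluation criteria programmatically can look like, here is a minimal sketch in plain Python. It is not the Label Studio Enterprise API; the criteria names, weights, and trace fields are assumptions chosen for the example.

```python
# Illustrative rubric sketch; not the Label Studio Enterprise API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str
    weight: float
    check: Callable[[dict], float]   # returns a score in [0, 1] for one agent trace

# A hypothetical agent trace: tool calls plus a final answer.
trace = {
    "tool_calls": [{"tool": "search_case_law", "succeeded": True}],
    "final_answer": "Cites two precedents and recommends escalation to counsel.",
}

rubric = [
    Criterion("used_required_tool", 0.4,
              lambda t: 1.0 if any(c["tool"] == "search_case_law" for c in t["tool_calls"]) else 0.0),
    Criterion("tools_succeeded", 0.2,
              lambda t: sum(c["succeeded"] for c in t["tool_calls"]) / max(len(t["tool_calls"]), 1)),
    Criterion("answer_is_grounded", 0.4,
              lambda t: 1.0 if "precedent" in t["final_answer"].lower() else 0.0),
]

score = sum(c.weight * c.check(trace) for c in rubric)
print(f"weighted rubric score: {score:.2f}")   # 0.4 + 0.2 + 0.4 = 1.00 for this trace
```

In practice, automated checks like these would complement the expert judgments described earlier rather than replace them.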
Agent evaluation is the new battleground for data labeling vendors
HumanSignal isn’t alone in recognizing that agent evaluation represents the next phase of the data labeling market. Competitors are making similar pivots as the industry responds to both technological shifts and market disruption.
Labelbox launched its Evaluation Studio in August 2025, focused on rubric-based evaluations. Like HumanSignal, the company is expanding beyond traditional data labeling into production AI validation.
The overall competitive landscape for data labeling shifted dramatically in June when Meta invested $14.3 billion for a 49% stake in Scale AI, the market’s previous leader. The deal triggered an exodus of some of Scale’s largest customers. HumanSignal capitalized on the disruption, with Malyuk claiming that his company was able to win multiple competitive deals last quarter. Malyuk cites platform maturity, configuration flexibility, and customer support as differentiators, though competitors make similar claims.
What this means for AI builders
For enterprises building production AI systems, the convergence of data labeling and evaluation infrastructure has several strategic implications:
Start with ground truth.
Investment in creating high-quality labeled datasets with multiple expert reviewers who resolve disagreements pays dividends throughout the AI development lifecycle — from initial training through continuous production improvement.
Observability proves necessary but insufficient.
While monitoring what AI systems do remains important, observability tools measure activity, not quality. Enterprises require dedicated evaluation infrastructure to assess outputs and drive improvement. These are distinct problems requiring different capabilities.
Training data infrastructure doubles as evaluation infrastructure.
Organizations that have invested in data labeling platforms for model development can extend that same infrastructure to production evaluation. These aren’t separate problems requiring separate tools — they’re the same fundamental workflow applied at different lifecycle stages.
For enterprises deploying AI at scale, the bottleneck has shifted from building models to validating them. Organizations that recognize this shift early gain advantages in shipping production AI systems.
The critical question for enterprises has evolved: not whether AI systems are sophisticated enough, but whether organizations can systematically prove they meet the quality requirements of specific high-stakes domains.
