Labeling Agent (beta)

In some projects, ML models are just as important as human labelers. Labeling Agents allow you to assign ML models as labelers in your project and evaluate their performance alongside human labelers. This helps you better understand which labeling approach works best for your needs — whether human, machine, or both.

Why Use Labeling Agents?

Labeling Agents simplify the process of testing and comparing ML models inside Datasaur:

  • You no longer need to create separate accounts or log in as the model to run predictions.

  • Model outputs are now part of the same analytics and comparison tools used for human labelers.

  • It’s easier to measure performance and decide what labeling strategy to use.

Requirements

  • Models must be in the same team workspace as the Data Studio project.

  • Models must be applications deployed from LLM Labs with a “Deployed” status.

  • This feature is currently only supported for Span Labeling projects.

    • This is a current limitation that will be improved in the future.

Using Labeling Agents

1. Assign Models as Labelers

You can assign models during the project creation process:

  1. Go to the Projects page > Create New Project.

  2. Upload files and select Span Labeling.

  3. In the Assignment step, open the Labeling agents tab.

  4. Select one or more deployed models to assign them as labelers.

  5. Complete the project setup.

You can assign both human members and models. Each model counts toward your assignment limit.

2. Launch the Project and Trigger Labeling

When you click Launch Project, models will automatically begin applying labels.

Current limitations:

  • Only the first label set is used.

  • Each span will have only one label (see the sketch after this list).

  • Labeling agents cannot yet draw arrows.
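
Labeling Agents are not something you configure in code, so the following is purely illustrative: a minimal sketch, assuming a hypothetical in-memory representation, of what an agent-produced annotation amounts to under these limitations. It is not Datasaur's actual data or export format.

```python
# Purely illustrative: a hypothetical representation of what a Labeling Agent
# currently produces under the limitations above; NOT Datasaur's data format.
from dataclasses import dataclass

@dataclass
class AgentSpanLabel:
    start_char: int   # span start offset in the document
    end_char: int     # span end offset in the document
    label: str        # exactly one label, taken from the first label set

# e.g. the agent marks characters 10-18 as "PERSON"; additional label sets
# and arrows between spans still have to be added by human labelers.
example = AgentSpanLabel(start_char=10, end_char=18, label="PERSON")
print(example)
```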

3. Review Labels Applied by the Labeling Agent

Once all documents are fully labeled, whether through Labeling Agent assistance or manual input, the project can undergo a final review. In this stage, a reviewer typically uses Reviewer Mode to check the consistency and accuracy of all annotations before submission or export.

4. View and Compare Performance

You can track the performance of both human labelers and models from the Analytics page.

From here, you can compare inter-annotator agreement (IAA) scores and other metrics across all labelers, human and model.
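
Datasaur calculates IAA for you on the Analytics page, so there is nothing to implement. If it helps to see what an agreement score captures, here is a minimal sketch that computes Cohen's kappa between a human labeler and a Labeling Agent over the same spans; the labels and the scikit-learn call are illustrative assumptions, not how Datasaur computes its metrics.

```python
# Illustrative only: Datasaur reports IAA on the Analytics page for you.
# This sketch shows what an agreement score captures, using Cohen's kappa
# over made-up labels assigned to the same five spans.
from sklearn.metrics import cohen_kappa_score

human_labels = ["PERSON", "ORG", "ORG", "LOCATION", "PERSON"]
agent_labels = ["PERSON", "ORG", "PERSON", "LOCATION", "PERSON"]

kappa = cohen_kappa_score(human_labels, agent_labels)
print(f"Agreement (Cohen's kappa): {kappa:.2f}")  # 1.0 means perfect agreement
```

A kappa near 1 means the model and the human mostly agree; a low or negative kappa is a signal to revisit the model, the labeling guidelines, or both.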

Best Practices

  • Use Labeling Agents as a time-saving aid, but always include a human review step.

  • Train your model with high-quality data to improve suggestion accuracy.

  • Communicate clearly with labelers about how to handle model predictions.

  • Automate part of the review with consensus by using multiple models: for example, set the consensus requirement to 3, assign three Labeling Agents, and focus your review only on spans that are not auto-accepted through consensus (see the sketch after this list).
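
Consensus is configured in Datasaur and resolved during review, so you do not write this logic yourself. The hypothetical sketch below only illustrates the idea: majority-vote the outputs of three Labeling Agents and send spans without consensus to human review. Agent names, span IDs, and labels are all made up for the example.

```python
# Hypothetical sketch of the "consensus of 3" idea: auto-accept spans where at
# least two of three Labeling Agents agree, and route the rest to human review.
from collections import Counter

# Each agent's output: span id -> predicted label (all values are made up).
agent_predictions = {
    "agent_a": {"span_1": "PERSON", "span_2": "ORG", "span_3": "LOCATION"},
    "agent_b": {"span_1": "PERSON", "span_2": "ORG", "span_3": "PERSON"},
    "agent_c": {"span_1": "PERSON", "span_2": "DATE", "span_3": "DATE"},
}

auto_accepted, needs_review = {}, []
for span_id in agent_predictions["agent_a"]:
    votes = Counter(preds[span_id] for preds in agent_predictions.values())
    label, count = votes.most_common(1)[0]
    if count >= 2:                       # at least 2 of 3 agents agree
        auto_accepted[span_id] = label
    else:
        needs_review.append(span_id)     # no consensus: a human reviews this span

print(auto_accepted)   # {'span_1': 'PERSON', 'span_2': 'ORG'}
print(needs_review)    # ['span_3']
```

With three agents and a 2-of-3 threshold, unambiguous spans are accepted automatically while the genuinely ambiguous ones still reach a human reviewer.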

FAQs

  • Can I assign multiple models to the same project?

    • Yes. You can assign up to 10 Labeling Agents.

  • Can I use Labeling Agents in Line Labeling?

    • Not yet. They can be assigned to a Span + Line project, but will only apply labels for Span Labeling.

  • How are Labeling Agent labels shown in the UI?

    • They are treated like human labelers, but their identities are masked. You’ll see their labels in Reviewer Mode and on the Analytics page.
