# Labeling Agent

In some projects, ML models are just as important as human labelers. Labeling Agents allow you to assign ML models as labelers in your project and evaluate their performance alongside human labelers. This helps you understand which labeling approach works best for your needs, whether human, machine, or both.

## Why Use Labeling Agents?

Labeling Agents simplify the process of testing and comparing ML models inside Datasaur:

* You no longer need to create separate accounts or log in as the model to run predictions.
* Model outputs are now part of the same analytics and comparison tools used for human labelers.
* It’s easier to measure performance and decide what labeling strategy to use.

## Requirements

* Models must be in the same team workspace as the Data Studio project.
* ML models must be deployed applications from [**LLM Labs**](https://docs.datasaur.ai/llm-projects/deployment) with “Deployed” status.
* This feature currently supports **Span Labeling** and **Row Labeling** projects.
  * Support for additional project types is planned.

## How to Create a Labeling Agent

**Supported Labeling Types**: `Span labeling`, `Row labeling`

In LLM Labs, create a new sandbox and set up the model to act as a labeling agent. To help the model understand what to label, you’ll need to provide clear system and user instructions. You can see [this page](https://docs.datasaur.ai/llm-projects/sandbox) to learn more.

The output of the LLM Labs Sandbox must be a JSON object whose keys align with the label set defined in your Data Studio project. This ensures compatibility with the regex-based string matching used to apply labels.

#### 1. Prepare the label set

Make sure the label set you use matches the JSON keys you configure in step 2. Below is a simple example of a label set that can later be used in Data Studio:

```json
{
  "name": "Labeling agent Label set",
  "options": [
    { "id": "NhsjWIgaAQH3g6dsvtW6a", "color": "#f93b90", "parentId": null, "label": "PERSON" },
    { "id": "X1bKK7Nxf9SGaBfDpzH7g", "color": "#d4e455", "parentId": null, "label": "DATE" },
    { "id": "NP2RJr7tD5aMfVBnG6TOm", "color": "#85c98e", "parentId": null, "label": "ORG" }
  ]
}
```
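Since the labels in this set must match the JSON keys your sandbox instructions ask the model to return, a quick sanity check can catch mismatches early. The following is an illustrative sketch, not a Datasaur API:

```python
# Label set as configured in Data Studio (taken from the example above).
label_set = {
    "name": "Labeling agent Label set",
    "options": [
        {"id": "NhsjWIgaAQH3g6dsvtW6a", "color": "#f93b90", "parentId": None, "label": "PERSON"},
        {"id": "X1bKK7Nxf9SGaBfDpzH7g", "color": "#d4e455", "parentId": None, "label": "DATE"},
        {"id": "NP2RJr7tD5aMfVBnG6TOm", "color": "#85c98e", "parentId": None, "label": "ORG"},
    ],
}

# JSON keys you plan to request in the sandbox user instruction (step 2).
instruction_keys = {"PERSON", "DATE", "ORG"}

label_names = {opt["label"] for opt in label_set["options"]}
missing = instruction_keys - label_names
assert not missing, f"Instruction keys with no matching label: {missing}"
```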

#### 2. Define your instructions

Define clear system and user instructions so the model knows exactly what to label. Below is an example setup:

**System Instruction**

```
You are an expert data labeler
```

**User Instruction**

````
Given the document text, please extract the following information and present it in JSON format as shown below:

PERSON: People, including fictional.  
DATE: Absolute or relative dates or periods.
ORG: Companies, agencies, institutions, etc.

Instructions Summary:  
1. Extract and present the information in the specified JSON format.  
2. Ensure that all extracted data is accurate and corresponds directly to the content of each document.

Return the value of extracted fields in JSON structure in plain text, following this JSON FORMAT  
{
    "PERSON": ["People, including fictional."],
    "DATE": ["Absolute or relative dates or periods."],
    "ORG": ["Companies, agencies, institutions, etc."],
}

VERY IMPORTANT  
RETURN THE ANSWER WITHOUT ```json  
ANSWER PRECISELY GIVEN FROM THE SENTENCE PROMPT AND DON'T MASK THE ANSWER, ANSWER BASED ON THE GIVEN SENTENCE
````
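Even with the instruction to omit code fences, models occasionally wrap their answer in ` ```json ` anyway. A small defensive parser, sketched in Python purely as an illustration (this is not how Datasaur processes the output internally):

```python
import json

def parse_agent_output(raw: str) -> dict:
    """Strip optional Markdown code fences, then parse the JSON object."""
    text = raw.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        # Drop the closing fence if present, then the opening fence line.
        if lines[-1].strip() == "```":
            lines = lines[:-1]
        text = "\n".join(lines[1:])
    return json.loads(text)

# Handles both clean output and output wrapped in a code fence.
clean = '{"PERSON": ["Ivan Lee"], "DATE": ["2010"], "ORG": ["Yahoo"]}'
fenced = "```json\n" + clean + "\n```"
assert parse_agent_output(clean) == parse_agent_output(fenced)
```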

#### 3. Test with a prompt example

To check if your instructions work as expected, you can test them using an example sentence. Here's how you might write a prompt:

```
Label set:
- PERSON
- DATE
- ORG

Sentence:
Ivan Lee is the CEO and Founder of Datasaur.ai. He graduated with a Computer Science B.S. from Stanford University. He was chosen for the selective Mayfield Fellows entrepreneurship program in 2010. Ivan went on to found Loki Studios, an iOS game studio. After raising institutional funding from DCM's A-Fund and launching a profitable game, Loki was acquired by Yahoo.
```

After you click the **Run** button, the expected output will be:

```
{
  "PERSON": ["Ivan Lee"],
  "DATE": ["2010"],
  "ORG": ["Datasaur.ai", "Stanford University", "Mayfield Fellows", "Loki Studios", "DCM's A-Fund", "Yahoo"]
}
```
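Conceptually, this JSON can be turned back into spans by locating each extracted string in the source text. The sketch below illustrates the regex-based matching idea from earlier; it is not Datasaur's internal implementation:

```python
import re

sentence = ("Ivan Lee is the CEO and Founder of Datasaur.ai. He graduated with a "
            "Computer Science B.S. from Stanford University.")

agent_output = {
    "PERSON": ["Ivan Lee"],
    "ORG": ["Datasaur.ai", "Stanford University"],
}

spans = []
for label, values in agent_output.items():
    for value in values:
        # re.escape handles values containing regex metacharacters, e.g. "Datasaur.ai".
        for match in re.finditer(re.escape(value), sentence):
            spans.append((match.start(), match.end(), label))

# Each span is (start offset, end offset, label), ready to render as an annotation.
```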

#### 4. Deploy the model

You need to deploy the model before it becomes available in Data Studio as a Labeling Agent.

<figure><img src="https://448889121-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MbjY0HseEqu7LtYAt4d%2Fuploads%2Fgit-blob-7ea4771f949e2ee06197d21426ff85bbc84f070e%2FSandbox%20-%20labeling%20agent%20-%20highlight%20deploy.png?alt=media" alt=""><figcaption></figcaption></figure>

## Using Labeling Agents

Once you’ve set up the model, you can assign it as a labeler in Data Studio.

#### 1. Assign models as labelers

You can assign models during the project creation process:

1. Go to **Projects page** > **Create New Project**.
2. Upload files and select **Span Labeling** or **Row Labeling**.
3. In the **Assignment** step, open the **Labeling agents** tab.
4. In the **Labeling agents** tab, select the deployed LLM Labs Sandbox.

   <figure><img src="https://448889121-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MbjY0HseEqu7LtYAt4d%2Fuploads%2Fgit-blob-379f64e489f232da60b7c63d1c1b05a8384d5b00%2FStep%204%20-%20labeling%20agents%20(default%20for%20row%20labeling%20projects)%20(1).jpg?alt=media" alt="Selecting deployed LLM Labs Sandbox as labeling agent"><figcaption><p>Selecting deployed LLM Labs Sandbox as labeling agent</p></figcaption></figure>
5. For Row labeling projects, set the agent task by clicking the **Set a default agent task** button.

   <figure><img src="https://448889121-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MbjY0HseEqu7LtYAt4d%2Fuploads%2Fgit-blob-f25ee55bbda546e30ecb82b62ba3ec92bdabf3bc%2FStep%204%20-%20labeling%20agents%20(Setup%20agent%20task)%20(1).jpg?alt=media" alt=""><figcaption><p>Configuring labeling agents tasks</p></figcaption></figure>


   1. **Target question**: the column(s) you wish to answer.
   2. **Input columns**: the column(s) your model should use as input.

   **Supported Question Types**

   The Row-Based Labeling Agent supports the following question types:

   | Question Type             | Description                                            |
   | ------------------------- | ------------------------------------------------------ |
   | **Radio**                 | Single-select from a predefined list of options        |
   | **Dropdown**              | Single-select via a dropdown menu                      |
   | **Hierarchical Dropdown** | Nested dropdown with parent-child option relationships |
   | **Text**                  | Free-form text input                                   |
   | **Date**                  | Date picker input                                      |
   | **Time**                  | Time picker input                                      |
   | **Checkbox**              | Multi-select from a list of options                    |
   | **Slider**                | Numeric value selection via a slider control           |
   | **URL**                   | Text input validated as a URL                          |
6. Complete the project setup.

{% hint style="info" %}
You can assign both human members and models. Each model counts toward your assignment limit.
{% endhint %}

#### 2. **Launch the project and trigger labeling**

When you click **Launch Project**, models will automatically begin applying labels.

**Current limitations:**

* Only the first label set is used.
* Each span will only have one label.
* Labeling agents cannot yet draw arrows.

#### **3. Review labels applied by the labeling agent**

Once all documents are fully labeled, whether by labeling agents or through manual input, the project can undergo a final review. This stage typically involves a reviewer ensuring the consistency and accuracy of all annotations in Reviewer Mode before submission or export.

#### 4. View and compare performance

You can track the performance of both human labelers and models from the [Analytics page](https://docs.datasaur.ai/workspace-management/analytics).

From here, you’ll be able to compare IAA scores and other metrics across all labelers, both human and model.
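As a rough illustration of what an inter-annotator agreement (IAA) score measures, here is a simple Cohen's kappa computation between two labelers over the same items. Datasaur's exact metric and implementation may differ; this only conveys the idea of chance-corrected agreement:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two labelers on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Made-up example labels from a human labeler and a labeling agent.
human = ["PERSON", "ORG", "ORG", "DATE", "PERSON", "ORG"]
agent = ["PERSON", "ORG", "DATE", "DATE", "PERSON", "ORG"]
print(round(cohens_kappa(human, agent), 3))  # -> 0.75
```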

## Best Practices

* Use labeling agents as a timesaving aid, but always include a human review step.
* Train your model with high-quality data to improve suggestion accuracy.
* Communicate clearly with labelers about how to handle model predictions.
* Automate part of the work with consensus: deploy multiple Labeling Agents (for example, three), automatically accept labels that reach consensus, and focus human review only on those that do not.
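The consensus idea above can be sketched as a simple majority vote over the labels each agent assigns to the same span. The agent outputs here are made-up examples, and the voting logic is an illustration rather than a built-in Datasaur feature:

```python
from collections import Counter

def consensus(votes, threshold=2):
    """Return the majority label if at least `threshold` agents agree, else None."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= threshold else None

# Labels that three hypothetical agents assigned to the same spans.
agent_votes = {
    "Ivan Lee": ["PERSON", "PERSON", "PERSON"],
    "Mayfield Fellows": ["ORG", "ORG", "DATE"],
    "2010": ["DATE", "ORG", "PERSON"],
}

for span, votes in agent_votes.items():
    decision = consensus(votes)
    # Spans where decision is None ("2010" here) are routed to human review.
    print(span, "->", decision)
```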

## FAQs

* **Can I assign multiple models to the same project?**
  * Yes. You can assign up to 10 Labeling Agents.
* **Can I use Labeling Agents in Line Labeling?**
  * Not yet. They can be assigned to a Span + Line project but will only apply labels for Span Labeling.
* **How are Labeling Agent labels shown in the UI?**
  * They are treated like human labelers, but their identities are masked. You’ll see their labels in Reviewer mode and in analytics.
