Label Sets / Question Sets

Last updated 2 days ago

Span-based Labeling

For span-based labeling, a label set is a single-column .csv following the structure below:

Column 1
Label 1
Label 2
Label 3
etc...

We provide twelve colors that you can configure manually from the extension. You can also create a label set that specifies your desired label colors; a sample file is provided below. Note: any HTML color code is supported (as seen below).

  • Note: label,color is the header. This will always be the first row in the .csv.

label,color
Annabeth Chase,#df3920
Harry Potter,#ff8000
Hermione Granger,#4db34d
John Watson,#3399cc
Percy Jackson,#cc3399
Sherlock Holmes,#9933cc

Note: colored label sets only work for the .csv format.
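As a quick sketch of the format above, the following TypeScript builds a colored label set .csv programmatically. The `buildLabelSetCsv` helper and its hex-only check are illustrative assumptions, not part of Datasaur (which accepts any HTML color code).

```typescript
// Build a colored label set CSV in the `label,color` format described above.
// Hypothetical helper: validates 6-digit hex codes only for simplicity.
interface LabelColor {
  name: string;
  color: string; // e.g. "#df3920"
}

function buildLabelSetCsv(labels: LabelColor[]): string {
  const hex = /^#[0-9a-fA-F]{6}$/;
  const rows = labels.map(({ name, color }) => {
    if (!hex.test(color)) throw new Error(`Invalid hex color: ${color}`);
    return `${name},${color}`;
  });
  // `label,color` is always the first row of the .csv
  return ['label,color', ...rows].join('\n');
}

const labelSetCsv = buildLabelSetCsv([
  { name: 'Percy Jackson', color: '#cc3399' },
  { name: 'Sherlock Holmes', color: '#9933cc' },
]);
```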

Color-coded Labels

Datasaur supports HTML color codes. For your reference, below are the default colors provided by Datasaur for better viewing clarity in your project.

  • #df3920

  • #ff8000

  • #ffc826

  • #91b34d

  • #4db34d

  • #33cc99

  • #3399cc

  • #3370cc

  • #3333cc

  • #7033cc

  • #9933cc

  • #cc3399

Limit Selection to Bottom-level Labels Only

In projects with hierarchical label structures, some labels serve as broad categories while others act as bottom-level classifications. This setting ensures precise data annotation by restricting selection to only bottom-level labels—those without child labels. It prevents the use of broad categories, reducing ambiguity and improving consistency in labeled data.

When to Use?

This setting is particularly useful for projects that require detailed and specific classification, ensuring that only the most precise labels are applied.

Here is an example use case for food categorization:

  • Fruit

    • Apple

    • Banana

  • Vegetable

    • Carrot

    • Spinach

With this setting enabled, labelers can only select Apple, Banana, Carrot, and Spinach, but not the broader categories Fruit or Vegetable.
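To illustrate the rule, here is a sketch of how bottom-level labels can be derived from a hierarchical label set: a label is bottom-level when no other label references it as a parent. The data shape mirrors the JSON label format shown later on this page; `bottomLevelLabels` itself is an illustrative helper, not a Datasaur API.

```typescript
// A label is bottom-level (selectable) when no other label has it as parent.
interface HierLabel {
  id: string;
  label: string;
  parentId?: string | null;
}

function bottomLevelLabels(labels: HierLabel[]): string[] {
  // Collect every id that appears as someone's parentId.
  const parents = new Set(
    labels.map((l) => l.parentId).filter((p): p is string => p != null)
  );
  // Labels never referenced as a parent are the leaves.
  return labels.filter((l) => !parents.has(l.id)).map((l) => l.label);
}

const leaves = bottomLevelLabels([
  { id: '1', label: 'Fruit' },
  { id: '2', label: 'Apple', parentId: '1' },
  { id: '3', label: 'Banana', parentId: '1' },
  { id: '4', label: 'Vegetable' },
  { id: '5', label: 'Carrot', parentId: '4' },
  { id: '6', label: 'Spinach', parentId: '4' },
]);
// leaves: ['Apple', 'Banana', 'Carrot', 'Spinach']
```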

How to Configure the Setting

This setting can be enabled when creating a new project, modifying an existing one, or managing label set templates.

Project Creation Wizard

  1. Create new project.

  2. Go to Step 3 and select Span labeling.

  3. In the Label Set section, click the triple-dot menu on the label set you want to modify and enable the setting.

Within the Project

  1. Open Labels extension and select the label set.

  2. Click the triple-dot menu and choose one of the following:

    1. Add new label set.

    2. Replace existing label set.

    3. Edit label set.

  3. Expand the Label set settings accordion and enable the setting.

Label Management

  1. Navigate to Label management page.

  2. Select Add label set or update the existing label set.

  3. Expand Label set settings accordion and enable the setting.

Labeling Behavior

Once this setting is enabled, label selection will be limited to bottom-level labels. This means:

  • Only bottom-level labels can be selected.

  • Parent labels with child labels will no longer be selectable.

  • Keyboard shortcuts (numbers, arrow keys, Enter) will apply only to bottom-level labels.

Bounding Box Labeling

Label Sets

You can utilize .csv, .tsv, or .json formats for the bounding box label set.

  • For .csv/.tsv, we support color names (e.g., red), hex values (e.g., #00FF00), and RGB (e.g., rgb(0,0,255)). You can also utilize a label set with just names, as shown in the Datasaur sample - Bbox label set (only name).csv below. Other values such as captionAllowed and captionRequired will use default settings.

  • For .json, we support hex and RGB only.

Text Transcription

The Text Transcription setting allows the labeler to add corresponding text to a bounding box. Disabling this setting means the labeler cannot add text.

Require caption

With the Text Transcription setting turned on, the labeler can add text to a bounding box. You can choose whether a specific label must have text by enabling or disabling the Require caption checkbox.

Row-based/Document-based Labeling

For row-based or document-based projects, a label set is a .csv with questions in the first column and answers in subsequent columns:

Column 1      Column 2    Column 3    Column 4    ...
Question 1    Answer 1    Answer 2    Answer 3
Question 2    Answer 1    Answer 2
Question 3    Answer 1    Answer 2    Answer 3    Answer 4    Answer 5

You can also create a .json for a label set that has multiple question types.
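As a sketch of how such a question set could be consumed, the TypeScript below turns question/answer rows like the table above into simple dropdown-style question objects. The `DropdownQuestion` shape and `parseQuestionCsv` are illustrative, not Datasaur's importer.

```typescript
// Each CSV row: first column is the question, remaining columns are answers.
interface DropdownQuestion {
  label: string;
  options: string[];
}

function parseQuestionCsv(rows: string[][]): DropdownQuestion[] {
  return rows.map(([question, ...answers]) => ({
    label: question,
    // Rows may have trailing empty cells; keep only real answers.
    options: answers.filter((a) => a !== ''),
  }));
}

const questionSet = parseQuestionCsv([
  ['Question 1', 'Answer 1', 'Answer 2', 'Answer 3'],
  ['Question 2', 'Answer 1', 'Answer 2'],
]);
```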

Question Hint

You can optionally set a hint for each question. A hint can include additional instructions or explanations, which help labelers submit the answers most relevant to the task. A single question's hint can contain up to roughly 65,000 characters.

Supported Format

  • Bold: **your text**

  • Italic: *your text*

  • Underline: <u>your text</u>

  • Bullet list: dashes (-) or asterisks (*)

  • Numbered list: 1., 2., 3., etc.

  • Link: [your text](https://example.com)

Keep in mind, the markdown symbols will count towards the character limit.

Best Practices

We recommend keeping hints brief and focused on the relevant information for the labelers’ task. Longer hints may appear as large text blocks, which can clutter the UI. For more complex information or media, consider including links that labelers can easily click on.

Question Types

As mentioned before, label sets for row-based and document-based projects are sets of questions. Let's take a look at the question types available below.

1. Text Field

Text Field allows the labeler to answer questions by typing free-form text on a single line.

Users can also add validation by expanding the Advanced Settings.

2. Text Area

Text Area allows the labeler to answer questions by typing in free-form text. In contrast to Text Fields, this allows for multiple-line answers.

3. Dropdown

Dropdown requires labelers to answer questions by picking one of several multiple-choice answers.

  • If you have a .csv with a pre-set list of answers, you can upload the .csv as an answer set.

  • You can also allow the labelers to select multiple answers by checking the box for Allow multiple choices.

4. Hierarchical Dropdown

Hierarchical dropdown allows the labeler to answer questions with hierarchically organized options.

5. True/False

Previously known as Yes/No, this question type has been renamed to True/False.

True/False allows labelers to answer the question by checking a box. You can also add a description.

6. Single Choice

Previously known as Radio Button, this question type has been renamed to Single Choice.

Single Choice allows the labeler to answer questions by selecting one answer.

You can configure up to 25 answer options for this question type.

You can also insert a hint to give a description of the Single Choice. Here is an example of using the Single Choice in the labeling process:

7. Multiple Choice

Multiple Choice allows the labeler to submit multiple answers by selecting more than one option from a list, or they can choose just one option if necessary.

The options are displayed as a staggered grid of checkboxes, making it more suitable for a smaller and simpler set of options. You can configure up to 25 answer options for this question type.

8. Date

Date allows the labeler to answer the question in two ways:

  • Typing the date in manually.

  • Clicking on the calendar symbol, then selecting the date.

The key benefit of selecting Date is that this format validates that a correct date has been filled in.

If you want to fill date questions with the current timestamp at the time the labeler opens the project, you can check the Use current date as default value box on Step 3.

9. Time

Time allows the labeler to answer the question in two ways:

  • Typing it manually.

  • Clicking on the clock symbol, then selecting the time.

The key benefit of selecting Time is that this format validates that a correct time has been filled in.

If you want to fill time questions with the current timestamp at the time the labeler opens the project, you can check the Use current time as default value box on Step 3.

10. Slider

Slider allows the labeler to answer the question by moving the sliding bar (ex: from 1 to 10).

To avoid subjective measurement, you can also hide the value from labelers in Step 3. Please note that the value will be visible in the reviewer mode.

You have the flexibility to personalize the slider color according to your preferences. While the default color for “Start at” and “End to” is blue, we provide 11 alternative default color options for you to select from.

To get a glimpse of how the color will appear, simply drag the slider thumb on the Preview.

Please note that we only allow numbers as the slider value.

11. URL

URL allows the labeler to enter URL links, with validation applied to them.

12. Grouped Attributes

Grouped Attributes allows the labeler to combine multiple questions that pertain to a single group.

13. Script-Generated Questions

Only supported in Row Labeling projects.

Advanced Settings

In Row labeling projects, you can use the advanced setting “Refer answer to table column.”

Refer answer to table column

This feature is beneficial if you want to link answers to specific columns. A typical scenario for this is when you have a pre-labeled file and need to review the responses. Enabling this eliminates the need to apply the answers from scratch!

To enable this feature, navigate to Step 3 of the Project Creation Wizard and locate the Advanced Settings section. Here, you can choose the column headers for the questions you wish to bind.

Please note that this configuration can only be done during the project creation process.

After completing the project creation process, open the created project. You can now observe the binding result in the Document Labeling extension. The bound question is now filled with the answer from the bound column of the selected row.

Answer Validation Script

The Answer Validation Script is a highly flexible feature powered by TypeScript designed to help validate the logic of answering a row in Row labeling tasks. With this feature, you can write validation scripts to handle complex scenarios, such as verifying labeled data using other answers, comparing data across questions, or using external APIs for dynamic validation. Once the script is configured, if labelers or reviewers attempt to submit a row that fails validation, an error message will be displayed.

This functionality enables better control, accuracy, and consistency in the labeling process.

Key Capabilities

  1. Row-Specific Validation: Validates data based on the content of the current row.

  2. Cross-Question Validation: Checks answers by comparing them with answers from other questions.

  3. API-based Validation: Incorporates validations that rely on external APIs or external business logic.

How to Configure the Validation Script

Note

  • The validation cannot be configured if the questions have not been set up yet.

  • Only accessible by Admins in Reviewer mode.

  1. Go to the Row Labeling Extension inside the project.

  2. Click on the three-dot menu.

  3. Select "Configure answer validation script…".

Configuring the Script

When opening the Answer Validation Script dialog for the first time, you will be prompted with this template:

View Template
/**
 * This function should return a ValidationResult object.
 *
 * @param {ValidationArgs} args - The arguments for the validation.
 * @returns {ValidationResult} The result of the validation.
 * @example
 * To return an error:
 *   return { errorMessage: "This is a sample error message." };
 * To return no error:
 * return {}
 */
async ({ columns, row, questions, answers }: ValidationArgs): Promise<ValidationResult> => {
  // TODO: Implement your validation logic here.

  return {};

  /** Helper functions with access to the validation args */

  /**
   * Get the answer for a question by its label.
   * For grouped attributes, pass in the questions array from the grouped attribute.
   * @param label - The label of the question.
   * @param searchQuestions - Optional. The questions to search through. Defaults to the questions in the validation args.
   * @returns The answer for the question.
   */
  function getAnswerByQuestionLabel(label: string, searchQuestions: Question[] = questions) {
    const question = searchQuestions.find((q) => q.label === label);
    if (!question) throw new Error(`Could not find question with label: "${label}".`);
    return answers[`Q${question.id}`];
  }

  /**
   * Get the value of a cell by its column label.
   * @param label - The label of the column.
   * @returns The value of the cell.
   */
  function getCellValueByColumnLabel(label: string) {
    const column = columns.find((column) => column.name === label);
    if (!column) {
      throw new Error(`Couldn't find column with label: "${label}".`);
    }

    const cell = row.find((row) => row.index === column.id);
    if (!cell) {
      throw new Error(`Couldn't find cell with index: ${column.id}. Column: "${column.name}"`);
    }

    return cell.content;
  }
};

To decide whether to pass or fail the submission, return an object with or without an errorMessage property:

async ({ columns, row, questions, answers }: ValidationArgs): Promise<ValidationResult> => {
  // this script will always prevent the labeler from submitting the answer.
  return { errorMessage: 'Please double-check your answers.' };
}
async ({ columns, row, questions, answers }: ValidationArgs): Promise<ValidationResult> => {
  // this script will always allow the labeler to submit the answer.
  return {};
}

When validating, you will likely need certain information to determine whether the answer is valid or requires adjustment before submission. All of this information is provided through the function arguments, as demonstrated in the template above.

  • columns: TableColumn[] holds information about the column structure of the data being labeled.

View Structure
interface TableColumn {
  id: number;
  name: string;
  displayed: boolean;
  labelerRestricted: boolean;
  rowQuestionId?: number;
}
  • row: Cell[] is an array of cells containing data that is being labeled.

View Structure
interface Cell {
  line: number;
  index: number;
  content: string;
  tokens: string[];
  metadata?: CellMetadata[];
}

interface CellMetadata {
  key: string;
  value: string;
  type?: string;
  pinned?: boolean;
  config?: TextMetadataConfig;
}

interface TextMetadataConfig {
  backgroundColor: string;
  color: string;
  borderColor: string;
}
  • questions: Question[] is an array of questions of the project.

View Structure
enum QuestionType {
  DROPDOWN = 'DROPDOWN',
  HIERARCHICAL_DROPDOWN = 'HIERARCHICAL_DROPDOWN',
  NESTED = 'NESTED',
  TEXT = 'TEXT',
  SLIDER = 'SLIDER',
  DATA = 'DATA',
  DATE = 'DATE',
  TIME = 'TIME',
  CHECKBOX = 'CHECKBOX',
  URL = 'URL',
}

enum SliderTheme {
  PLAIN = 'PLAIN',
  GRADIENT = 'GRADIENT',
}

interface Question {
  id: number;
  label: string;
  type: QuestionType;
  required: boolean;

  activationConditionLogic?: string;
  bindToColumn?: string;

  config: QuestionConfig;
}

interface QuestionConfig {
  multiple?: boolean;

  // TEXT
  maxLength?: number;
  minLength?: number;
  pattern?: string;
  multiline?: boolean;

  // SLIDER
  theme?: SliderTheme;
  min?: number;
  max?: number;
  step?: number;

  // DATE TIME
  format: string;
  defaultValue?: string;

  // DROPDOWN HIERARCHICAL
  options?: Array<{ id: string; label: string; parentId?: string | null }>;

  // GROUPED
  questions?: Array<Question>;

  // CHECKBOX
  hint?: string;
}
  • answers is an object containing answers, keyed by question ID (e.g. Q1). Depending on the question type, each value takes one of four shapes:

                            multiple: false    multiple: true
        normal question     string             string[]
        grouped attributes  Answer             Answer[]

View Structure
type Answer = string | string[] | Answers[];

interface Answers {
  [questionId: string]: Answer;
}
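The four shapes can be distinguished at runtime. The sketch below is an illustrative helper (the type definitions are copied from the structure above) showing one way to narrow an answer value:

```typescript
// Types copied from the documented answer structure.
type Answer = string | string[] | Answers[];

interface Answers {
  [questionId: string]: Answer;
}

// Hypothetical helper: classify an answer into one of the four shapes.
function describeAnswer(answer: Answer): string {
  if (typeof answer === 'string') return 'single value'; // multiple: false
  const items = answer as (string | Answers)[];
  if (items.every((item) => typeof item === 'string')) {
    return 'multiple values'; // multiple: true
  }
  return 'grouped attributes'; // nested Answers objects
}
```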

The template also provides helper functions for the most basic data access:

  • function getCellValueByColumnLabel(label: string): string;

    This function helps you obtain data based on the column’s label.

  • function getAnswerByQuestionLabel(label: string, searchQuestions: Question[] = questions): Answer;

    This function helps you obtain the answer value based on the question’s label.

Validating Answer Through an API Call

Disclaimer

  • We do not accept any responsibility for any API calls that are misrouted, improperly configured, or sent to unintended parties, which may lead to the exposure, leakage, or compromise of data confidentiality.

  • Users are fully responsible for ensuring the accuracy, security, and integrity of API configurations and transmissions. By using our services, you acknowledge and accept these responsibilities.

Examples

Validating answers between two questions
async ({ questions, answers }: ValidationArgs): Promise<ValidationResult> => {
  const questionAAnswer = getAnswerByQuestionLabel("Question A");
  const questionBAnswer = getAnswerByQuestionLabel("Question B");

  if (questionAAnswer !== questionBAnswer) {
    return { errorMessage: "The answer to Question A must match Question B." };
  }

  return {}; // Pass the validation
  
  // ...existing helpers provided by template
};
Validating an answer based on a cell value
async ({ columns, row, questions, answers }: ValidationArgs): Promise<ValidationResult> => {
  const statusColumnValue = getCellValueByColumnLabel("Status");
  const approvalStatusAnswer = getAnswerByQuestionLabel("Approval Status");

  if (statusColumnValue === "Complete" && approvalStatusAnswer !== "Approved") {
    return { errorMessage: "If 'Status' is 'Complete', 'Approval Status' must be 'Approved'." };
  }

  return {};
  
  // ...existing helpers provided by template
};
Validating an answer through an API request using the Fetch API
async ({ columns, row, questions, answers }: ValidationArgs): Promise<ValidationResult> => {
  // illustrative request shape; adjust the method, headers, and body to your API
  const response = await fetch('https://some.api.net/validate', {
    method: 'POST',
    body: JSON.stringify({ answers }),
  });
  // assume the API returns a JSON object: { result: 'valid|invalid', message: 'the error message' }
  const data = await response.json();

  if (data.result === 'invalid') {
    return { errorMessage: data.message };
  }

  return {};

  // ...existing helpers provided by template
}

FAQs

  1. Can I validate across multiple rows?

    No, the validation script is row-specific. It operates on individual rows being labeled.

  2. What happens if there's an error in the script?

    Unhandled exceptions or errors in the script will result in validation errors being shown and prevent the labeler from submitting their answers. You may choose to catch the error inside the script and let the submission continue if needed.
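The pattern from FAQ 2 can be sketched as follows: wrap the remote call in try/catch so an API failure falls back to passing validation instead of blocking submission. The `doFetch` parameter and the `{ result, message }` response shape are assumptions for illustration.

```typescript
type ValidationResult = { errorMessage?: string };

// Hypothetical wrapper: doFetch stands in for a real fetch-based call.
async function validateWithFallback(
  payload: unknown,
  doFetch: (body: string) => Promise<{ result: string; message: string }>
): Promise<ValidationResult> {
  try {
    const data = await doFetch(JSON.stringify(payload));
    if (data.result === 'invalid') return { errorMessage: data.message };
    return {};
  } catch {
    // API unreachable or script error: let the submission continue.
    return {};
  }
}
```

Whether a failed call should pass or block is a policy choice; returning an errorMessage in the catch block would block submission instead.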

Hierarchical Label Sets or Dropdown Options

Users can upload multi-level hierarchical label sets for Span Labeling projects, and hierarchical dropdown options for Row or Document Labeling projects.

Here’s a sample that can be used for both hierarchical label set and dropdown options in CSV format:

id,label
1,Novel
1.1,Author
1.1.1,Name
1.1.2,Works
1.2,Title
1.2.1,Main Title
1.2.2,Subtitle
2,Characters
2.1,Antagonist
2.2,Protagonist

Components of this file

1. Header

The header id,label will always be the first row in the CSV file. The first label/option should have 1 as the ID, just like in the example above.

2. ID format

The ID format follows a structure similar to Microsoft Word's numbering format. In the example above:

  • Novel is the root level.

    • 1 is the ID for the root level.

  • Author is a second-level category under Novel.

    • 1.1 is the ID for the second level.

Important Notes

When importing data, the CSV format allows dots (.) to represent hierarchical relationships. However, these dots are automatically converted into a different ID structure in the JSON representation, because dots are reserved for path traversal operations in the system. This means dots must not be used in JSON IDs. Here's how it works:

  • CSV Input:

    id,label
    1,Novel
    1.1,Author
  • Will be converted to JSON as:

    [  
      {    
        "label": "Novel",    
        "id": "1"  
      },  
      {    
        "label": "Author",    
        "id": "2",    
        "parentId": "1"  
      }
    ]

    In JSON format:

    • ✅ Correct: "id": "2"

    • ❌ Incorrect: "id": "1.1"

    Using dots in JSON IDs will cause incorrect path resolution when selecting items in the hierarchy.
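The conversion described above can be sketched in TypeScript. `csvToJsonLabels` illustrates the documented behavior (sequential JSON IDs plus parentId links); it is not Datasaur's actual importer, and it assumes label names without commas.

```typescript
interface JsonLabel {
  label: string;
  id: string;
  parentId?: string;
}

// Convert dotted CSV IDs ("1.1") to sequential JSON IDs with parentId links.
function csvToJsonLabels(csv: string): JsonLabel[] {
  const rows = csv.trim().split('\n').slice(1); // skip the `id,label` header
  const idByDotted = new Map<string, string>(); // dotted CSV id -> JSON id
  return rows.map((row, index) => {
    const [dotted, label] = row.split(',');
    const id = String(index + 1); // sequential JSON id, no dots
    idByDotted.set(dotted, id);
    // The parent's dotted id is everything before the last dot, if any.
    const parentDotted = dotted.includes('.')
      ? dotted.slice(0, dotted.lastIndexOf('.'))
      : undefined;
    const parentId = parentDotted ? idByDotted.get(parentDotted) : undefined;
    return parentId ? { label, id, parentId } : { label, id };
  });
}

const labels = csvToJsonLabels('id,label\n1,Novel\n1.1,Author');
// labels: [{ label: 'Novel', id: '1' }, { label: 'Author', id: '2', parentId: '1' }]
```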

3. Hierarchical Label Set in Span Labeling Projects

The hierarchy will be visible in the Labels extension and the label dropdown.

You can also use the same label name under different parent labels.

id,label
1,Software
1.1,Java
2,Geography
2.1,Java

Even though "Java" appears twice, each belongs to a different parent, making it contextually unique.

However, using the same label name more than once under the same parent is not allowed:

id,label
1,Fruit
1.1,Apple
1.2,Apple

In this case, the system will flag an error because both "Apple" entries are under "Fruit".
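A sketch of the duplicate check described above: two labels collide only when they share both a parent and a name. `findDuplicateLabels` is an illustrative helper, not Datasaur's validator.

```typescript
interface HierLabel {
  id: string;
  label: string;
  parentId?: string;
}

// Flag label names repeated under the same parent, while allowing the
// same name under different parents.
function findDuplicateLabels(labels: HierLabel[]): string[] {
  const seen = new Set<string>();
  const duplicates: string[] = [];
  for (const { label, parentId } of labels) {
    const key = `${parentId ?? 'root'}/${label}`; // parent + name identifies a label
    if (seen.has(key)) duplicates.push(label);
    seen.add(key);
  }
  return duplicates;
}

const dupes = findDuplicateLabels([
  { id: '1', label: 'Fruit' },
  { id: '2', label: 'Apple', parentId: '1' },
  { id: '3', label: 'Apple', parentId: '1' }, // same name, same parent
]);
// dupes: ['Apple']
```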

4. Hierarchical Dropdown Options in Row or Document Labeling Projects

You have to choose Hierarchical Dropdown as the question type when creating the project.

The hierarchy will be displayed in the Row Labeling extension and the answer column in the table.

Tips

  • Clicking the Home icon navigates directly to the top-level label.

  • You can search for bottom-level options globally.

Question hints can be set during project creation or when configuring a question set on the Label management page.

The extensions can parse markdown syntax in question hints. This gives you flexibility in formatting the text, enabling you to present lists, attach links to external sites, or emphasize certain parts of the hint, all with familiar syntax. Examples are listed under Supported Format above.

Just like with the Dropdown type, you can also upload an answer set once you have created the hierarchical question. The format for hierarchical label sets is described in the Hierarchical Label Sets or Dropdown Options section.

When it comes to colors, you have the choice of using hex codes, color names, or RGB values. If you opt for any of these, the dropdown will be labeled as “Custom”.

Script-Generated Questions is an advanced question type that dynamically generates different questions for each row based on its data. Unlike predefined question sets, this approach allows for flexible, on-the-fly question generation, making it ideal for scenarios where static question lists are insufficient. For more details, see the Script-Generated Question page.

This feature is only available in Row Labeling projects and is disabled by default. Please reach out to support@datasaur.ai if your team needs this feature, and we'll assist you!

You can include API requests in your validation, enabling dynamic or third-party validations by using the Fetch API.

Sample files:

  • Datasaur sample - Token label set.csv (NER label set)

  • Datasaur sample - Token label set (colored).csv (colored label set)

  • Datasaur sample - Bbox label set (only name).csv

  • Datasaur sample - Bbox label set (with header).csv

  • Datasaur sample - Bbox label set (without header).csv

  • Datasaur sample - Bbox.json

  • Datasaur sample - Bbox (with custom attributes).json

  • Datasaur sample - Row or Document question set (dropdown only).csv (Book Review question set that only contains the dropdown question type)

  • Datasaur sample - Row or Document question set (multiple).json (Book Review question set)

  • Datasaur sample - Question set (dropdown).csv (Book Genre answer set)

  • Datasaur sample - Question set (custom slider color).json

  • Datasaur sample - Hierarchical label set (dropdown options).csv