# Hugging Face

**Supported labeling types**: `Span labeling`, `Row labeling`

Datasaur integrates directly with Hugging Face, providing access to their 10k+ models.

After selecting Hugging Face as the provider, browse the available models at [Hugging Face](https://huggingface.co/models) and choose one. If you host your own private models on Hugging Face, you can use those as well.

#### Span Labeling

For span labeling, you can either enter the model name or the endpoint URL if you're using a self-hosted model. There's no need to provide a model name or API token when using your own endpoint. You can also set the confidence score to manually adjust the prediction threshold based on your needs.
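To illustrate how the confidence score acts as a threshold, here is a minimal sketch in Python. The prediction shape mirrors Hugging Face's aggregated token-classification output (`entity_group`, `word`, `score`, `start`, `end`); the exact filtering Datasaur performs internally is an assumption.

```python
def filter_spans(predictions, threshold):
    """Keep only span predictions whose confidence meets the threshold.

    Assumes the aggregated token-classification output format used by
    Hugging Face models; Datasaur's internal filtering may differ.
    """
    return [p for p in predictions if p["score"] >= threshold]

# Hypothetical model output for one document
predictions = [
    {"entity_group": "PER", "word": "Ada Lovelace", "score": 0.97, "start": 0, "end": 12},
    {"entity_group": "ORG", "word": "Analytical Engine", "score": 0.42, "start": 25, "end": 42},
]

# With a 0.8 threshold, only the high-confidence PER span survives
print(filter_spans(predictions, 0.8))
```

Raising the threshold trades recall for precision: fewer spans are pre-labeled, but those that remain are more likely to be correct.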

<figure><img src="https://448889121-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MbjY0HseEqu7LtYAt4d%2Fuploads%2Fgit-blob-9ab62c5d77186da8f09eb90c1e50b81edd2c2d92%2FExtension%20-%20ML-assisted%20Labeling%20-%20Span%20labeling%20-%20Hugging%20Face%20-%20highlight.png?alt=media" alt="Image of ML Assisted with Hugging Face for Span Based"><figcaption><p><strong>ML-assisted labeling</strong> with Hugging Face for span labeling</p></figcaption></figure>

#### Row Labeling

In row labeling, you can choose the **Target text** as your input and the **Target question** as your desired output. To get started, enter either the **model name** or the **Hugging Face Inference Endpoint URL**, along with your **API token**.

When choosing a model for predicting labels, use a text-classification model. The model should return either a list of lists, where each inner list contains a prediction object for every label (e.g. positive, negative, neutral), like this:

```json
[
  [
    { "label": "positive", "score": 0.8 },
    { "label": "neutral", "score": 0.15 },
    { "label": "negative", "score": 0.05 }
  ],
  [
    { "label": "negative", "score": 0.6 },
    { "label": "neutral", "score": 0.3 },
    { "label": "positive", "score": 0.1 }
  ]
]
```

or a flat list that contains only the single highest-scoring prediction for each row, like this:

```json
[
  { "label": "positive", "score": 0.8 },
  { "label": "negative", "score": 0.6 }
]
```

This feature also includes a **Faster prediction speed** option, which significantly improves performance by processing all rows at once. Note that labels applied with this option can't be undone.

Finally, you can adjust the **Confidence score** to manually set the prediction threshold according to your preference.

<figure><img src="https://448889121-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MbjY0HseEqu7LtYAt4d%2Fuploads%2Fgit-blob-4753f268e591fe20a6d8552069570e2c651b3b3d%2FExtension%20-%20ML-assisted%20Labeling%20-%20Row%20labeling%20-%20Hugging%20Face%20-%20highlight.png?alt=media" alt="Image of ML Assisted with Hugging Face for Row Based"><figcaption><p><strong>ML-assisted labeling</strong> with Hugging Face for row labeling</p></figcaption></figure>

When you click the **Predict labels** button, the project automatically applies labels to the document based on the loaded model.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.datasaur.ai/assisted-labeling/ml-assisted-labeling/ml-assisted-using-huggingface.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
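As a sketch, a client could build the query URL like this in Python; the question text is illustrative, and the only requirement from the docs above is that the `ask` parameter be URL-encoded.

```python
from urllib.parse import urlencode

# Page URL from the documentation above
BASE = ("https://docs.datasaur.ai/assisted-labeling/ml-assisted-labeling/"
        "ml-assisted-using-huggingface.md")

def build_ask_url(question):
    """Build the documentation query URL with a URL-encoded question."""
    return f"{BASE}?{urlencode({'ask': question})}"

# Example question (illustrative)
print(build_ask_url("Which response shapes are accepted for row labeling?"))
```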
