# Custom metrics

### Overview

Custom metrics in automated evaluation projects let you define your own evaluation setup to fit your project’s needs. With custom metrics, you control the evaluation criteria, scoring ranges, and evaluator models to ensure accurate and meaningful assessments.

### Get started

To use custom metrics:

1. Navigate to the **Evaluation** page under the **LLM Labs** menu.
2. Click **Create evaluation project**, choose the **Automated evaluation** project type, then click **Continue**.

   <figure><img src="/files/CESwPmudc4oEmZkYXy4p" alt=""><figcaption></figcaption></figure>
3. Configure your evaluation by selecting the models to evaluate and choosing a dataset from the library. If you don’t have one, you can upload a dataset in CSV format containing two columns: `prompt` and `expected completion`.

   <figure><img src="/files/w93nqErszbr2VjRip28J" alt=""><figcaption></figcaption></figure>
4. In step 2, change the metric to **Custom**.
5. Set up the custom metric configuration, which consists of:

   * **Evaluator model:** The model that will evaluate the outputs of your model.
   * **Custom evaluator name:** Enter a unique name to identify your custom evaluation.
   * **Minimum and maximum score:** Define the scoring range (example: 0 to 100). If the metric uses inverted scoring, you can set the minimum value higher than the maximum (example: 100 to 0).
   * **Prompt:** Write a clear, detailed prompt that explains the evaluation process. Include specific criteria and instructions for assessing responses.

   <figure><img src="/files/pT02OHq5mm0lJyP5vGHF" alt=""><figcaption></figcaption></figure>
6. Click **Create evaluation project** and wait for your evaluation process to finish.
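The CSV dataset described in step 3 can also be prepared programmatically. Below is a minimal sketch using Python's standard `csv` module; the file name and example rows are illustrative, and only the `prompt` and `expected completion` column headers come from the steps above:

```python
import csv

# Column headers required by the dataset upload (see step 3).
FIELDNAMES = ["prompt", "expected completion"]

# Illustrative rows -- replace with your own evaluation data.
rows = [
    {"prompt": "What is the capital of France?",
     "expected completion": "Paris"},
    {"prompt": "Translate 'hello' to Spanish.",
     "expected completion": "hola"},
]

with open("evaluation_dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()   # first line: prompt,expected completion
    writer.writerows(rows)
```

`csv.DictWriter` handles quoting automatically, so prompts containing commas or quotes remain valid CSV.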

### Analyze the evaluation results

After the evaluation process completes, you can analyze the results. [Learn more about analyzing the results.](https://docs.datasaur.ai/llm-projects/evaluation/automated-evaluation#analyzing-the-evaluation-results)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.datasaur.ai/llm-projects/evaluation/automated-evaluation/custom-metrics.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
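As a sketch, the query above can be issued from Python's standard library. The helper below builds the `ask` URL and fetches the answer; the question text is illustrative:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = ("https://docs.datasaur.ai/llm-projects/evaluation/"
        "automated-evaluation/custom-metrics.md")

def ask(question: str) -> str:
    """GET the documentation page with the `ask` query parameter."""
    # urlencode percent-escapes the question so it is URL-safe.
    url = f"{BASE}?{urlencode({'ask': question})}"
    with urlopen(url) as resp:  # network call -- requires connectivity
        return resp.read().decode("utf-8")

# Example (illustrative question):
# answer = ask("What score range should I use for inverted scoring?")
```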
