# Performance Monitoring

### Overview

Performance Monitoring extends Automated Evaluation by letting you schedule recurring evaluations of your models. Once a schedule is set, evaluations run automatically, so you can review results regularly and catch regressions early.

### Get started

To use performance monitoring:

1. Navigate to the **Performance monitoring** page under the **Evaluation** menu.
2. Click **Create performance monitoring**.

   <figure><img src="/files/4B8SiJDQP7r8rFE5iWUY" alt=""><figcaption></figcaption></figure>
3. Configure your evaluation by selecting a model to evaluate and choosing a [dataset](https://docs.datasaur.ai/llm-projects/dataset) from the library. If you don’t have a dataset in the library, you can upload one in CSV format containing two columns: `prompt` and `expected completion`.

   <figure><img src="/files/cvQy2pdUUDyepgc6ZElo" alt=""><figcaption></figcaption></figure>
4. Select the metric, provider, and the evaluator model you want to use for evaluation. [Learn more about the evaluators and metrics](https://docs.datasaur.ai/llm-projects/evaluation/automated-evaluation#evaluators).

   <figure><img src="/files/ZFHl2I5Bxt0tO7XZw7g8" alt=""><figcaption></figcaption></figure>
5. In the final step, configure the schedule for the evaluation process. You will need to set:

   1. **Recurrence**: You can choose the frequency of your evaluation. The available options are:
      1. Daily at 12:00 AM: Your evaluation will be performed on a daily basis at midnight.
      2. Weekly on Sunday at 12:00 AM: Your evaluation will be performed weekly on Sunday at midnight.
      3. Monthly on day at 12:00 AM: Your evaluation will be performed monthly on the first day of the month at midnight.
      4. Custom: You can set and configure your own evaluation frequency.

         <figure><img src="/files/Ss6rLFIsan6XsCyS1aEQ" alt=""><figcaption></figcaption></figure>
   2. **Monitor performance drift**: Get notified about LLM performance drift over time. Datasaur will notify you via email when any generated completion deviates beyond a specified threshold during scheduled evaluations, indicating potential performance deterioration.
   3. **Run immediately**: Evaluate your model right away after creating the project, regardless of the recurrence settings.

   <figure><img src="/files/J6eykr3SSu5yxnFTubAi" alt=""><figcaption></figcaption></figure>
6. Click **Create evaluation project**, and your performance monitoring project will be created.

   <figure><img src="/files/yoU8xFCoMMwccCPMin7N" alt=""><figcaption></figcaption></figure>
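Step 3 above accepts a CSV upload with the two required columns. A minimal script to produce such a file (the file name and rows are illustrative, not Datasaur-provided data):

```python
import csv

# Illustrative evaluation rows; replace with your own prompts and
# the completions you expect your model to produce.
rows = [
    {"prompt": "What is the capital of France?", "expected completion": "Paris"},
    {"prompt": "Translate 'hello' to Spanish.", "expected completion": "Hola"},
]

# The header must match the two column names the upload expects.
with open("evaluation_dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "expected completion"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting `evaluation_dataset.csv` can then be uploaded in the dataset step of the wizard.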

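The drift notification in step 5 fires when scores deviate beyond a threshold. As a conceptual sketch only (this is not Datasaur's actual implementation; the function name and signature are hypothetical), the comparison might look like:

```python
def detect_drift(baseline_avg: float, current_scores: list[float], threshold: float) -> bool:
    """Return True when the average score of the latest scheduled run
    deviates from the baseline average by more than the allowed threshold.
    Conceptual sketch only; the real drift check is internal to Datasaur."""
    if not current_scores:
        return False
    current_avg = sum(current_scores) / len(current_scores)
    return abs(current_avg - baseline_avg) > threshold

# Example: baseline average 0.90, latest run averages 0.70, threshold 0.15
print(detect_drift(0.90, [0.65, 0.70, 0.75], 0.15))  # True -> notify
```

When the check returns `True`, an email notification would be sent, as described above.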
### Evaluation process

Inside the project, you can click **Run now** to manually start the evaluation process.

<figure><img src="/files/sCZcpkqPo1Bw4uQc7I5I" alt=""><figcaption></figcaption></figure>

Once the evaluation process has started, you will need to wait until it is completed. You'll receive an email once it's finished, or you can refresh the page to see the latest update.

<figure><img src="/files/rltrxx5dz87dpRX38q59" alt=""><figcaption></figcaption></figure>

### Analyze the evaluation result

After the evaluation process is completed, you can analyze the results.

<figure><img src="/files/yxr1CG1nPQKejtHZWmP9" alt=""><figcaption></figcaption></figure>

#### Summary

In the Summary section, you can see the cost and the processing time of the evaluation process, along with the average evaluator score and the overall performance result.

<figure><img src="/files/hBvmUt6w1sHED1vm80Rd" alt=""><figcaption></figcaption></figure>

#### Result and score

In the results section, you can see the completions generated by your model, along with their scores for the selected metric, reasons behind the scores, and overall performance.

<figure><img src="/files/A8uChRGQpI2S2OuAQRHF" alt=""><figcaption></figcaption></figure>

#### Evaluation details

To view the evaluation details of a completion, click the **More** icon (three dots) at the far right of the row, then select **View details**.

<figure><img src="/files/dZkP7zNRjxh5ZclyWPSK" alt=""><figcaption></figcaption></figure>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.datasaur.ai/llm-projects/evaluation/performance-monitoring.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
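Questions containing spaces or punctuation must be URL-encoded before being placed in the `ask` parameter. A small sketch of building the request URL in Python (the question text is illustrative):

```python
from urllib.parse import quote

base = "https://docs.datasaur.ai/llm-projects/evaluation/performance-monitoring.md"
question = "How do I change the recurrence of a monitoring project?"  # illustrative

# quote() percent-encodes spaces and the trailing '?' so the question
# survives as a single query-parameter value.
url = f"{base}?ask={quote(question)}"
print(url)
```

The resulting URL can then be fetched with any HTTP client that performs a plain GET request.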
