Ranking (RLHF)

Overview

The Ranking feature in Datasaur's LLM Labs allows users to generate datasets for Reinforcement Learning from Human Feedback (RLHF). It is part of the LLM Labs Evaluation Module and provides a seamless way to generate and rank completions based on previously saved Ground Truth prompts.

Prerequisites

To use the Ranking evaluation feature, complete the prerequisites below, depending on what you want to evaluate:

To evaluate pre-generated completion results:

  1. Prepare a dataset in a CSV file with several columns: prompt and completion_1, completion_2, completion_3, and so forth up to completion_xx.
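
As a minimal sketch, the following script writes a file in the expected layout. The column names follow the format described above; the prompt and completions here are hypothetical placeholders, as is the file name:

```python
import csv

# Hypothetical rows; the column names follow the documented format:
# one "prompt" column plus completion_1 ... completion_N (two or more).
rows = [
    {
        "prompt": "Summarize the plot of Hamlet in one sentence.",
        "completion_1": "A Danish prince avenges his father's murder.",
        "completion_2": "Hamlet is a play by Shakespeare.",
        "completion_3": "A story about a prince seeking revenge.",
    },
]

with open("pregenerated_completions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["prompt", "completion_1", "completion_2", "completion_3"]
    )
    writer.writeheader()
    writer.writerows(rows)
```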

To evaluate LLM applications:

  1. Ensure the LLM application is deployed.

  2. Prepare a dataset in a CSV file with one column: prompt.
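
This file is even simpler. A minimal sketch, with made-up prompts and a hypothetical file name:

```python
import csv

# Hypothetical prompts; the file needs only a single "prompt" column.
prompts = [
    "Explain what RLHF is in two sentences.",
    "Write a haiku about code review.",
]

with open("prompts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt"])
    writer.writerows([p] for p in prompts)
```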

Getting started

To begin using the Ranking evaluation:

  1. Navigate to the Evaluation page under the LLM Labs menu.

  2. Click the Create evaluation project button and choose the Ranking project type.

  3. Set up your project. Choose what you want to evaluate with Ranking:

    1. Evaluate pre-generated completions

      1. Upload the dataset in a CSV file with several columns: prompt and completion_1, completion_2, completion_3, and so forth up to completion_xx.

    2. Evaluate LLM applications

      1. Upload the dataset in a CSV file with one column: prompt.

      2. Select the LLM application that you want to use to generate the completions. If you can’t find your application in the list, go to the playground where the application was created and deploy it. You can only evaluate deployed LLM applications.

  4. Click the Create evaluation project button.

Evaluate the completions

In a Ranking evaluation project, we support two user roles:

  1. Labeler: As a labeler, you rank several completions for each prompt from best to worst. The labeler can be a subject-matter expert who evaluates your LLM application's completions.

  2. Reviewer: As a reviewer, you will need to review the labelers’ work.

Labeler

Each prompt comes with a minimum of two completions. As a labeler, rank the completions from best to worst by dragging them into order, then submit your answer to move to the next prompt.
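
These rankings are what turn the project into RLHF training data. As an illustrative sketch (a common convention, not a Datasaur-specific format), a best-to-worst ranking of N completions can be expanded into N * (N - 1) / 2 chosen/rejected pairs, the shape typically used to train a reward model:

```python
from itertools import combinations

def ranking_to_preference_pairs(prompt: str, ranked: list[str]) -> list[dict]:
    """Expand a best-to-worst ranking into pairwise preference records.

    Every completion is preferred over each completion ranked below it,
    so N completions yield N * (N - 1) / 2 (chosen, rejected) pairs.
    """
    return [
        {"prompt": prompt, "chosen": better, "rejected": worse}
        for better, worse in combinations(ranked, 2)
    ]

# Hypothetical example: a labeler ranked three completions best to worst.
pairs = ranking_to_preference_pairs(
    "Summarize the plot of Hamlet in one sentence.",
    [
        "A Danish prince avenges his father's murder.",  # ranked best
        "A story about a prince seeking revenge.",
        "Hamlet is a play by Shakespeare.",              # ranked worst
    ],
)
print(len(pairs))  # 3 pairs from 3 completions
```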

Reviewer

As a reviewer, you review the labelers’ answers. When rankings from different labelers conflict, you must choose the most accurate one. Alternatively, you can provide your own ranking.

Assignments

By default, when you create a Ranking evaluation in LLM Labs, the project creator is assigned both Labeler and Reviewer roles. You can update the Ranking evaluation roles by following these steps:

  1. Open your Ranking evaluation project.

  2. Switch to Reviewer mode.

  3. Open the project settings from File > Settings.

  4. Navigate to the Assignment menu.

  5. In the Assignment section, you can change roles and add new members to your project. You can also configure conflict resolution and dynamic review assignment.
