Automated Evaluation
Overview
The automated evaluation feature addresses the challenges of evaluating completions by hand: manual evaluation is time-consuming, labor-intensive, and prone to human error, which leads to inconsistent results. Automating the process saves time, improves accuracy, and keeps evaluations consistent.
Prerequisites
To use automated evaluation, you need to complete some prerequisites based on what you want to evaluate:
To evaluate an existing model in Datasaur:
Ensure the model is deployed from Sandbox.
Prepare a ground truth dataset in a CSV file with two columns:
prompt
andexpected completion
.
To evaluate pre-generated completions (CSV file):
Prepare a ground truth dataset in a CSV file with three columns: prompt, completion, and expected completion. Sample files for both formats are shown after this list.
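For reference, minimal versions of the two files could look like the samples below. The rows are illustrative placeholders; only the column headers need to match.

Two-column file (evaluating a model from Sandbox):

prompt,expected completion
What is the capital of France?,Paris
Who wrote Pride and Prejudice?,Jane Austen

Three-column file (evaluating pre-generated completions):

prompt,completion,expected completion
What is the capital of France?,The capital of France is Paris.,Paris
Who wrote Pride and Prejudice?,It was written by Jane Austen.,Jane Austen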
Getting started
Navigate to the Evaluation page under the LLM Labs menu.
Click the Create evaluation project button, select the Automated evaluation project type, then click Continue.
Configure your evaluation project. Automated evaluation can evaluate two types of input:
Model from Sandbox
Upload the ground truth dataset in a CSV format containing two columns:
prompt
andexpected completion
.
Pre-generated completions
Upload the pre-generated completions combined with the ground truth dataset in CSV format with three columns: prompt, completion, and expected completion (a short script for assembling this file is sketched after these steps).
Manage evaluation: Select the metric, provider, and evaluator model you want to use.
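If your completions come from your own pipeline, the three-column file can be assembled programmatically. A minimal sketch using pandas, assuming your prompts, model outputs, and reference answers are already in Python lists (the file name is arbitrary):

import pandas as pd

# Illustrative data only; replace with your own prompts, model outputs,
# and reference answers.
prompts = ["What is the capital of France?", "Who wrote Pride and Prejudice?"]
completions = ["The capital of France is Paris.", "It was written by Jane Austen."]
expected = ["Paris", "Jane Austen"]

# Column names must match the required format: prompt, completion, expected completion.
df = pd.DataFrame({
    "prompt": prompts,
    "completion": completions,
    "expected completion": expected,
})

# Write the upload-ready CSV.
df.to_csv("pre_generated_completions.csv", index=False)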
Analyze the evaluation results
After the evaluation process is completed, you can analyze the results:
For models:
Generation cost and processing time: View the total cost and time taken for generating completions.
Average score: See the overall performance score given by the evaluator.
Detailed results: For each prompt, you can examine:
The quality of the generated completion
Processing time
Individual score
For pre-generated completions:
Average score: See the overall performance score given by the evaluator.
Detailed results: For each prompt, you can examine:
The quality of the pre-generated completion
Individual score
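If you export the per-prompt results for further analysis, the aggregate figures can be reproduced offline. A minimal sketch, assuming a hypothetical results.csv export with score and processing_time columns (the actual export layout may differ):

import pandas as pd

# Hypothetical per-prompt export; real column names in LLM Labs may differ.
results = pd.read_csv("results.csv")

average_score = results["score"].mean()        # corresponds to the Average score in the UI
total_time = results["processing_time"].sum()  # total time spent generating completions

print(f"Average score: {average_score:.2f}")
print(f"Total processing time: {total_time:.1f}s")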
Evaluators
Automated evaluation supports various industry-standard evaluators to provide you with comprehensive insights into your model's performance. Each evaluator comes with a set of specific metrics tailored to different aspects of LLM evaluation.
LangChain
Answer Correctness: Measures the accuracy of the LLM's response compared to the ground truth.
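As a point of reference, a comparable correctness check can be run directly with LangChain's labeled criteria evaluator. A minimal sketch, assuming an OpenAI API key is configured for the judge model (the setup inside LLM Labs may differ):

from langchain.evaluation import load_evaluator

# "labeled_criteria" grades a prediction against a reference answer;
# here the criterion is correctness. It uses an LLM judge, so an API key
# (e.g. OPENAI_API_KEY) must be available.
evaluator = load_evaluator("labeled_criteria", criteria="correctness")

result = evaluator.evaluate_strings(
    input="What is the capital of France?",
    prediction="The capital of France is Paris.",
    reference="Paris",
)
print(result["score"], result["reasoning"])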
Ragas
Answer Correctness: Measures the accuracy of the LLM's response compared to the ground truth.
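The equivalent check with the Ragas library, as a minimal sketch assuming Ragas 0.1-style column names (question, answer, ground_truth) and an OpenAI API key for the judge model:

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness

# One row per prompt: the question, the model's answer, and the reference answer.
data = Dataset.from_dict({
    "question": ["What is the capital of France?"],
    "answer": ["The capital of France is Paris."],
    "ground_truth": ["Paris"],
})

# answer_correctness compares the answer to the ground truth using an LLM judge.
result = evaluate(data, metrics=[answer_correctness])
print(result)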
Deepeval
Answer Relevance: Evaluates how relevant the LLM's responses are to the given questions.
Bias: Assesses the presence of bias in the LLM's outputs based on predefined criteria.
Toxicity: Detects and quantifies toxic language or harmful content in the LLM's responses.
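A minimal sketch of these three checks with the deepeval library, assuming the default OpenAI judge is configured; the example prompt and output are illustrative:

from deepeval.metrics import AnswerRelevancyMetric, BiasMetric, ToxicityMetric
from deepeval.test_case import LLMTestCase

# One test case: the prompt and the completion to be judged.
test_case = LLMTestCase(
    input="What is the capital of France?",
    actual_output="The capital of France is Paris.",
)

# Each metric scores the case independently with an LLM judge
# (an API key such as OPENAI_API_KEY must be configured).
for metric in (AnswerRelevancyMetric(), BiasMetric(), ToxicityMetric()):
    metric.measure(test_case)
    print(type(metric).__name__, metric.score)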