LLM Labs

LLM Labs is a platform designed for LLM experimentation. It allows you to integrate your knowledge base and deploy LLM applications easily.

Getting Started

Navigate to Datasaur App and log in with your credentials. Once logged in, go to the Menu and select 'LLM Labs'.

Playground

Creating your LLM Application

  1. Creating a New Application: Click on “Create new application” to begin crafting your LLM application.

  1. Design your Prompt Template: Utilize the interface to build your prompt template. You can add more templates by clicking “Add prompt template”.

  1. Duplicate for Efficiency: Use the “Duplicate prompt template” feature for quicker prompt iterations.

  1. Model Selection and Comparison: Change the LLM model in Advanced Configuration per prompt template to compare results. Currently, OpenAI is the only available LLM provider, with more to come.

  1. Testing Your Prompts: After crafting your prompt, use “Add prompt” to test its effectiveness.

  1. Bulk Prompt Upload: Use the upload feature (.CSV file) for bulk prompts. Format your file with each row representing a prompt and each column header as a variable. An example of the file format is provided below.

  1. Viewing Results: Click “Run Prompt” to see the outcomes. Monitor the processing time for each prompt and view the average times for each prompt template.

  1. Hyperparameter Adjustment: Fine-tune your application by adjusting hyperparameters in Advanced Configuration per template.

Tip: You can expand the prompt template for more crafting space.
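For the bulk prompt upload step above, a minimal sketch of building a compatible .CSV file is shown below; the variable names (`topic`, `audience`) are illustrative assumptions, not required column names:

```python
# Sketch of a bulk-upload CSV: each column header is a prompt variable,
# each row is one prompt. Variable names here are made-up examples.
import csv
import io

rows = [
    {"topic": "vector databases", "audience": "beginners"},
    {"topic": "prompt caching", "audience": "engineers"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["topic", "audience"])
writer.writeheader()
writer.writerows(rows)

# buf.getvalue() now holds the CSV text to save and upload.
print(buf.getvalue())
```

Each uploaded row then runs once through your prompt template, with the column values substituted for the matching variables.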

Deployment

  1. Choosing Your Template: After finalizing your application, go to the “Deployment” tab and select the prompt template for deployment.

  1. Deployment Success: Your LLM Application is now ready, accessible via cURL, Python, and TypeScript.
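As a sketch of the Python access path mentioned above, a deployed application reachable over HTTP could be called like this; the URL, header names, and payload fields are illustrative assumptions, not the exact Datasaur API (use the snippet shown on your Deployment page):

```python
# Hypothetical call to a deployed LLM application endpoint.
# API_URL, API_TOKEN, and the payload shape are assumptions for
# illustration -- copy the real values from your Deployment page.
import json
import urllib.request

API_URL = "https://example.com/llm-labs/applications/my-app/run"
API_TOKEN = "YOUR_API_TOKEN"

payload = {"variables": {"topic": "vector databases"}}
request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send the call and return
# the application's response; omitted here since the URL is fictional.
```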

Cost Prediction Calculation

Overview

The Cost Prediction for Inference feature aims to enhance user experience by providing transparency and predictability into the costs incurred during language model inference. Users can now estimate the financial implications of their inferencing activities within LLM Labs.

How to view the cost prediction

Currently, cost prediction is only supported for OpenAI models.

  1. Open your LLM Application.

  2. Write your prompt query, and the predicted cost from your prompt templates will be shown.

The cost prediction is calculated across all of your available prompt templates; the more prompt templates you have, the higher the predicted cost.
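The idea can be sketched as a sum over prompt templates: each template's estimated input tokens multiplied by its model's per-token price. The model names, rates, and token counts below are made-up placeholders, not actual OpenAI pricing:

```python
# Sketch of per-template cost prediction. Prices are illustrative
# placeholders (USD per 1K input tokens), not real OpenAI rates.
PRICE_PER_1K_INPUT_TOKENS = {
    "model-a": 0.0015,  # hypothetical cheap model
    "model-b": 0.0300,  # hypothetical expensive model
}

def predict_cost(templates):
    """templates: list of (model_name, estimated_input_tokens) pairs,
    one per prompt template. Returns (per-template breakdown, total)."""
    breakdown = [
        (model, tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS[model])
        for model, tokens in templates
    ]
    total = sum(cost for _, cost in breakdown)
    return breakdown, total

# Two templates -> two line items; adding templates raises the total.
breakdown, total = predict_cost([("model-a", 800), ("model-b", 1200)])
```

This also mirrors why the prediction grows with the number of templates: every template contributes its own line item to the total.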

View Prediction Details

To see the prediction details, click the cost prediction; a dialog showing the detailed cost will appear.

You can also break down and compare the cost prediction across your available prompt templates.
