Sandbox

Sandbox is a platform designed for your LLM experimentation. It allows you to integrate your knowledge base and deploy LLM applications easily.

Overview

Sandbox is a key feature within LLM Labs that provides a user-friendly environment specifically designed for LLM experimentation. It allows you to:

  • Connect your preferred LLM model: Integrate your choice of LLM models to explore their functionalities.

  • Create a dedicated sandbox: Set up a personalized workspace for your LLM experimentation.

  • Configure your sandbox (LLM application): This configuration defines your LLM application and involves elements such as the following (a rough sketch of how they fit together appears after this list):

    • Prompt Template: Craft a template that specifies the format for user prompts sent to your LLM model. This ensures consistency and clarity in user interactions.

    • Context (Knowledge base): Optionally, integrate a context knowledge base to provide additional background information to the LLM model, potentially improving its understanding and response accuracy.

    • Configuration: Fine-tune various parameters for the connected LLM model, such as temperature or token settings, to optimize its performance for your specific use case.

  • Run prompts: Test your LLM models with various prompts to evaluate their responses and refine your approach.
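
As a rough illustration of how a prompt template, optional context, and model configuration fit together, here is a minimal Python sketch. The template text, parameter values, and function names are assumptions for illustration only, not the LLM Labs API.

```python
# Illustrative sketch: a prompt template with placeholders, optional context,
# and a few model parameters assembled into a single, consistently formatted
# request. All names and values here are hypothetical.
PROMPT_TEMPLATE = (
    "You are a helpful assistant.\n"
    "Context:\n{context}\n\n"
    "User question: {question}"
)

config = {
    "model": "gpt-4o",    # the connected LLM model
    "temperature": 0.2,   # lower values give more deterministic answers
    "max_tokens": 512,    # cap on the length of the response
}

def build_prompt(question: str, context: str = "") -> str:
    """Fill the template so every request follows the same format."""
    return PROMPT_TEMPLATE.format(context=context or "(none)", question=question)

prompt = build_prompt(
    question="What is our refund policy?",
    context="Refunds are accepted within 30 days of purchase.",
)
print(prompt)
print(config)
```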

Sandbox offers several advantages for LLM enthusiasts and developers:

  • Reduced Risk: Experiment with different models without committing to deployment, minimizing potential risks associated with real-world use cases.

  • Enhanced Understanding: Gain deeper insights into individual LLM models and their capabilities through hands-on experimentation.

  • Optimized Configuration: Fine-tune model parameters within the sandbox to achieve the best possible results for your specific needs.

  • Streamlined Development: Test and refine your LLM applications in a controlled environment before deployment, ensuring optimal performance.

Getting Started with the Sandbox

Sandbox is designed for ease of use. Here's a quick guide to get you started:

Step 1: Create Your Sandbox

  • Click on Create new sandbox to establish your dedicated workspace.

  • You can assign a descriptive name to your sandbox for easy identification.

Step 2: Configure the sandbox application

  • Access the Configuration section within your sandbox.

  • You can adjust various parameters for the connected LLM model, such as temperature or token settings.

  • Experiment with different configurations to observe their impact on the model's responses (see the sketch below). Learn more about Models.
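
The sketch below shows one way to compare configurations programmatically: run the same prompt at several temperature values and inspect how the responses change. The run_prompt stub is hypothetical and stands in for whatever client you use to call your connected model.

```python
# Hypothetical sketch: comparing how parameter changes affect responses.
# `run_prompt` is a stand-in stub, not a real LLM Labs call.
def run_prompt(prompt: str, temperature: float, max_tokens: int) -> str:
    # Replace this stub with a call to your connected model.
    return f"(response generated with temperature={temperature}, max_tokens={max_tokens})"

prompt = "Summarize the benefits of sandbox experimentation in one sentence."

for temperature in (0.0, 0.7, 1.2):
    answer = run_prompt(prompt, temperature=temperature, max_tokens=128)
    print(f"temperature={temperature}: {answer}")
```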

Step 3: Run Prompts

  • Enter your desired prompt within the designated area.

  • This prompt can be a question, a task instruction, or any text input you want the LLM model to process.

  • Click Run all to trigger the model's response based on your prompt.

Deploying the LLM

Once you've crafted the perfect LLM configuration within the sandbox, you can seamlessly transition it into a real-world application.

This configured Sandbox environment is your LLM application. LLM Labs lets you deploy it easily, making it accessible via API for integration into your workflows.

You will need to create your API Key first to use the API integration.

To create the API Key, click the Create API Key button and enter the API Key name. Once the API Key is generated, you can use it for the API integration.

The deployment process is designed for simplicity. Here's how to deploy your LLM application:

  1. Navigate to the Deployment Page: Within the sandbox, locate the dedicated deployment section.

  2. Choose Your LLM Application: Select the specific LLM application (configured Sandbox) you want to deploy.

  3. Deploy and Access: Initiate the deployment process. Upon successful completion, you can access your deployed LLM application through code snippets for cURL, Python, and TypeScript, allowing you to integrate it into your development projects (see the example request below).
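
As a hedged example of what the API integration might look like from Python, the snippet below sends a prompt to a deployed application with an API key. The endpoint URL, header name, and payload shape are assumptions; use the exact request format shown on the Deployment page.

```python
# Hypothetical sketch of calling a deployed LLM application over HTTP.
# The URL, header, and payload below are placeholders, not the documented API.
import requests

API_KEY = "YOUR_API_KEY"  # created via the Create API Key button
ENDPOINT = "https://example.com/llm-labs/deployments/<deployment-id>/run"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize this contract in three bullet points."},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```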

Save to library

The Save to library feature allows you to save applications from the sandbox into a library. This enables you to reuse these applications across various features, such as evaluation, without needing to rebuild them from scratch.

The Save to Library feature can be used by users within the same workspace, meaning you can share your application with other team members in that workspace.

How to save application to library

To save your application to the library, click the Save to library option from the application's three-dots icon.

Once you’ve clicked the Save to library button, enter the name you want the application to be saved under in the library.

Click the Save application button once you’ve entered the name, and the application will be saved in the library.

Access the application from library in sandbox

  1. Navigate to the Sandbox page.

  2. Click the arrow button next to the Add application button.

  3. Choose the saved application that you wish to use, and click the Select application button.

Access the application from library in evaluation

  1. Navigate to the Evaluation page under the LLM Labs menu.

  2. Click the Create evaluation project button and choose Automated evaluation project type.

  3. Once you’re in the creation wizard, choose the application you saved to the library.

  4. Continue the creation wizard process for automated evaluation. Learn more on how to create an automated evaluation project.

Auto-generate instruction (BETA)

The Auto-generate instruction feature in LLM Labs is designed to help users generate better-structured system instructions in the Sandbox. This tool enables users to generate customized instructions for LLM applications, making it easier to define the assistant's behavior and optimize responses.

  • This feature is currently in BETA and uses OpenAI GPT-4o for instruction generation, providing robust assistance tailored to various LLM applications.

  • Users are encouraged to refine and double-check generated instructions to meet specific needs, as auto-generated content may sometimes require minor adjustments for optimal results.

Get started

To auto-generate instructions:

  1. Navigate to the Sandbox page under the LLM Labs menu, and open your Sandbox.

  2. Select your application and click the Auto-generate button.

  3. In the Auto-generate instruction dialog, describe what you’d like the model to achieve. For example, enter tasks like "Automate professional email drafting" or "Summarize contracts and legal terms".

You may also select from example instructions such as "Code writer, debugger, and optimizer" to guide the auto-generation process.

  4. Click the Generate instruction button to receive a structured system instruction based on your input. Review the generated instruction to ensure it matches your intended use case, making adjustments as necessary for clarity or specificity.

  5. Click the Use this instruction button to add the system instruction into your application.
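
Conceptually, the feature is similar to asking a model to draft a system instruction from a short task description. The sketch below illustrates that idea with the OpenAI Python client and GPT-4o; it is not the LLM Labs implementation, and the prompt wording is an assumption.

```python
# Conceptual sketch only: draft a system instruction from a short task
# description. Requires the `openai` package and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

task = "Automate professional email drafting"
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Write a clear, well-structured system instruction for an LLM "
                "assistant that performs the task the user describes."
            ),
        },
        {"role": "user", "content": task},
    ],
)
print(completion.choices[0].message.content)
```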

Anonymize PII

The Anonymize PII feature enhances the privacy and security of sensitive data during inferencing in the Sandbox. This feature is designed to automatically detect and mask Personally Identifiable Information (PII), ensuring that sensitive details remain confidential while retaining readability and context in the output.

How it works

When users submit prompts containing PII in the Sandbox, the Anonymize PII feature securely masks the sensitive information in a structured format. For instance, names, organizations, and locations are replaced with easily identifiable placeholders. These placeholders make it clear that data has been redacted without losing the context required for meaningful results.

Example:

  • Original input: “John Doe works at Datasaur Software and attends meetings with Innova Aspire Inc.”

  • Masked output: “[PERSON#0] works at [ORGANIZATION#0] and attends meetings with [ORGANIZATION#1]”

This ensures consistency, especially when the same PII subject appears multiple times in the same prompt. The system assigns unique identifiers for different entities, avoiding confusion caused by generic replacements.
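
The sketch below illustrates the placeholder scheme described above: each distinct entity receives its own indexed placeholder, and repeated mentions reuse the same one. It is a simplified stand-in; the entity list is hard-coded here, whereas the actual feature detects PII automatically.

```python
# Simplified illustration of indexed placeholders such as [PERSON#0].
# Entities are hard-coded for demonstration; real PII detection is automatic.
def mask_pii(text: str, entities: list[tuple[str, str]]) -> str:
    counters: dict[str, int] = {}   # next index per entity label
    assigned: dict[str, str] = {}   # entity value -> placeholder
    for value, label in entities:
        if value not in assigned:
            index = counters.get(label, 0)
            assigned[value] = f"[{label}#{index}]"
            counters[label] = index + 1
        text = text.replace(value, assigned[value])
    return text

original = ("John Doe works at Datasaur Software and attends meetings "
            "with Innova Aspire Inc.")
entities = [
    ("John Doe", "PERSON"),
    ("Datasaur Software", "ORGANIZATION"),
    ("Innova Aspire Inc.", "ORGANIZATION"),
]
print(mask_pii(original, entities))
# [PERSON#0] works at [ORGANIZATION#0] and attends meetings with [ORGANIZATION#1]
```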

How to use

  1. Navigate to the sandbox where you will configure your model application.

  2. Open the Hyperparameter configurations dialog by clicking the gear icon in the application.

  3. Look for the toggle labeled Anonymize PII (English only) within the configuration options, and simply enable the toggle to activate the feature.

  4. Run your prompt as usual, and all PII in the output will be automatically masked.

Cost Prediction Calculation

Overview

The Cost Prediction for Inference feature aims to enhance user experience by providing transparency and predictability around the costs incurred during language model inference. Users can estimate the financial implications of their inferencing activities within LLM Labs.

How to see the cost prediction?

  1. Open your sandbox.

  2. Write your prompt query, and the predicted cost from your prompt templates will be shown.

The cost prediction is calculated based on your available applications; the more applications you have, the higher the predicted cost.
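
The sketch below shows one plausible way such an estimate could be computed: prompt tokens times an input price plus expected output tokens times an output price, summed over every application. The per-token prices and the characters-per-token heuristic are placeholders, not actual rates.

```python
# Rough, assumed sketch of a per-application cost estimate.
# Prices and the 4-characters-per-token heuristic are placeholders.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic; real tokenizers differ

applications = [
    {"name": "support-bot", "price_in": 2.50e-6, "price_out": 10.00e-6},
    {"name": "summarizer",  "price_in": 0.15e-6, "price_out": 0.60e-6},
]

prompt = "Summarize the attached contract and list any unusual clauses."
expected_output_tokens = 300

total = 0.0
for app in applications:
    cost = (estimate_tokens(prompt) * app["price_in"]
            + expected_output_tokens * app["price_out"])
    total += cost
    print(f"{app['name']}: ${cost:.6f}")
print(f"Total predicted cost: ${total:.6f}")
```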

View Prediction Details

To see the prediction details, click the cost prediction. A dialog will show the detailed cost breakdown.

You can also break down and compare the cost prediction across your available applications.
