Sandbox
Sandbox is a platform designed for your LLM experimentation. It allows you to integrate your knowledge base and deploy models easily.
Overview
Sandbox, a key feature within LLM Labs, provides a user-friendly environment specifically designed for LLM experimentation. It allows you to:
Connect your preferred base models: Integrate your choice of base models to explore their functionalities.
Create a dedicated sandbox: Set up a personalized workspace for your LLM experimentation.
Configure your model: Define how your model behaves by setting up elements such as:
Instruction: Craft an instruction that specifies the format for user prompts sent to your model. This ensures consistency and clarity in user interactions.
Context (knowledge base): Optionally, integrate a context knowledge base to provide additional background information to the model, potentially improving its understanding and response accuracy.
Configuration: Fine-tune various parameters for the connected models, such as temperature or token settings, to optimize their performance for your specific use case.
Run prompts: Test your models with various prompts to evaluate their responses and refine your approach.
Sandbox offers several advantages for LLM enthusiasts and developers:
Reduced risk: Experiment with different models without committing to deployment, minimizing potential risks associated with real-world use cases.
Enhanced understanding: Gain deeper insights into individual models and their capabilities through hands-on experimentation.
Optimized configuration: Fine-tune model parameters within the sandbox to achieve the best possible results for your specific needs.
Streamlined development: Test and refine your models in a controlled environment before deployment, ensuring optimal performance.
Getting started with the Sandbox
Sandbox is designed for ease of use. Here's a quick guide to get you started:
Step 1: Create your Sandbox

Click Create new sandbox.
You can rename your sandbox to make it easier to identify.
Step 2: Configure the model

In the model you want to configure, click Change model settings from the More menu (three dots).
You can adjust various parameters for your models, such as temperature or token settings.
Experiment with different configurations to observe their impact on the base model's responses. Learn more about base models.
Step 3: Run prompts

Enter your desired prompts within the designated area.
This prompt can be a question, a task instruction, or any text input you want the model to process.
Click Run selected to trigger the models to generate responses to your prompts.
Deploy models
Save to library
The Save to library feature allows you to save models from the sandbox. This enables you to reuse these models across various features, such as evaluation, without needing to rebuild them from scratch.
How to save a model to the library
To save your model, click the Save to library option from the model's More menu (three dots).

After clicking Save to library, give your model a descriptive name, and click Save model to save it to the library.

Access models from the library in the Sandbox
In the Models section, click Select from library from the More menu (three dots).
Choose the saved model that you wish to use, and click Select model.
Access the saved models in Evaluation
Navigate to the Evaluation page under the LLM Labs menu.
Click Create evaluation project and choose Automated evaluation project type.
In the Model dropdown, select the saved models you want to evaluate.
Continue the creation process for automated evaluation. Learn more about how to create an automated evaluation project.
Add prompts from dataset
In LLM Labs, you can efficiently test your models against multiple inputs by importing prompts directly from an existing dataset.
How to add prompts from a dataset
In the Prompts section, click Add prompts from dataset from the More menu (three dots).
After selecting this option, you will be able to choose a dataset and map its columns to be used as prompts.
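Conceptually, the mapping treats one column of the dataset as a list of prompts, with each row becoming a separate prompt. The sketch below illustrates that idea only; the file name customer_questions.csv and the column name question are hypothetical, and the actual import is handled entirely in the LLM Labs interface.

import csv

# Hypothetical dataset file and column name, used only for illustration.
DATASET_PATH = "customer_questions.csv"
PROMPT_COLUMN = "question"

def load_prompts(path: str, column: str) -> list[str]:
    """Read one column from a CSV dataset and treat each non-empty row as a prompt."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        return [row[column].strip() for row in reader if row.get(column)]

prompts = load_prompts(DATASET_PATH, PROMPT_COLUMN)
for prompt in prompts:
    print(prompt)  # each row becomes one prompt to run in the sandbox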
Model configuration

You can configure your models with your desired settings. The parameters you can adjust based on your needs include the following (a short sketch after this list illustrates how temperature and Top P shape token selection):
Temperature: This parameter controls the randomness of the generated text. A higher temperature will result in more diverse and creative responses, while a lower temperature will produce more focused and predictable responses.
Top P: This parameter, also known as nucleus sampling, controls the cumulative probability threshold for token selection. A lower Top P value restricts sampling to the most likely tokens, producing more focused and predictable responses, while a higher Top P value allows for more diverse and unexpected responses.
Maximum output tokens: This parameter sets the maximum number of tokens that will be generated in the response. A higher maximum length will allow for longer and more detailed responses, while a lower maximum length will result in shorter and more concise responses.
Maximum knowledge base tokens: This parameter defines the upper limit on the number of tokens that can be stored in the knowledge base. This helps keep storage and retrieval efficient by ensuring the knowledge base doesn't become overloaded with data, which can slow down queries and degrade performance.
Similarity score: This parameter sets the minimum similarity required for knowledge base content to be retrieved and included as context. A higher similarity score limits retrieval to content that closely matches the prompt, while a lower score allows more loosely related content to be included.
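To make the sampling parameters above more concrete, here is a minimal sketch of how temperature and Top P typically shape next-token selection. The scores and probabilities are toy values used only for illustration; the actual sampling happens inside the connected model provider, not in code you write.

import math

# Toy next-token scores (logits), for illustration only.
logits = {"blue": 2.0, "green": 1.0, "red": 0.5, "purple": -1.0}

def softmax_with_temperature(scores, temperature):
    """Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    total = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / total for tok, v in scaled.items()}

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = softmax_with_temperature(logits, temperature=0.7)
print(top_p_filter(probs, top_p=0.9))  # a lower Top P keeps fewer, more likely tokens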
Advanced hyperparameters
Advanced hyperparameters in the LLM Labs Sandbox let you further customize model behavior by adding extra parameters in JSON format. This gives you greater control over the underlying model's response generation and can help optimize performance for specific use cases.
Get started
In the model you want to configure, click Change model settings from the More menu (three dots).
Scroll down to the section labeled Advanced hyperparameters.
Enter your desired hyperparameters in valid JSON format.
JSON format example
{ "stream": true }
Anonymize PII
The Anonymize PII feature enhances the privacy and security of sensitive data during inferencing in the Sandbox. This feature is designed to automatically detect and mask Personally Identifiable Information (PII), ensuring that sensitive details remain confidential while retaining readability and context in the output.
How it works
When users submit prompts containing PII in the Sandbox, the Anonymize PII feature securely masks the sensitive information in a structured format. For instance, names, organizations, and locations are replaced with easily identifiable placeholders. These placeholders make it clear that data has been redacted without losing the context required for meaningful results.
Example:
Original input: “John Doe works at Datasaur Software and attends meetings with Innova Aspire Inc.”
Masked output: “[PERSON#0] works at [ORGANIZATION#0] and attends meetings with [ORGANIZATION#1]”
This ensures consistency, especially when the same PII subject appears multiple times in the same prompt. The system assigns unique identifiers for different entities, avoiding confusion caused by generic replacements.
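The placeholder scheme described above can be pictured as a lookup that assigns each detected entity an index per entity type and reuses that index whenever the same entity reappears. The sketch below illustrates the idea with a hypothetical, pre-detected list of entities; the actual PII detection and masking in LLM Labs are handled by the platform.

# Hypothetical pre-detected entities for the example sentence; detection itself is done by the platform.
detected_entities = [
    ("John Doe", "PERSON"),
    ("Datasaur Software", "ORGANIZATION"),
    ("Innova Aspire Inc.", "ORGANIZATION"),
]

def anonymize(text, entities):
    """Replace each entity with an indexed placeholder, reusing the index for repeats."""
    placeholders = {}
    counters = {}
    for value, label in entities:
        if value not in placeholders:
            index = counters.get(label, 0)
            placeholders[value] = f"[{label}#{index}]"
            counters[label] = index + 1
        text = text.replace(value, placeholders[value])
    return text

original = "John Doe works at Datasaur Software and attends meetings with Innova Aspire Inc."
print(anonymize(original, detected_entities))
# [PERSON#0] works at [ORGANIZATION#0] and attends meetings with [ORGANIZATION#1]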
How to use
In the model you want to configure, click Change model settings from the More menu (three dots).
Find the toggle labeled Anonymize PII (English only) within the configuration options, and enable it to activate the feature.
Run your prompt as usual, and all PII in the output will be automatically masked.
Auto-generate instruction (BETA)
The Auto-generate instruction feature in LLM Labs is designed to help you generate better-structured system instructions in the Sandbox. It enables you to create customized instructions for your models, making it easier to define the assistant's behavior and optimize responses.
Get started
To auto-generate instructions:
Click the Help me write button in a model.
In the Auto-generate instruction dialog, describe what you’d like the model to achieve.
Click the Generate instruction button to receive a structured system instruction based on your input. Review the generated instruction to ensure it matches your intended use case.
Click the Use this instruction button to add the system instruction into your model.
Cost prediction calculation
Overview
Cost prediction provides transparency and predictability: you can see how much your inference activities in LLM Labs will cost before you run them.
How to see the cost prediction
Open your sandbox.
Write your prompts, and the predicted cost for each prompt will appear.
View prediction details
To see the prediction details, click the View predicted cost option from the More menu (three dots) of each prompt.

A dialog will appear showing the detailed cost. You can also break down and compare the cost prediction across your available models.
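As a rough mental model, a predicted cost depends on the number of input tokens, the maximum output tokens, and each model's per-token prices. The sketch below uses hypothetical model names and prices and a crude token estimate purely to illustrate how such a per-model breakdown could look; it is not the exact formula LLM Labs uses.

# Hypothetical per-1K-token prices (USD), used only to illustrate the breakdown.
MODEL_PRICES = {
    "model-a": {"input": 0.0005, "output": 0.0015},
    "model-b": {"input": 0.0030, "output": 0.0060},
}

def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token); real tokenizers differ per model."""
    return max(1, len(text) // 4)

def predict_cost(prompt: str, max_output_tokens: int) -> dict:
    """Break down a predicted cost per model: input cost plus worst-case output cost."""
    input_tokens = estimate_tokens(prompt)
    breakdown = {}
    for model, price in MODEL_PRICES.items():
        cost = (input_tokens / 1000) * price["input"] + (max_output_tokens / 1000) * price["output"]
        breakdown[model] = round(cost, 6)
    return breakdown

print(predict_cost("Summarize our Q3 sales report in three bullet points.", max_output_tokens=512))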
