Fine-tuning

Overview

LLMs are typically trained for general-purpose use cases. Fine-tuning allows you to train a model further so that it gives more accurate answers for a specific domain or use case. The process involves providing the model with a dataset of example inputs and outputs from that domain. LLM Labs simplifies this by providing a user-friendly way to fine-tune and deploy open-source models, allowing you to tailor LLMs to your exact needs.
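As a purely illustrative example, the pairs below show the kind of domain-specific input/output examples such a dataset contains; the domain and the wording are made up for this sketch.

```python
# Hypothetical domain-specific training examples (content is illustrative only).
training_pairs = [
    {
        "prompt": "Summarize this support ticket: 'The invoice total does not match my order.'",
        "expected completion": "Billing discrepancy between invoice and order; route to finance.",
    },
    {
        "prompt": "Summarize this support ticket: 'I cannot reset my password from the mobile app.'",
        "expected completion": "Password reset failure on mobile; route to the authentication team.",
    },
]

for pair in training_pairs:
    print(pair["prompt"], "->", pair["expected completion"])
```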

Create fine-tuned models

This section guides you through the process of fine-tuning your models in LLM Labs.

Step 1: Set up model

  1. Navigate to the Models page.

  2. On the Available tab, go to the Fine-tuned LLMs section, and click Create fine-tuned model.

  3. Set up your fine-tuning job.

    1. Name your fine-tuned model.

    2. Select a base model that you want to fine-tune. You can select either:

      1. Pre-trained LLMs. Currently, we support:

        1. Amazon Nova Micro

        2. Amazon Nova Lite

        3. Amazon Nova Pro

        4. Amazon Titan Text G1 - Express

        5. Amazon Titan Text G1 - Lite

        6. Cohere Command

        7. Cohere Command Light

        8. Meta Llama 3.1 8B

        9. Meta Llama 3.1 70B

      2. Existing fine-tuned models

    3. Choose a dataset. You can either upload a .csv with two columns, prompt and expected completion, or select an existing dataset from the library (a sketch of one way to prepare such a file follows this list). For the validation dataset, you have three options:

      1. Split from selected dataset: Datasaur will split the selected dataset and use part of it as validation data. You will need to set the validation size as a percentage.

      2. Use new dataset: You will need to add a new dataset to use as validation.

      3. None: Choose this option if you don't want to add a validation dataset. Please note that if you select Cohere Command or Cohere Command Light as the base model, you are required to have validation data.
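If you prepare the .csv yourself, the sketch below shows one way to verify the two required columns and carve out a percentage-based validation split before uploading. The file names and the 10% split are assumptions for illustration; within LLM Labs, the Split from selected dataset option performs the split for you.

```python
import csv
import random

# Read a dataset with the two required columns: prompt, expected completion.
with open("dataset.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    assert reader.fieldnames == ["prompt", "expected completion"], reader.fieldnames
    rows = list(reader)

# Illustrative 10% validation split, mirroring the percentage-based option above.
random.seed(42)
random.shuffle(rows)
n_validation = max(1, int(len(rows) * 0.10))
validation_rows, train_rows = rows[:n_validation], rows[n_validation:]

def write_csv(path, data):
    """Write rows back out as a two-column CSV: prompt, expected completion."""
    with open(path, "w", newline="", encoding="utf-8") as out:
        writer = csv.DictWriter(out, fieldnames=["prompt", "expected completion"])
        writer.writeheader()
        writer.writerows(data)

write_csv("train.csv", train_rows)
write_csv("validation.csv", validation_rows)
print(f"{len(train_rows)} training rows, {len(validation_rows)} validation rows")
```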

Step 2: Adjust hyperparameters

Next, you will need to configure the hyperparameters for your fine-tuning job.


The fundamental hyperparameters are:

| Models | Hyperparameters |
| --- | --- |
| Amazon Titan Text G1 - Express | Epochs, Learning rate |
| Amazon Titan Text G1 - Lite | Epochs, Learning rate |
| Cohere Command | Epochs, Learning rate |
| Cohere Command Light | Epochs, Learning rate |
| Meta Llama 2 13B | Epochs, Learning rate |
| Meta Llama 2 70B | Epochs, Learning rate |

In addition to the fundamental hyperparameters, there are advanced hyperparameters with preset default values. These are always applied, but you can adjust them to tune the training further if desired. The advanced hyperparameters are:

| Models | Hyperparameters |
| --- | --- |
| Amazon Titan Text G1 - Express | Batch size, Learning rate warmup steps |
| Amazon Titan Text G1 - Lite | Batch size, Learning rate warmup steps |
| Cohere Command | Batch size, Early stopping threshold, Early stopping patience |
| Cohere Command Light | Batch size, Early stopping threshold, Early stopping patience |
| Meta Llama 2 13B | Batch size |
| Meta Llama 2 70B | Batch size |
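Purely as an illustration of how these settings fit together, the snippet below collects one possible combination into a plain dictionary. The parameter names mirror the tables above, but the values, and the idea of expressing the configuration in code, are assumptions for this sketch rather than a documented LLM Labs API.

```python
# Example hyperparameter choices for a hypothetical fine-tuning job.
# Field names follow the tables above; the values are illustrative only.
fine_tuning_config = {
    "base_model": "Amazon Titan Text G1 - Express",
    # Fundamental hyperparameters
    "epochs": 3,
    "learning_rate": 1e-5,
    # Advanced hyperparameters (preset defaults that can be overridden)
    "batch_size": 8,
    "learning_rate_warmup_steps": 50,
}

for name, value in fine_tuning_config.items():
    print(f"{name}: {value}")
```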

Step 3: Review job

  1. The last step is to review your fine-tuning job before you start the process.

  2. You can also view the predicted cost by clicking the View total predicted cost button in the Costs section. It calculates and shows the total predicted cost of the fine-tuning process.

Please note that this is only a cost prediction. The final cost may be higher or lower, as each model has its own tokenizer (see the rough estimation sketch at the end of this step).

  3. Once you have reviewed the configuration, check the acknowledgement checkbox.

  4. Lastly, click Start fine-tuning job and the training process will start.

Training can take several hours to complete. Datasaur will notify you by email when the process is finished.

  5. Once the training process is complete, your model will be available to deploy.

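As noted above, the final cost depends on each model's own tokenizer. The sketch below shows one rough way to gauge a dataset's size in tokens before starting a job; the whitespace-based token heuristic, the per-token price, and the train.csv file (from the earlier dataset sketch) are illustrative assumptions, not how LLM Labs or the underlying providers actually tokenize or bill.

```python
import csv

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 1.3 tokens per whitespace-separated word.
    # Real tokenizers differ per model, so treat this as a ballpark only.
    return int(len(text.split()) * 1.3)

total_tokens = 0
with open("train.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        total_tokens += estimate_tokens(row["prompt"])
        total_tokens += estimate_tokens(row["expected completion"])

epochs = 3                    # example value from the hyperparameter step
price_per_1k_tokens = 0.008   # hypothetical price, for illustration only
predicted_cost = total_tokens * epochs / 1000 * price_per_1k_tokens
print(f"~{total_tokens} tokens; predicted training cost of roughly ${predicted_cost:.2f}")
```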

Model management

Model status

There are seven possible statuses for fine-tuned models.

  1. Training: The model is currently being trained on your dataset. This status indicates that the training process is in progress, and the model is learning from your data.

  2. Training failed: The model training process has failed due to an error. This status indicates that the training process was interrupted, and you may need to investigate and resolve the issue.

  3. Stopping training: The model training process is being stopped. This status indicates that someone has chosen to stop the training.

  4. Training stopped: The model training process has been stopped. This status indicates that the training was stopped successfully; training cannot be resumed once it has been stopped.

  5. Not deployed: The model has been trained but has not yet been deployed for use. You can deploy the model to use it in Sandbox.

  6. Deploying: The model is being deployed for use. This status indicates that the deployment process is in progress, and the model will soon be available for use in Sandbox.

  7. Deployed: The model has been successfully deployed. This status indicates that the model is now available for use in Sandbox, and you can start using it to generate predictions or responses.


Deploy models

To deploy a fine-tuned model:

  1. Click Deploy model to start the deployment.

  2. In the dialog that appears, specify the auto undeploy schedule.

  3. Click Deploy model in the dialog to confirm and the process will start.

  4. Once the process is finished, your model will be available for experimentation in the Sandbox. Learn more about Sandbox.


Undeploy models

  1. Click the more menu (three-dots) in the right corner of the model card and select Undeploy.

  2. Confirm the process by clicking Undeploy in the dialog that appears.

  3. Your model will be undeployed, and you will no longer be charged the hourly deployment cost.

View model details

To view the model details, click the more menu (three-dots) and select View details. The details of the fine-tuned model will be shown.


In this dialog, you can view the dataset, validation dataset, models used, hyperparameter configurations, the creator, and storage cost information.

Delete models

To delete a fine-tuned model, click the more menu (three-dots) and select Delete.

In the dialog that appears, check the acknowledgement checkbox, and confirm the deletion by clicking Delete model.

Use in Sandbox

Once a fine-tuned model is deployed, it will be available in the Sandbox for further experimentation and testing. This allows you to integrate and test the specialized model within your specific applications. Learn more about Sandbox.
