Fine-tuning
Fine-tuning pre-trained models allows you to tailor them to specific tasks, leading to enhanced performance and relevance in your LLM applications. LLM Labs leverages the capabilities of AWS Bedrock, a platform that facilitates the seamless fine-tuning and deployment of open-source models through a user-friendly interface.
This section guides you through the process of fine-tuning your models in LLM Labs.
Navigate to the Models page.
Open the My models tab, and then expand the Fine-tuned LLMs section. Click the Create fine-tuned model button.
Once the creation wizard opens, set up your project.
Name your fine-tuned model.
Select a base model that you want to fine-tune. You can select either:
Pre-trained LLMs. Currently, we support:
Amazon Titan Text G1 - Express
Amazon Titan Text G1 - Lite
Cohere Command
Cohere Command Light
Meta Llama 2 13B
Meta Llama 2 70B
Existing fine-tuned models
Choose a dataset. You can either upload a .csv file consisting of 2 columns, prompts and expected completions (a small example of this format follows the list below), or choose an existing dataset from the library. For the validation dataset, you have 3 options:
Split from selected dataset: Datasaur will split the uploaded dataset and use it for validation data. You will need to configure the validation size using a percentage.
Use new dataset: You will need to add a new dataset to use as validation.
None: Choose this option if you don't want to add a validation dataset.
Please note that if you select Cohere Command or Cohere Command Light as the base model, you are required to have validation data.
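For reference, here is a minimal sketch of preparing such a .csv with pandas. The file names, example rows, and 20% split size are hypothetical; in LLM Labs you only need to upload the .csv, and the Split from selected dataset option handles the validation split for you.

```python
import pandas as pd

# Hypothetical example rows; the two required columns are
# "prompts" and "expected completions".
rows = [
    {"prompts": "Summarize: LLM Labs supports fine-tuning base models.",
     "expected completions": "LLM Labs lets you fine-tune supported base models."},
    {"prompts": "Classify the sentiment: I love this product!",
     "expected completions": "positive"},
]
df = pd.DataFrame(rows)

# Optional: split off a validation set yourself (a real dataset would have
# many more rows); this mirrors the "Split from selected dataset" option
# with a 20% validation size.
validation = df.sample(frac=0.2, random_state=42)
training = df.drop(validation.index)

training.to_csv("training.csv", index=False)
validation.to_csv("validation.csv", index=False)
```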
Next, you will need to configure and adjust the hyperparameters for your fine-tuning project.
The fundamental hyperparameters for each base model are listed in the first table at the end of this page.
In addition to the fundamental hyperparameters, there are advanced hyperparameters with preset default values. These hyperparameters are always applied, but you can adjust them for further tuning if desired. The advanced hyperparameters for each base model are listed in the second table at the end of this page.
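LLM Labs configures all of these values for you through the wizard. Purely as an illustration of how such settings map onto AWS Bedrock, the sketch below shows a model-customization job created directly with boto3; the role ARN, S3 URIs, job names, and hyperparameter values are placeholders, and the exact hyperparameter keys differ per base model (see the tables at the end of this page).

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder values for illustration only; LLM Labs manages the
# equivalent configuration for you behind the scenes.
bedrock.create_model_customization_job(
    jobName="my-titan-finetune-job",
    customModelName="my-titan-finetune",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    validationDataConfig={"validators": [{"s3Uri": "s3://my-bucket/validation.jsonl"}]},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={
        # Fundamental hyperparameters
        "epochCount": "2",
        "learningRate": "0.00003",
        # Advanced hyperparameters (Amazon Titan models)
        "batchSize": "1",
        "learningRateWarmupSteps": "5",
    },
)
```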
The last step of the creation wizard is to review your fine-tuning job before you start the process. In this step, review all the configurations you have chosen; in particular, review the estimated cost of running the fine-tuning process.
You can also view the predicted cost by clicking the View total predicted cost button in the Costs section.
It calculates and shows the total predicted cost of starting the fine-tuning process.
Please note that this is just a cost prediction. The final cost may be higher or lower, as each model has its own tokenizer.
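As a rough, hypothetical illustration of how a prediction like this can be reasoned about, the sketch below multiplies an estimated token count by the number of epochs and a per-token training price. The tokens-per-word ratio and price here are made-up placeholders; the actual prediction depends on each model's tokenizer and on AWS Bedrock's current training prices, which is why the final cost can differ.

```python
# Hypothetical back-of-the-envelope cost estimate; real tokenizers and
# prices differ per model, so treat this as an illustration only.
def predicted_training_cost(word_count: int, epochs: int,
                            tokens_per_word: float = 1.3,
                            price_per_1k_tokens: float = 0.005) -> float:
    estimated_tokens = word_count * tokens_per_word
    return (estimated_tokens / 1000) * price_per_1k_tokens * epochs

# Example: a 200,000-word dataset trained for 2 epochs.
print(f"${predicted_training_cost(200_000, epochs=2):.2f}")
```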
Once you have reviewed the configuration, you will need to check the acknowledgement checkbox.
Lastly, click the Start fine-tuning job button to start the fine-tuning process.
The training process has now started.
The training process can take several hours to complete. Datasaur will notify you by email when it is finished.
Once the training process is complete, your model will be available in the My models section.
There are 7 possible statuses for the fine-tuned model.
Training: The model is currently being trained on your dataset. This status indicates that the training process is in progress, and the model is learning from your data.
Training failed: The model training process has failed due to an error. This status indicates that the training process was interrupted, and you may need to investigate and resolve the issue.
Stopping training: The model training process is being stopped. This status indicates that someone has chosen to stop the training.
Training stopped: The model training process has been stopped. This status indicates that the training process was stopped successfully; you cannot resume training once it has been stopped.
Not deployed: The model has been trained but has not yet been deployed for use. You can deploy the model to use it in Sandbox.
Deploying: The model is being deployed for use. This status indicates that the deployment process is in progress, and the model will soon be available for use in Sandbox.
Deployed: The model has been successfully deployed. This status indicates that the model is now available for use in Sandbox, and you can start using it to generate predictions or responses.
Learn more about deploying or undeploying models.
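These statuses are surfaced in the LLM Labs UI, so no action is needed on your side to track them. For background only, and assuming (this is an assumption, not stated on this page) that the training-related statuses mirror the underlying AWS Bedrock customization-job states, such a job could be polled directly with boto3 as sketched below; the job ARN is a placeholder.

```python
import time
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder job identifier; LLM Labs tracks and surfaces this state for you.
job_arn = "arn:aws:bedrock:us-east-1:123456789012:model-customization-job/example"

while True:
    status = bedrock.get_model_customization_job(jobIdentifier=job_arn)["status"]
    # Bedrock job states (InProgress, Completed, Failed, Stopping, Stopped)
    # loosely mirror the Training / Training failed / Stopping training /
    # Training stopped statuses described above.
    print(status)
    if status in ("Completed", "Failed", "Stopped"):
        break
    time.sleep(300)  # check again in five minutes
```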
To view the model details, click the three-dots button in the top right corner of the model card.
Choose the View details menu, and the details of the fine-tuned model will be shown.
Here you can view the dataset, validation dataset, models used, hyperparameter configurations, the creator, and storage cost information.
To delete a fine-tuned model, click the three-dots button in the top right corner of the model card.
Choose the Delete option.
Acknowledge that you understand the impact of deleting the fine-tuned model by checking the acknowledgement checkbox.
After that, confirm the delete process by clicking the Delete model button, and your model will be deleted.
To deploy a fine-tuned model:
Navigate to the My models tab, and expand the Fine-tuned LLMs section.
Click the Deploy model button on the model card.
Specify the auto undeploy schedule based on your needs.
Click the Deploy model button to start the deployment process.
Once deployment is complete, your model will be available for experimentation in Sandbox. Learn more about Sandbox.
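Everything above happens from the LLM Labs UI. For background only, the sketch below illustrates the kind of step such a deployment could correspond to on AWS Bedrock, where hosting a custom model behind provisioned throughput is billed hourly; treating deployment as provisioned throughput is an assumption of this sketch, and all names and ARNs are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder values; LLM Labs performs the equivalent step when you
# click Deploy model. Undeploying would correspond to deleting the
# provisioned throughput, which stops the hourly charge.
response = bedrock.create_provisioned_model_throughput(
    provisionedModelName="my-titan-finetune-deployment",
    modelId="arn:aws:bedrock:us-east-1:123456789012:custom-model/example",
    modelUnits=1,
)
print(response["provisionedModelArn"])
```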
To undeploy a model:
Navigate to the My models tab, and expand the Fine-tuned LLMs section.
Click the three-dots button in the top right corner of the model card.
Choose the Undeploy option.
Confirm the undeployment by clicking the Undeploy button, and your model will be undeployed. You will no longer be charged for the hourly cost.
Once the fine-tuned model is deployed, it will be available in the Sandbox for further experimentation and testing. This allows you to integrate and test the specialized model within your specific applications. Learn more about Sandbox.
Fundamental hyperparameters:

| Models | Hyperparameters |
| --- | --- |
| Amazon Titan Text G1 - Express | Epochs, Learning rate |
| Amazon Titan Text G1 - Lite | Epochs, Learning rate |
| Cohere Command | Epochs, Learning rate |
| Cohere Command Light | Epochs, Learning rate |
| Meta Llama 2 13B | Epochs, Learning rate |
| Meta Llama 2 70B | Epochs, Learning rate |

Advanced hyperparameters:

| Models | Hyperparameters |
| --- | --- |
| Amazon Titan Text G1 - Express | Batch size, Learning rate warmup steps |
| Amazon Titan Text G1 - Lite | Batch size, Learning rate warmup steps |
| Cohere Command | Batch size, Early stopping threshold, Early stopping patience |
| Cohere Command Light | Batch size, Early stopping threshold, Early stopping patience |
| Meta Llama 2 13B | Batch size |
| Meta Llama 2 70B | Batch size |