6.70.0

July 11, 2024

What's new 💡

Data Studio

  • Analytics Overview — Display the last-updated time for metrics on the Home tab.

  • Enhanced the relabeling behavior for rejected labels in Span Labeling projects in Reviewer mode when 'Enable checkboxes in labelbox' is checked.

  • External Object Storage — Enhanced protection for Google Cloud Storage by introducing a new security token field. Rest assured, your existing buckets will work as usual.

  • Improved the Grammar Checker extension to prevent cropped content from appearing.

  • Introduced a new option to configure Review Sampling in project settings, allowing users to activate it directly within a project.

  • Project Analytics (project details page) — Display the last-updated time for metrics on the Trends tab.

  • SCIM — Added support for the new Supervisor role.

LLM Labs

  • Added handling for Vertex AI model safety configuration attributes. Users are now prevented from submitting offensive, insensitive, or factually incorrect prompts.

  • Added a new banner to showcase Claude 3.5 integration with LLM Labs.

  • Added a new export feature in automated evaluation.

  • Improved the automated evaluation result detail dialog.

  • Improved the automated evaluation UI flow.

  • Improved the color coding of automated evaluation results on the automated evaluation details page.

  • Improved the custom delimiter detector to accept semicolon-delimited CSV files in automated evaluation.

  • Limited the number of prompts and applications in a single playground: each playground can now contain at most 5 applications and 20 prompts.

Bug fixes 🐞

Data Studio

  • Fixed an issue where bounding boxes remained gray after removing labels in Bounding Box Labeling projects.

  • Fixed an issue where the go-to-line navigation reset, causing users to start from the beginning of the row in Row Labeling projects.

LLM Labs

  • Fixed a bug where the button title did not transition correctly between the "Run all" and "Run selected" states.

  • Fixed an issue where costs were calculated inaccurately for evaluations using Direct Access LLMs from OpenAI.
