Model Context Protocol (MCP)

Overview

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide tools and context to LLMs. This enables LLMs to go beyond answering questions by allowing them to interact with external tools, retrieve relevant context on demand, and perform complex, multi-step tasks. MCP transforms LLMs from passive responders into active agents capable of tool use, in-application reasoning, and dynamic task execution. This enhances their ability to assist with workflows, automate decisions, and collaborate more effectively within real-world systems.

This document assumes you are already familiar with the concepts of Tool Calling and the Model Context Protocol. If you need more information about these topics, refer to the official Model Context Protocol documentation.

This document outlines LLM Labs' support for MCP servers and how to connect to them. You'll learn how to configure the server, set up secure communication, and enable the integration to begin managing tools, context, and workflows through a standardized interface.

Please note that we are not affiliated with any of the tools or MCP servers mentioned in this document. They are included for demonstration purposes only, and their use falls outside our terms of service. Use them at your own discretion.

LLM Labs' support for MCP servers

Supported Transports

LLM Labs can connect to remote MCP servers, that is, MCP servers that implement the following transports:

  • Streamable HTTP

  • HTTP with Server-Sent Events (SSE)

Support for MCP servers that use the stdio transport is achievable with the help of an additional tool that adapts a stdio MCP server into a remote MCP server. There are several open source tools that help with this conversion, such as Supergateway.
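
For example, a stdio MCP server can typically be exposed as a remote server with a single command. The sketch below follows Supergateway's documented usage; the wrapped server command and port are illustrative placeholders, so check the Supergateway README for the current flags:

# Expose a stdio MCP server over HTTP on port 8000.
# "uvx mcp-server-fetch" is an example server command; substitute your own.
npx -y supergateway --stdio "uvx mcp-server-fetch" --port 8000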

Supported authentication methods

Remote MCP servers implement different authentication methods, such as open access (no authentication), OAuth, and API key-based authentication through the Authorization header.

LLM Labs supports open access and API key-based authentication, with OAuth support planned for a future release.

Supported MCP Server Features

As described in the MCP specification, servers can expose three main features: Prompts, Resources, and Tools. LLM Labs currently supports only the Tools feature, as we believe it is the most broadly used.

Connecting your LLM Labs application to remote MCP servers

Configuring LLM Labs application

You can configure an LLM Labs application as an MCP client that connects to remote MCP servers. Follow the steps below:

  1. Go to the Sandbox page and open an existing sandbox or create a new one.

  2. Open the application you want to connect to a remote MCP server, or create a new application in the sandbox.

  3. Ensure that the selected LLM model supports the tool calling feature.

  4. Register the desired MCP servers via Advanced hyperparameters (the gear icon in the application).

    You can configure each server under datasaur.mcpServers.[mcp_server_name] with the following fields:

    • url (required): The URL of the MCP server.

    • headers (optional): Headers for the connection.

    Example 1: No authentication

    {
      /* ... other configuration ... */
      "datasaur": {
        "mcpServers": {
          "fetch": {
            "url": "<https://remote.mcpservers.org/fetch/mcp>"
          }
        }
      }
    }

    Example 2: With API key via authorization header

    {
      /* ... other configuration ... */
      "datasaur": {
        "mcpServers": {
          "stripe": {
            "url": "<https://mcp.stripe.com/>",
            "headers": {
              "Authorization": "Bearer TOKEN"
            }
          }
        }
      }
    }
  5. After configuring the application, you can run it against some prompts. The model will automatically attempt to call the configured tools while processing each prompt.

Consuming the MCP call outputs in your application

Our deployed applications conform to the OpenAI Chat Completions API specification, which does not natively support MCP call outputs. OpenAI supports MCP call outputs via its newer Responses API. We are working to support the Responses API specification for deployed applications, and it will be available soon.

While we work on Responses API support, we have also extended the OpenAI Chat Completions API's message object with an additional events field. Each event has the following components:

  • type: The type of event.

    There are three types of events for Tool Runner Calls:

    • tool_runner_call: Triggered when the model calls a tool. It includes the arguments passed to the tool.

    • tool_runner_call_result: Triggered when a tool runs successfully. It contains the tool's execution result.

    • tool_runner_call_error: Triggered when a tool fails during execution. It includes error information.

  • data: The event's payload.

  • content_at: The character index in the content where the event occurred.
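
For illustration, a message containing events might look like the sketch below. The shape follows the fields described above, but the tool name and payload values are hypothetical:

{
  /* hypothetical example; values are illustrative only */
  "role": "assistant",
  "content": "Here is a summary of the page you asked about...",
  "events": [
    {
      "type": "tool_runner_call",
      "data": { "name": "fetch", "arguments": { "url": "https://example.com" } },
      "content_at": 0
    },
    {
      "type": "tool_runner_call_result",
      "data": { "result": "...fetched page content..." },
      "content_at": 0
    }
  ]
}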

Alternative approach: connecting to MCP servers at runtime

While configuring servers directly in the application is the most common approach to MCP server integration, in some cases you may need to connect to MCP servers dynamically from your own system when calling the deployed application endpoint. In that case, you can send the MCP server configuration directly by specifying it as a tool, similar to OpenAI's Remote MCP interface, for example:

{
  "messages": [{ "role": "user", "content": "Create a payment link for me." }],
  "tools": [
    {
      "type": "mcp",
      "server_label": "stripe",
      "server_url": "<https://mcp.stripe.com/>",
      "headers": {
        "Authorization": "Bearer {STRIPE_API_KEY}"
      }
    }
  ]
}
  • Parameters:

    • server_label (required): The name of the MCP Server.

    • server_url (required): The URL of the MCP Server.

    • headers (optional): Headers for the connection.
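
As a concrete sketch, the request above can be sent to a deployed application endpoint with curl. The endpoint URL, path, and authentication header below are assumptions for illustration; substitute the actual values for your deployment:

# Hypothetical endpoint and credentials; replace with your deployment's values.
curl -X POST "https://your-deployment.example.com/v1/chat/completions" \
  -H "Authorization: Bearer $LLM_LABS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{ "role": "user", "content": "Create a payment link for me." }],
    "tools": [
      {
        "type": "mcp",
        "server_label": "stripe",
        "server_url": "https://mcp.stripe.com/",
        "headers": { "Authorization": "Bearer {STRIPE_API_KEY}" }
      }
    ]
  }'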

We hope this guide helps you get started with connecting MCP servers to LLM Labs. As we continue to expand support and improve integration capabilities, your feedback is always welcome. If you have any questions or run into issues, please don't hesitate to reach out to our team.
