We're excited to announce the launch of the Tinybird MCP Server: a remote, hosted MCP server that allows LLMs and AI agents to connect directly to your Tinybird workspaces. Now, you can instantly make your real-time data LLM-ready without any setup or infrastructure.
Get started now: Read the Tinybird MCP docs
Quick Start
For clients and IDEs, add the Tinybird MCP Server to your mcp.json config:
{
  "mcpServers": {
    "tinybird": {
      "url": "https://cloud.tinybird.co/mcp?token=TB_TOKEN&host=TB_HOST"
    }
  }
}
Or, use your preferred SDK or agent framework. Here's a quick example with Agno:
import asyncio

from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.mcp import MCPTools

async def main():
    async with MCPTools(
        url=f"https://mcp.tinybird.co?token={tinybird_api_key}&host={tinybird_host}"
    ) as mcp_tools:
        agent = Agent(
            model=Claude(id="claude-opus-4-20250514"),
            tools=[mcp_tools]
        )
        await agent.aprint_response("top pages visited in the last 7 days", stream=True)

asyncio.run(main())
Output:
┏━ Message ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                                                             ┃
┃ top pages visited in the last 7 days                                        ┃
┃                                                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┏━ Tool Calls ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                                                             ┃
┃ • explore_data(prompt=top pages visited in the last 7 days)                 ┃
┃                                                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┏━ Response (3.9s) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                                                             ┃
┃ The top 7 pages with the most pageviews are:                                ┃
┃                                                                             ┃
┃ 1. / - 24,001 visits                                                        ┃
┃ 2. /pricing - 19,323 visits                                                 ┃
┃ ...                                                                         ┃
┃                                                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
The MCP client will have access to resources secured by the supplied static token or JWT.
Note that Tinybird MCP currently supports only Streamable HTTP as the transport protocol. If your MCP client doesn't support it, you'll need to use the mcp-remote package as a bridge. You can find more implementation code snippets in the Tinybird MCP docs.
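For stdio-only clients, a bridge config typically follows the standard mcp-remote pattern; this is a sketch (substitute your real values for the TB_TOKEN and TB_HOST placeholders):

```json
{
  "mcpServers": {
    "tinybird": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://cloud.tinybird.co/mcp?token=TB_TOKEN&host=TB_HOST"
      ]
    }
  }
}
```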
Why MCP?
Tinybird has always been a platform for building APIs that integrate analytics into user-facing applications. While APIs are useful for software applications and services, they aren't always useful for LLMs.
Put simply, the Tinybird MCP Server exposes the resources in your Tinybird workspaces in a language that LLMs can understand. It allows agents to reason with your data, use tools to discover insights and make recommendations, and, when appropriate, call your existing API endpoints to fetch the data they need.
Understanding the Tinybird MCP Server
The Tinybird MCP Server is a remote, hosted MCP Server that gives AI agents access to your data. It is secured by the same static tokens or JWTs that you use to secure your API endpoints, so agents can only access the data sources and APIs within the token's scope.
Core tools
The Tinybird MCP Server exposes the following core tools:
- explore_data: An agentic tool that can perform the same advanced explorations that Tinybird uses internally in its Explorations feature. It can craft and optimize SQL, surface relevant fields, and guide agents to the best possible queries.
- text_to_sql: An agent that can inspect your workspace, understand the shape of your data, and interpret natural language questions within your data's context to generate SQL.
- execute_query: Run SQL queries against the Tinybird SQL API.
- list_endpoints / list_datasources: Discover API endpoints and data sources available to the provided token.
- list_service_datasources: Discover workspace and organization service data sources for health metrics and analysis.
Those first two tools, explore_data and text_to_sql, are particularly powerful: they are not hard-coded tools, but additional agents that can perform complex tasks. We believe that multi-agent communication is critical for a robust agentic analytics experience, and MCP provides a solid framework for agent-to-agent communication.
Endpoint tools
In addition, the Tinybird MCP Server exposes as a tool every deployed API endpoint available within the supplied token's scope.
These tools share names with and function similarly to the corresponding API endpoints. They accept parameters, return results in JSON format, and respect rate limits and authorization.
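To illustrate that equivalence, here's a hedged sketch of the direct HTTP call an endpoint tool mirrors; the endpoint name top_pages and its days parameter are hypothetical, and only stdlib calls are used.

```python
# Sketch: an MCP endpoint tool call like top_pages(days=7) maps to a
# plain GET against the deployed Tinybird API endpoint.
import json
import urllib.parse
import urllib.request

def endpoint_url(name: str, token: str, host: str = "https://api.tinybird.co", **params) -> str:
    """Build the URL for a deployed Tinybird API endpoint."""
    query = urllib.parse.urlencode({"token": token, **params})
    return f"{host}/v0/pipes/{name}.json?{query}"

def call_endpoint(name: str, token: str, **params) -> dict:
    """Fetch the endpoint's JSON result, the same payload the tool returns."""
    with urllib.request.urlopen(endpoint_url(name, token, **params)) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Requires a real token for an actual request; here we just build the URL.
    print(endpoint_url("top_pages", "TB_TOKEN", days=7))
```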
But why expose endpoints as tools when we already have text-to-SQL tools? A few reasons:
- LLMs can struggle to build valid, efficient SQL. Our findings suggest that even the most advanced LLMs struggle to produce SQL as well as a human. Their text-to-SQL abilities are subject to interpretation of the prompt, their ability to distinguish SQL dialects, and their understanding of the underlying data model. Even with the work we've done to build effective text-to-SQL prompts for our Explorations feature, having deterministic API endpoints can speed things up by avoiding syntax errors, auto-fix retries, and non-performant queries. (By the way, if you'd like to dig into the travails of LLM SQL generation, check out our LLM SQL Generation Benchmark).
- Tinybird APIs are documented in natural language. Tinybird APIs aren't just SQL queries. A .pipe file defining a Tinybird API Endpoint also includes a plaintext description, which can provide useful context for LLMs. Natural language descriptions can help agents identify the correct endpoint tools needed to accomplish their tasks efficiently.
- Simplifies prompting. As Tinybird API endpoints and the MCP endpoint tools are deterministic in their output, they simplify prompting by eliminating the need for detailed instructions required for LLMs to generate precise and valid SQL.
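For illustration, a minimal hypothetical .pipe file with the kind of plaintext description an agent can use to pick the right tool; the data source and field names here are invented:

```
DESCRIPTION >
    Returns the most-visited pages with pageview counts for the
    last N days. Useful for content performance questions.

NODE endpoint
SQL >
    %
    SELECT pathname, count() AS pageviews
    FROM analytics_hits
    WHERE timestamp >= now() - interval {{Int32(days, 7)}} day
    GROUP BY pathname
    ORDER BY pageviews DESC
    LIMIT {{Int32(lim, 10)}}

TYPE endpoint
```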
Security
When you give agents access to a database, you must be careful to avoid exposing private or sensitive data. In multi-tenant environments, you run the risk of cross-contamination: allowing agents to access one customer's data on behalf of another.
The Tinybird MCP Server prevents data leakage by using token-based authentication. In the same way that your Tinybird workspace resources are secured by tokens, the MCP Server and its tools are also subject to token-based authentication.
This provides the following security benefits:
- Built-in access control: Leverages scoped Tinybird static tokens and/or JWTs to limit MCP client data access.
- Zero data leakage: Tinybird MCP never exposes more than the underlying credentials allow.
- RBAC for your MCPs: Give precise access with row-level authorization for multi-tenant environments. This allows you to expose MCP endpoints directly to end users, which is ideal for building agentic experiences or custom data apps where each user only sees the data they have access to.
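As a sketch of the multi-tenant pattern, here's how a backend might mint a Tinybird-style JWT that pins a fixed tenant_id, using only the Python standard library. The payload fields follow Tinybird's documented JWT shape, but the resource name top_pages, the tenant value, and the secrets are placeholder assumptions; in production you'd normally use a maintained JWT library rather than hand-rolling HS256.

```python
# Sketch: mint an HS256 JWT whose scope fixes tenant_id, so an MCP client
# holding it can only see one tenant's rows.
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce header.payload.signature signed with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

payload = {
    "workspace_id": "your-workspace-id",      # placeholder
    "name": "mcp_user_42",
    "exp": int(time.time()) + 3600,           # 1-hour expiry
    "scopes": [
        {
            "type": "PIPES:READ",
            "resource": "top_pages",            # hypothetical endpoint
            "fixed_params": {"tenant_id": "42"} # row-level filter
        }
    ],
}

token = sign_jwt(payload, "your-admin-token")  # signed with the admin token
print(token)  # pass as ?token=... in the MCP URL
```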
Observability
The Tinybird MCP Server includes built-in observability. Every tool call is tracked in Tinybird service data sources using the from=mcp URL parameter, so you have detailed observability into how agents access and use your resources via the MCP server.
-- Returns the APIs most requested by MCP clients
SELECT
    pipe_id,
    count() AS count
FROM tinybird.pipe_stats
WHERE url LIKE '%from=mcp%'
GROUP BY pipe_id
ORDER BY count DESC
Works with Cursor, clients, and SDKs
Any agent, IDE, or client SDK can interact with Tinybird via the Streamable HTTP protocol. Configuration is simple and standardized. Check out the docs for configuration templates.
How to use the Tinybird MCP Server
The Tinybird MCP Server is useful in any scenario where you, your agents, or your users want to have natural language conversations with the data you store in Tinybird. With it, you can integrate real-time analytics into any agentic workflow or AI data app.
Some examples of how we're using the Tinybird MCP Server internally at Tinybird:
- Explorations Feature: The Explorations conversational UI feature is powered by the Tinybird MCP Server. This is the perfect example of an agentic UI feature in a multi-tenant app leveraging the Tinybird MCP Server with row-level security policies secured by workspace-level tokens.
- Birdwatcher Agent: We've created an AI agent that can answer questions about your Tinybird workspace. The agent uses the Tinybird MCP Server and service data sources to monitor your Tinybird usage and help you identify areas for optimization, cost reduction, and performance improvements. It can run as a Slack app, as a CLI, or as a standalone, ambient agent to run scheduled analysis and exploration. You can find the Birdwatcher Agent code (and additional Tinybird AI resources) in this repository.
- Web/Product Analytics: Send web traffic data to Tinybird (start with the Web Analytics Template) and connect a Slack agent via MCP to get daily content performance summaries or product usage insights.
- Custom Conversational Data Apps: Integrate your Tinybird data and endpoints into your user-facing, agentic applications. Leverage user-level JWTs to give your end users the ability to query and explore Tinybird data using natural language.
- Other Ideas: Pretty much any scenario where you want LLMs or agents to securely and efficiently access your data or analytics APIs.
Example usage
Below you'll find example code to integrate the Tinybird MCP Server into your agentic workflows and AI apps.
Agno
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.mcp import MCPTools
import asyncio
import os

tinybird_api_key = os.getenv("TINYBIRD_API_KEY")
tinybird_host = os.getenv("TINYBIRD_HOST")

async def main():
    async with MCPTools(
        transport="streamable-http",
        url=f"https://cloud.tinybird.co/mcp?token={tinybird_api_key}&host={tinybird_host}",
        timeout_seconds=120
    ) as mcp_tools:
        # Set up and run the agent
        agent = Agent(
            model=Claude(id="claude-opus-4-20250514"),
            tools=[mcp_tools]
        )
        await agent.aprint_response(
            "top 5 pipes with the most errors in the last 24 hours",
            stream=True,
        )

if __name__ == "__main__":
    asyncio.run(main())
Example output:
uv run python agno.py
┏━ Message ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                                                             ┃
┃ top 5 pipes with the most errors in the last 24 hours                       ┃
┃                                                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┏━ Tool Calls ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                                                             ┃
┃ • explore_data(prompt=top 5 pipes with the most errors in the last 24 hours)┃
┃                                                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┏━ Response (22.9s) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃                                                                             ┃
┃ The top 5 pipes with the most errors are:                                   ┃
┃                                                                             ┃
┃ 1. logistics - 279 errors                                                   ┃
┃ 2. cloud_compute - 278 errors                                               ┃
┃ 3. data_lake - 275 errors                                                   ┃
┃ 4. quantum_sim - 272 errors                                                 ┃
┃ 5. ml_platform - 267 errors                                                 ┃
┃                                                                             ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Pydantic
import asyncio
import os

from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

load_dotenv()

SYSTEM_PROMPT = """
- You are a data analyst and you'll be provided with Tinybird tools.
- You will be asked questions about the data in the workspace provided.
"""

async def main():
    tinybird = MCPServerStreamableHTTP(
        f"https://cloud.tinybird.co/mcp?token={os.getenv('TINYBIRD_API_KEY')}&host={os.getenv('TINYBIRD_HOST')}"
    )
    agent = Agent(
        model="anthropic:claude-opus-4-20250514",
        mcp_servers=[tinybird],
        system_prompt=SYSTEM_PROMPT,
    )
    async with agent.run_mcp_servers():
        print("Running agent...")
        result = await agent.run("top 5 pipes with the most errors in the last 24 hours")
        print(result.output)

asyncio.run(main())
Example output:
uv run python pydantic.py
Running agent...
Based on the available data, here are the **top 5 pipes with the most errors in
the most recent 24-hour period** (May 30-31, 2025):
## Top 5 Pipes with Most Errors (Last 24 Hours of Available Data)
1. **cloud_compute** (quantum_systems)
- **13 errors** (18.06% of all errors)
- Error types: Timeout, ServiceUnavailableError, RateLimitError
- Time range: May 30, 01:25:34 - May 31, 02:31:37
2. **ml_platform** (data_pioneers)
- **7 errors** (9.72% of all errors)
- Error types: Timeout, ServiceUnavailableError, AuthenticationError, RateLimitError
- Time range: May 30, 04:28:47 - May 31, 12:08:06
3. **payments** (acme_corp)
- **7 errors** (9.72% of all errors)
- Error types: Timeout, ServiceUnavailableError, AuthenticationError, RateLimitError
- Time range: May 30, 07:09:41 - May 31, 13:17:47
4. **pos_system** (future_retail)
- **6 errors** (8.33% of all errors)
- Error types: Timeout, ServiceUnavailableError, RateLimitError
- Time range: May 30, 09:50:32 - May 31, 20:21:21
5. **quantum_sim** (quantum_systems)
- **6 errors** (8.33% of all errors)
- Error types: Timeout, ServiceUnavailableError, AuthenticationError
- Time range: May 30, 08:10:51 - May 31, 20:21:20
**Note:** The data in this workspace is from 2025, so I'm showing the most
recent 24-hour period with error data (May 30-31, 2025). The most common error
types across all pipes are:
- ServiceUnavailableError
- Timeout
- RateLimitError
- AuthenticationError
Vercel AI SDK
import { anthropic } from "@ai-sdk/anthropic";
import {
  generateText,
  experimental_createMCPClient as createMCPClient,
  type Message,
} from "ai";
import {
  StreamableHTTPClientTransport,
} from "@modelcontextprotocol/sdk/client/streamableHttp";
import * as dotenv from "dotenv";

dotenv.config();

async function main() {
  const messages: Message[] = [
    {
      id: "1",
      role: "user",
      content: "top 5 pipes with the most errors",
    },
  ];

  const url = new URL(
    `https://cloud.tinybird.co/mcp?token=${process.env.TINYBIRD_API_KEY}&host=${process.env.TINYBIRD_HOST}`
  );

  const mcpClient = await createMCPClient({
    transport: new StreamableHTTPClientTransport(url, {
      sessionId: "session_123",
    }),
  });

  const tbTools = await mcpClient.tools();

  const result = await generateText({
    model: anthropic("claude-opus-4-20250514"),
    messages,
    maxSteps: 5,
    system: `You are a helpful data analyst. Use service data sources and the explore_data tool to answer the user's question.`,
    tools: { ...tbTools },
  });

  console.log(result.text);
}

main();
Example output:
npm start
> tsx vercel.ts
Based on the analysis of your pipe error data, here are the **top 5 pipes with the
most errors**:
1. **llm_messages** - 349 errors
- Pipe ID: t_15fd151a587a40c8b7c85a0ccb9a20d7
2. **query_api** - 62 errors
- Pipe ID: query_api
3. **generic_counter** - 38 errors
- Pipe ID: t_16832801592d4af4be4cf0d8ed6a6c81
4. **llm_usage** - 5 errors
- Pipe ID: t_b4871551faa64d4c929892e34ba36d30
5. **llm_dimensions** - 0 errors
- Pipe ID: t_1a0b49d5a1274196ba6ba15c001c845b
The "llm_messages" pipe stands out with significantly more errors (349) compared
to the others. This suggests it might need investigation to understand the root
cause of these errors. The "query_api" pipe has the second highest error count
with 62 errors.
Would you like me to investigate further into any specific pipe's errors, such
as looking at recent error patterns or error details?
Next steps
The Tinybird MCP Server makes your real-time data and analytics APIs LLM-ready, with enterprise-grade security, observability, and zero infrastructure setup. Use it to build apps, agents, and multi-agent workflows that need access to real-time analytics.
Get started now: read the Tinybird MCP docs.