Jun 09, 2025

MCP vs APIs: When to Use Which for AI Agent Development

MCP enforces consistency that HTTP APIs lack, which is essential for LLMs' autonomous tool selection and agent autonomy. This post breaks down the real-world trade-offs of using MCP, APIs, or both.

AI x Data
Jorge Sancha, Co-founder

Deciding between using MCP (Model Context Protocol) or APIs when building AI agents is not necessarily a binary choice.

In this post I break down the technical trade-offs and share a few learnings on when to use which.

What We're Actually Comparing

Model Context Protocol (MCP) is a standardized wire protocol that lets AI systems communicate with external services using natural language. Think of it as a universal adapter for tools. MCP servers wrap your existing APIs, resources, and data, exposing them in a way that LLMs can discover and use autonomously during conversations, as the situation demands.
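To make discovery and deterministic execution concrete, here's a toy sketch of the shape an MCP-style server takes. This is a hand-rolled registry, not the actual MCP wire protocol or any real SDK, and the `top_products` tool, its schema, and its canned result are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str              # the LLM reads this to decide when to use the tool
    input_schema: dict            # single input contract, JSON-Schema style
    handler: Callable[[dict], dict]

@dataclass
class ToyMCPServer:
    tools: dict = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def list_tools(self) -> list:
        # runtime discovery: the model asks "what can you do?" mid-conversation
        return [{"name": t.name, "description": t.description,
                 "input_schema": t.input_schema} for t in self.tools.values()]

    def call_tool(self, name: str, arguments: dict) -> dict:
        # deterministic execution: server-side code runs the call, not the model
        return self.tools[name].handler(arguments)

server = ToyMCPServer()
server.register(Tool(
    name="top_products",
    description="Return best-selling products for a date range.",
    input_schema={"type": "object",
                  "properties": {"start": {"type": "string"},
                                 "end": {"type": "string"}}},
    handler=lambda args: {"rows": [("widget", 42)]},  # would wrap a real API call
))
```

The point of the sketch is the split of responsibilities: descriptions and schemas are for the model to read; handlers are ordinary code you can test.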

Discovery and autonomous use are the key words to remember when thinking about MCP vs APIs.

Traditional APIs (REST, GraphQL) remain the backbone of software integration. When programming agents, you can either hard-code API calls or implement function calls that you can pass as tools and that the agent can invoke. The agent's capabilities are fixed at design time; you decide what it can do.
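For contrast, here's a sketch of what design-time function calling looks like. The `get_weather` function and its schema are hypothetical stand-ins for a real API and a real function-calling LLM client:

```python
# Capabilities are fixed at design time: the developer enumerates exactly
# which functions the model may call.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21}   # would be an HTTP call in practice

# The schema you would hand to a function-calling LLM API; the model can
# only choose among these pre-declared tools.
TOOLS = [{
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
}]

DISPATCH = {"get_weather": get_weather}

def run_tool_call(name: str, args: dict) -> dict:
    # deterministic dispatch: unknown tool names fail loudly
    if name not in DISPATCH:
        raise ValueError(f"unknown tool: {name}")
    return DISPATCH[name](**args)
```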

MCP servers don't replace APIs; they usually wrap existing ones, adding a "conversational layer" that makes those APIs LLM-friendly. In fact, there are great reasons to have MCPs use only APIs under the hood, as we will see later.

When MCP is the Right Choice

MCP shines in scenarios where you need LLMs to reason and make choices on their own. It also happens to be fantastic for rapid prototyping via chat interfaces such as Claude.

Dynamic Tool Selection

As you may already know, an MCP server gives your LLM access to a set of features of an underlying service. These features can be tools (executable functions that the model can discover and correctly execute), resources (context and data for the user or the AI model to use), or prompts (templated, structured messages and instructions for interacting with language models).

When your agent needs to reason in real time about which tools to use in order to accomplish a given task, MCP excels.

Instead of pre-defining every possible query, you let the LLM figure out what it needs through natural language. This is powerful for agentic analytics where an AI might dynamically form database queries to answer arbitrary user questions.

Multi-Tool Workflows

If your agent needs to use many different tools or resources from a single service, or to interact with multiple services (e.g., an agent that monitors real-time stock data, alerts users, and stores information), MCP offers a unified way to integrate across all of them.

Rather than managing numerous API SDKs and formats, you connect to multiple MCP servers all speaking the same protocol. The model can use any tool mid-conversation, treating them like plug-and-play modules.

Agent Autonomy

MCP enables true agent autonomy: the ability to iteratively call tools, get results, and decide next steps in a loop.

An agent analyzing sales data might query for summary stats, decide it needs more detail, and make follow-up queries, all without explicit workflow coding.
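That loop can be sketched like this, with a stubbed-out function standing in for a real LLM call; the tool names, the "EU" segment, and the canned answers are invented for illustration:

```python
ANALYTICS_TOOLS = {
    "summary_stats": lambda args: {"growth": "+12%"},   # stand-in query tools
    "detail_query": lambda args: {"segment": args["segment"], "growth": "+20%"},
}

def fake_model(history):
    """Stand-in for an LLM deciding the next step from the conversation so far."""
    tool_calls = sum(1 for msg in history if msg["role"] == "tool")
    if tool_calls == 0:
        return {"action": "call_tool", "tool": "summary_stats", "args": {}}
    if tool_calls == 1:
        return {"action": "call_tool", "tool": "detail_query", "args": {"segment": "EU"}}
    return {"action": "finish", "answer": "EU drove most of the growth."}

def run_agent(max_steps=5):
    history = [{"role": "user", "content": "Why did sales grow?"}]
    for _ in range(max_steps):          # step budget so the loop always terminates
        decision = fake_model(history)
        if decision["action"] == "finish":
            return decision["answer"]
        # call the chosen tool and feed the result back for the next decision
        result = ANALYTICS_TOOLS[decision["tool"]](decision["args"])
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"
```

The step budget is worth copying even in real systems: autonomous loops need a hard cap on iterations.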

Rapid Prototyping and Tool Integration

Another often underestimated advantage of MCP is how quickly you can validate an idea for an agent before actually building anything.

Modern AI platforms with a conversational interface, like Claude, can be configured to connect to multiple MCP servers; the model will discover available tools, decide which to use, call them as needed, and handle errors without custom integration code.

For rapid prototyping, this is transformative. You can connect MCPs to Claude, write a comprehensive prompt describing what you want the agent to do, and test whether your idea works. It's often the fastest way to validate an agent concept before committing to full development.

I recently wanted to build an agent that would monitor our code repositories and evaluate pull requests to decide whether they were good or bad candidates for insightful technical content. I configured Claude to use MCPs for GitLab, Resend, and Tinybird, then wrote a comprehensive prompt asking Claude to analyze the last 25 PRs: look at each one's name, description, and code changes; try to understand and evaluate the significance of the change; and decide whether a technical blog post could be written that was both interesting for our audience and aligned with Tinybird's worldview.

It worked like a charm and, without a line of code, it was immediately obvious that it could be easily automated as a background agent.

However, once I started coding, I realised that getting the last PRs from GitLab and iterating through them was easier, faster, and more performant to implement simply by using their APIs. Which leads me to...

When Direct APIs Are Better

Traditional API integration is still extremely useful in many scenarios, especially when you require deterministic results from every call. It can also improve performance wherever you don't need an LLM to reason over your results.

Performance and Real-Time Requirements

For high-performance, low-latency applications, direct API calls are more efficient. MCP adds a reasoning layer that, while powerful, introduces latency as the model decides how to use tools. In time-sensitive workflows—monitoring stock prices, IoT sensors, real-time analytics—direct API calls provide predictable performance.

Complex Data Operations

Large-scale data requests need custom API logic. Current MCP-driven agents struggle with pagination, bulk data pulls, and complex data transformations. If an API returns 100 records at a time, an MCP agent might not automatically paginate through results, risking incomplete data.

For bulk operations, developer-managed API calls with proper data filtering are more reliable and cost-effective than letting an agent attempt brute-force calls that could exceed context windows and drive up costs.
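Here's a sketch of what that developer-managed pagination looks like, with a stub standing in for a real paginated HTTP API (the 250-record dataset and cursor shape are invented):

```python
def fetch_page(cursor):
    """Stand-in for a paginated HTTP API that returns at most 100 records per call."""
    data = list(range(250))               # pretend backing dataset
    start = cursor or 0
    page = data[start:start + 100]
    next_cursor = start + 100 if start + 100 < len(data) else None
    return {"records": page, "next_cursor": next_cursor}

def fetch_all():
    # explicit pagination in your own code: no risk of the agent stopping
    # after page one or dumping every page into its context window
    records, cursor = [], None
    while True:
        resp = fetch_page(cursor)
        records.extend(resp["records"])
        cursor = resp["next_cursor"]
        if cursor is None:
            return records
```

Ten lines of loop, but an MCP-driven agent has to rediscover this logic on every request, and may not.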

Multi-Source Data Orchestration

While fulfilling requests that require combining multiple services or databases is, in principle, a perfect use case for MCP, it can sometimes be hard for agents to mix data from different sources in a single response.

Integrating, for instance, Slack + Jira + a database in one answer may exceed what current MCP sessions can handle. This is where a more deterministic pipeline, in which your code calls each API and passes the data to an LLM for processing, may be the better combination.
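Such a pipeline is plain orchestration code. In this sketch the three fetch functions are hypothetical stand-ins for real Slack, Jira, and database clients, and the channel, project, and table names are invented:

```python
def fetch_slack_thread(channel):
    return ["bug reported in checkout"]            # stand-in for the Slack API

def fetch_jira_issues(project):
    return [{"key": "SHOP-42", "status": "Open"}]  # stand-in for the Jira API

def fetch_db_metrics(table):
    return {"error_rate": 0.03}                    # stand-in for a database query

def build_llm_context():
    # deterministic orchestration: your code calls each source in a known order,
    # then hands one merged context to the LLM for the step that needs reasoning
    slack = fetch_slack_thread("#support")
    jira = fetch_jira_issues("SHOP")
    metrics = fetch_db_metrics("checkout_events")
    return (f"Slack: {slack}\nJira: {jira}\nMetrics: {metrics}\n"
            "Summarize the incident for the on-call engineer.")
```

Only the final summarization step needs a model; everything before it is testable, ordinary code.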

Security and Deterministic Operations

Another thing to worry about with MCP is security. How do you ensure an agent doesn't leak data or schemas you don't want leaked?

For operations that must guarantee specific actions or adhere to strict policies, direct API calls under tight control may be safer. Agents using MCP have significant autonomy—they could call functions in unintended ways if prompts or tool descriptions aren't perfect or if there isn't a strong security mechanism built in.

For financial transactions, sensitive data updates, or regulated environments, you want validation, error handling, and security checks at every step. Traditional API integration usually leverages existing API management features, like security, governance, auditing, and rate limiting—providing observable, enforceable control points.

That doesn't mean you cannot use MCP in these cases, but it does mean your MCP server needs to enforce all of those things.
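A sketch of the kind of enforcement layer this implies, whether it sits behind a direct API integration or inside an MCP server. The `refund` action, the policy limit, and the audit log are all invented for illustration; a real system would wire this in front of the payments API:

```python
ALLOWED_ACTIONS = {"refund"}        # explicit allow-list the agent cannot extend
MAX_REFUND_CENTS = 10_000           # hard policy limit enforced outside the model

def execute(action, params, audit_log):
    # every call is validated, policy-checked, and audited before anything runs
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action not allowed: {action}")
    amount = params.get("amount_cents")
    if not isinstance(amount, int) or not 0 < amount <= MAX_REFUND_CENTS:
        raise ValueError(f"refund amount out of policy: {amount!r}")
    audit_log.append(f"refund {amount} approved")
    return {"status": "ok", "refunded_cents": amount}  # would call the payments API here
```

The important property is that the limits live in code the model cannot rewrite, no matter what the prompt says.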

The Hybrid Approach

In practice, the most effective agent systems use both approaches strategically:

  • MCP for flexible, on-the-fly tool use and natural language reasoning, control, and querying
  • Direct APIs for efficient bulk operations, deterministic behavior, and enforcing constraints
  • Claude, Cursor, or similar tools + MCPs for rapid prototyping; you can then optimize with custom API integration where needed

As always in software, the choice is about using the right tool for each job.

The Real Impact on Development

What's interesting is that MCP may actually increase API usage rather than replace it. Each user request might cause an agent to make numerous API calls as it thinks and iterates. This drives demand for more robust, well-documented APIs.

Bear in mind that simply exposing APIs under an MCP server doesn't make them effective tools: great descriptions and instructions on how and when the LLM should use them can make a big difference in how well they perform their assigned tasks.

The emergence of MCP also highlights the importance of designing APIs with AI consumption in mind: MCP enforces consistency that HTTP APIs lack (even with things like OpenAPI describing the interfaces they expose). While OpenAPI documents existing patterns, MCP prescribes specific ones: single input schemas, deterministic execution, runtime discovery.

This matters because LLM-generated HTTP requests are error-prone, often with hallucinated paths and wrong parameters. MCP's deterministic execution means you can test, sanitize inputs, and handle errors in actual code, not hope that the LLM formats requests correctly.
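A sketch of that server-side sanitization, with an invented single input schema for a date-range query tool:

```python
SCHEMA = {"start": str, "end": str, "limit": int}   # single input schema for one tool

def validate_args(args):
    # runs in real code on the server side: reject unknown keys, require every
    # declared one, and fail with a readable error the model can see and retry on
    unknown = set(args) - set(SCHEMA)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    for key, expected_type in SCHEMA.items():
        if key not in args:
            raise ValueError(f"missing parameter: {key}")
        if not isinstance(args[key], expected_type):
            raise ValueError(f"{key} must be {expected_type.__name__}")
    return args
```

Because validation happens in your code, a hallucinated parameter becomes a structured error the model can recover from, not a malformed request hitting your backend.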

That being said, it's early days for MCP. When you use a few different servers, you can tell which ones are "mature" and have actually been heavily used and iterated on, and which ones haven't.

What we're doing at Tinybird

At Tinybird, we're incredibly excited about Agentic Analytics, and we believe Tinybird is a great platform to build on. For agents to be truly useful, they need real-time, governed, and performant access to data. Here's why Tinybird is uniquely suited for this new wave of AI development:

  • Low-latency APIs as a Semantic Layer. Tinybird is built around turning SQL queries into low-latency, parameterized API endpoints. These endpoints act as "query lambdas" that you can describe as tools for an agent. This creates a powerful semantic layer that gives you precise control over what is and what isn't available for agents, effectively creating a secure and well-defined boundary for them to operate within.

  • Built-in Security, Observability, and RBAC. Security is paramount when giving agents data access. With Tinybird, security, observability, and RBAC are built-in features. Our Auth Tokens (which are JWTs) allow you to implement granular role-based access. You can create different tokens for multiple agents, granting each one access only to the specific tools and data it needs to function.

  • Serverless Scale and Performance. Agentic workflows often involve iterative, rapid-fire queries. Tinybird's performance makes our API endpoints extremely fast, providing the low-latency responses needed for a fluid user experience. And because it's serverless, it scales effortlessly with your agent's usage.

We're just getting started and are committed to making Tinybird the best platform for building and deploying data-intensive AI agents. We are currently developing more tooling and infrastructure to support developers who are building these agentic systems, so keep an eye on tinybird.ai for more news soon.

Copyright © 2025 Tinybird. All rights reserved
