
Data Platform


See counts for datasources, endpoints, connections, pipes, and more. Plus a 7-day error tracking chart with recent errors and timestamps.
Sparkline charts for vCPU time, requests, and ingested rows over a 7-day window. Spot trends and anomalies without writing a single query.
Last deployment summary, editable markdown workspace description, and quick access to API hosts, MCP server URLs, and BI tool connection parameters.
Filter by type, deployment ID, connection, errors, or usage metrics. Use natural language to find resources instantly across your entire workspace.
Click any resource to open a detail panel. See API URLs, schema, metrics, and diffs. Toggle between list and lineage views without losing your position.
Browse pipe_stats_rt, endpoint_errors, kafka_ops_log, and more. Access ClickHouse system.* tables directly. Queries against service datasources are free.

Chronological view of all deployments with changes and logs. See who deployed, when, and what changed across your workspace.
See exact changes between deployments down to specific SQL lines. Compare resource definitions side by side to understand what changed.
Click into any deployment to see what changed. View contextual resource details, API URLs for endpoints, schema for datasources, and full diffs.

CPU time, QPS, average memory, ingested rows, and ingest errors. All in real-time charts with adjustable time ranges.
Monitor copy executions, sink executions, rows in quarantine, and their error rates. See what's running and what's failing.
Filter jobs by status: error, working, waiting, or done. Quickly identify failed operations and drill into the details.
Filter by log type, source, resource name, and time range. Toggle errors only to focus on what matters. Update in real-time or refresh on demand.
Click any log entry to open a split view of the underlying resource. Browse its data, columns, settings, lineage, and resource-level logs without leaving the page.
Run tb logs to inspect your logs directly from the CLI. Get real-time visibility into your data operations without leaving your terminal.

Build and iterate on queries in a multi-node editor. Chain SQL nodes together to prototype complex pipelines before deploying.
Generate and refine queries with CMD+K. AI understands your schema and suggests fixes, helping you debug pipelines faster.
See query results as tables. Export data in JSON or CSV to use in external tools or share with your team.

Track endpoint latency, error rates, and request volumes over time. Visualize any data source, including service data sources like pipe_stats_rt, without writing SQL.
Granularity auto-adjusts to your time range. Drag-to-zoom into any period for a closer look, syncing automatically with the time selector.
Compare metrics side by side with table and chart views. Monitor storage consumption, request patterns, and resource usage across your workspace.

Use @syntax to reference specific data sources and add workspace rules to fine-tune how the agent responds. The more context it has, the more accurate its answers.
Get improvement suggestions, identify performance bottlenecks, and surface data quality issues. Focus on intelligent analysis, not visualization.
Reasoning nodes created during analysis can be exported to Playgrounds. Continue refining queries in a full SQL environment.


A workspace is where all your Tinybird resources live: datasources, pipes, endpoints, connections, explorations, playgrounds, and more. You can access and manage your workspace from the UI or the CLI. The UI gives you visual tools to monitor performance, explore data, inspect logs, and debug issues. The CLI lets you deploy, iterate, and automate.
The Overview page is your workspace dashboard. It shows resource summaries with counts for datasources, endpoints, materializations, connections, pipes, copies, and sinks. It also includes sparkline charts for vCPU time, requests, and ingested rows over a 7-day window, the last deployment status, and quick access to API hosts, MCP server URLs, and BI tool connection parameters.
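The 7-day sparklines are simple to reason about: each daily value maps to one of eight block characters, scaled between the window's minimum and maximum. A minimal sketch with made-up daily ingested-row counts (not real workspace data):

```python
# Sketch: rendering a 7-day sparkline like the Overview cards.
# The daily values below are illustrative, not real metrics.
BLOCKS = " \u2581\u2582\u2583\u2584\u2585\u2586\u2587\u2588"  # 8 block heights

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat series
    return "".join(BLOCKS[1 + round((v - lo) / span * 7)] for v in values)

ingested_rows = [120, 340, 280, 900, 760, 410, 1020]  # hypothetical 7-day window
chart = sparkline(ingested_rows)
print(chart)
```

The same scaling idea applies to the vCPU time and request sparklines: the chart encodes the trend, not the absolute values.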
The Resources page is a unified table view of all your workspace resources with inline metrics like status, request count, error count, and average latency. It features composable filters to narrow down by type, deployment ID, connection, errors, or usage metrics. You can also use natural language to find resources, toggle between list and lineage views, and open a split-screen detail panel to inspect any resource without losing your place. The Resources page also gives you access to service datasources (pipe_stats_rt, endpoint_errors, datasources_storage, kafka_ops_log, and more) and ClickHouse system.* tables. Queries against service datasources are free.
The Deployments page shows a chronological view of all your releases. Click into any deployment to see exactly what changed, including a SQL diff viewer that highlights changes down to specific lines. You can compare resource definitions side by side and see contextual details like API URLs and schema changes.
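The line-level SQL diff is the same idea as a classic unified diff. A local sketch using Python's difflib and two hypothetical versions of a pipe definition (file names and SQL are invented for illustration):

```python
import difflib

# Sketch: the kind of line-level SQL diff the Deployments page highlights.
# Both definitions and the .pipe file names are hypothetical.
before = """SELECT user_id, count() AS visits
FROM events
GROUP BY user_id""".splitlines()

after = """SELECT user_id, count() AS visits
FROM events
WHERE event_type = 'page_view'
GROUP BY user_id""".splitlines()

diff = list(difflib.unified_diff(
    before, after,
    fromfile="v1/top_users.pipe", tofile="v2/top_users.pipe",
    lineterm="",
))
print("\n".join(diff))
```

Lines prefixed with `+` or `-` are exactly what a side-by-side comparison surfaces: here, the added WHERE clause.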
The Observability page gives you real-time metrics for your workspace health. Track CPU time, QPS, average memory, ingested rows, rows in quarantine, ingest errors, copy and sink executions, and their error rates. Filter by job status and time range to spot issues across your data infrastructure.
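The status filtering and error-rate math behind that view are straightforward. A sketch over hypothetical job records (the statuses match the four the page exposes: error, working, waiting, done):

```python
# Sketch: filtering jobs by status and computing an error rate,
# applied to invented job records.
jobs = [
    {"id": "j1", "kind": "copy", "status": "done"},
    {"id": "j2", "kind": "copy", "status": "error"},
    {"id": "j3", "kind": "sink", "status": "working"},
    {"id": "j4", "kind": "sink", "status": "error"},
    {"id": "j5", "kind": "copy", "status": "waiting"},
]

def by_status(jobs, status):
    return [j for j in jobs if j["status"] == status]

failed = by_status(jobs, "error")
finished = [j for j in jobs if j["status"] in ("done", "error")]
error_rate = len(failed) / len(finished)  # errors over completed jobs
print([j["id"] for j in failed], f"error rate: {error_rate:.0%}")
```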
The Logs page lets you browse every append, import, materialization, copy, replace, and GET request in your workspace. Filter by log type, source, resource name, time range, or errors only. Click any log entry to open a split view where you can inspect the underlying resource's data, columns, settings, lineage, and resource-level logs. You can also stream logs from the CLI using tb logs.
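The composable filters on that page reduce to a few predicates. A sketch with invented log entries, mirroring the type, resource, and errors-only controls:

```python
# Sketch: composable log filtering (type, resource, errors only),
# applied to hypothetical log entries.
logs = [
    {"type": "append", "resource": "events", "error": None},
    {"type": "materialization", "resource": "events_mv", "error": "MEMORY_LIMIT"},
    {"type": "copy", "resource": "daily_copy", "error": None},
    {"type": "append", "resource": "events", "error": "quarantine: bad row"},
]

def filter_logs(logs, log_type=None, resource=None, errors_only=False):
    out = logs
    if log_type:
        out = [entry for entry in out if entry["type"] == log_type]
    if resource:
        out = [entry for entry in out if entry["resource"] == resource]
    if errors_only:
        out = [entry for entry in out if entry["error"] is not None]
    return out

hits = filter_logs(logs, log_type="append", errors_only=True)
print(hits)
```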
Playgrounds is for prototyping and debugging queries. Write SQL in a multi-node editor, chain nodes together to build complex pipelines, and use AI (CMD+K) to generate or refine SQL. You can inspect results as tables and export data in JSON or CSV.
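Exporting a result set in either format is the usual serialization step. A sketch with an invented result set, using only the standard library:

```python
import csv
import io
import json

# Sketch: exporting a Playground-style result set as JSON or CSV.
# The rows are illustrative, not real query output.
rows = [
    {"pipe_name": "top_products", "requests": 1240, "avg_latency_ms": 38.2},
    {"pipe_name": "user_activity", "requests": 905, "avg_latency_ms": 51.7},
]

as_json = json.dumps(rows, indent=2)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
as_csv = buf.getvalue()

print(as_csv.splitlines()[0])  # header row
```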
Time Series is for monitoring endpoint performance and tracking trends over time. Visualize data sources — including service data sources like pipe_stats_rt, endpoint_errors, and datasources_storage — without writing SQL. Granularity adjusts automatically to your selected time range, and you can drag to zoom into specific periods to investigate incidents or observe patterns.
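Auto-adjusting granularity amounts to picking a bucket size from the selected range. A sketch of one plausible rule; the thresholds and bucket labels here are made up for illustration, not the product's actual values:

```python
from datetime import timedelta

# Sketch: choosing a chart bucket size from the selected time range.
# Thresholds are invented for illustration.
def pick_granularity(time_range: timedelta) -> str:
    if time_range <= timedelta(hours=1):
        return "1 minute"
    if time_range <= timedelta(days=1):
        return "15 minutes"
    if time_range <= timedelta(days=7):
        return "1 hour"
    return "1 day"

print(pick_granularity(timedelta(days=3)))
```

Dragging to zoom just narrows `time_range`, which is why the granularity tightens automatically as you zoom in.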
Explorations is an AI agent with full context of your data. Use @syntax to reference specific data sources and add workspace rules to customize how the agent responds. It identifies performance bottlenecks, surfaces data quality issues, and provides actionable insights. Reasoning nodes from the analysis can be exported to Playgrounds for further refinement.
Each tool serves a different part of the workflow. Overview gives you a snapshot of your workspace health. Resources lets you inspect and filter everything you've deployed. Deployments tracks what changed and when. Observability and Logs help you monitor performance and debug issues in real time. When you need to dig deeper, Playgrounds lets you write and test SQL, Time Series visualizes trends over time, and Explorations uses AI to analyze your data. They all share the same workspace context, so you can move between them naturally.
The CLI handles the development workflow: initialize projects, build resources, deploy to production, and manage tokens. The UI gives you visibility into what's running: monitor metrics, browse logs, inspect resources, and explore data. You develop and deploy with the CLI, then use the UI to understand what's happening. Some features, such as Playgrounds, Time Series, and Explorations, are UI-only. Others, like logs, are available from both: browse them in the UI or stream them with tb logs.
Service datasources are built-in datasources that track everything happening in your workspace: API requests (pipe_stats_rt, pipe_stats), BI queries (bi_stats_rt, bi_stats), data operations (datasources_ops_log, block_log), storage usage (datasources_storage), Kafka operations (kafka_ops_log), sink operations (sinks_ops_log), endpoint errors, background jobs (jobs_log), AI usage (llm_usage), and more. Queries against service datasources are free and don't count toward your usage limits. You can query them from Playgrounds, visualize them in Time Series, or reference them in Explorations.
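As a sketch of what querying a service datasource could look like outside the UI, here is a request composed against Tinybird's Query API (the /v0/sql endpoint). The host and token are placeholders, and the column names are illustrative; check the actual pipe_stats_rt schema before relying on them:

```python
from urllib.parse import urlencode

# Sketch: composing a Query API call against a service datasource.
# HOST and TOKEN are placeholders; column names may differ from the
# real pipe_stats_rt schema.
HOST = "https://api.tinybird.co"  # your API host, shown on the Overview page
TOKEN = "<READ_TOKEN>"            # placeholder -- never hardcode real tokens

sql = (
    "SELECT pipe_name, count() AS requests "
    "FROM tinybird.pipe_stats_rt "
    "WHERE start_datetime > now() - INTERVAL 1 DAY "
    "GROUP BY pipe_name ORDER BY requests DESC "
    "FORMAT JSON"
)

url = f"{HOST}/v0/sql?{urlencode({'q': sql})}"
headers = {"Authorization": f"Bearer {TOKEN}"}
# A GET request to `url` with `headers` would run the query; since it reads a
# service datasource, it would not count toward usage limits.
print(url)
```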
Yes. You can keep your Playgrounds, Time Series, or Explorations private, or share them with anyone in your workspace. Playgrounds and Time Series configurations persist in your workspace so team members can access and build on each other's work.
All tools work with any resource in your Tinybird workspace, including data sources, pipes, and materialized views. You can also explore service data sources like pipe_stats_rt, pipe_stats, endpoint_errors, datasources_storage, datasources_ops_log, and llm_usage — queries against service data sources are free. In Explorations, use @syntax to reference specific data sources to focus the AI on the right context.
Not necessarily. Explorations lets you query data using natural language. Time Series generates SQL from your configuration choices. However, Playgrounds is designed for SQL users who want full control. Knowing SQL helps you get the most out of each tool.
In Playgrounds, press CMD+K to invoke AI for SQL generation. It understands your schema and suggests fixes, helping you debug pipelines faster. In Explorations, an AI agent with full context of your data analyzes questions, identifies issues, and provides insights. You can export reasoning nodes from Explorations to Playgrounds to continue refining queries in a full SQL environment.

