When we released the TypeScript SDK, one of the first questions we got: "Where's the Python version?"
Fair question. A large share of Tinybird's user base works in Python: data engineers, data platform teams, backend developers building with Django or FastAPI. They shouldn't have to context-switch to a custom DSL to define their data layer.
uv add tinybird-python-sdk
The Python SDK lets you define your entire Tinybird project as Python code. Not an API wrapper. Datasources, pipes, endpoints, connections, materialized views, copy pipes. All defined in Python, with types that flow from your schema to your editor and your type checker.
Start with the code
A datasource:
from tinybird import datasource, types as t, engine

page_views = datasource(
    "page_views",
    schema={
        "timestamp": t.DateTime(),
        "session_id": t.String(),
        "pathname": t.String(),
        "country": t.String().low_cardinality().nullable(),
        "device_type": t.String().low_cardinality(),
    },
    engine=engine.MergeTree(
        sorting_key=["pathname", "timestamp"],
        partition_key="toYYYYMM(timestamp)",
    ),
)
An endpoint:
from tinybird import endpoint, node, params as p, types as t

top_pages = endpoint(
    "top_pages",
    params={
        "start_date": p.DateTime(),
        "end_date": p.DateTime(),
        "limit": p.Int32(default=10),
    },
    nodes=[
        node(
            name="aggregated",
            sql="""
                SELECT pathname, count() AS views, uniqExact(session_id) AS unique_sessions
                FROM page_views
                WHERE timestamp >= {{DateTime(start_date)}}
                  AND timestamp <= {{DateTime(end_date)}}
                GROUP BY pathname
                ORDER BY views DESC
                LIMIT {{Int32(limit, 10)}}
            """,
        ),
    ],
    output={
        "pathname": t.String(),
        "views": t.UInt64(),
        "unique_sessions": t.UInt64(),
    },
)
Query it, and the response is typed:
from tinybird import Tinybird
from endpoints import top_pages

tb = Tinybird(token="...")
results = tb.query(top_pages, start_date="2026-03-01", end_date="2026-03-10")
for row in results:
    print(row.pathname, row.views)  # IDE autocomplete works here
No manual type definitions. The types come from your schema.
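To make the autocomplete claim concrete, here is a hand-written sketch of roughly the row shape the SDK derives from the endpoint's output schema. The class name `TopPagesRow` is illustrative only; the SDK generates the equivalent for you, and you never write this yourself:

```python
from typing import NamedTuple

# Illustrative only: a manual equivalent of the typed row derived from
# the `output` schema of the `top_pages` endpoint above.
class TopPagesRow(NamedTuple):
    pathname: str
    views: int
    unique_sessions: int

row = TopPagesRow(pathname="/pricing", views=128, unique_sessions=42)
print(row.pathname, row.views)  # attribute access, so the editor can complete it
```

Because every field has a known name and type, both autocomplete and static type checking fall out for free.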
Works with Pydantic
If your application already uses Pydantic models, you can use them directly as datasource schemas:
from datetime import datetime

from pydantic import BaseModel
from tinybird import datasource, from_model

class PageView(BaseModel):
    timestamp: datetime
    session_id: str
    pathname: str
    country: str | None = None
    device_type: str = "unknown"

page_views = datasource("page_views", schema=from_model(PageView))
One model, shared between your application logic and your data layer. Ingestion validates against the model. Schema changes show up as type errors in your editor, not as broken pipelines in production.
Typed ingestion
The runtime client checks your data against the datasource schema at the type level:
from datetime import datetime

from tinybird import Tinybird
from datasources import page_views

tb = Tinybird(token="...")

# Inside an async function:
await tb.ingest(page_views, [
    {
        "timestamp": datetime.now(),
        "session_id": "abc-123",
        "pathname": "/pricing",
        "country": "US",
        "device_type": "desktop",
    }
])
Pass a field that doesn't exist in the schema, or pass the wrong type, and mypy or pyright catches it before you run the code. The type system covers the full surface of ClickHouse® types, with chainable modifiers like .nullable(), .low_cardinality(), .default(), and .codec().
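The chainable-modifier pattern deserves a note: each modifier returns a new type object rather than mutating in place, so partially built types can be reused safely. A rough, SDK-independent sketch of how such a fluent API composes (the class and method names here are illustrative, not the SDK's internals):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Column:
    """Immutable column type: each modifier returns a modified copy."""
    base: str
    modifiers: tuple[str, ...] = ()

    def nullable(self) -> "Column":
        return replace(self, modifiers=self.modifiers + ("Nullable",))

    def low_cardinality(self) -> "Column":
        return replace(self, modifiers=self.modifiers + ("LowCardinality",))

    def render(self) -> str:
        # Wrap inside-out: String -> Nullable(String) -> LowCardinality(...)
        out = self.base
        for m in self.modifiers:
            out = f"{m}({out})"
        return out

country = Column("String").nullable().low_cardinality()
print(country.render())  # LowCardinality(Nullable(String))
```

Because `Column` is frozen, a shared base like `Column("String")` can be specialized per field without one schema's modifiers leaking into another's.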
Why this matters for Python teams
Data engineers already work in Python. Their ETL scripts, their ML pipelines, their FastAPI services. When Tinybird resources are also Python, everything lives in one codebase: same review process, same CI, same toolchain.
And for coding agents, Python is native territory. When your Tinybird resources are Python files in your repo, Claude Code, Codex, or Cursor can read your datasource schemas, write endpoints, add materialized views, and fix issues. The custom DSL was unfamiliar ground for them. Python isn't.
Development workflow
tb init # scaffold a project
tb build # build against Tinybird Local or a branch
tb dev # watch mode with hot reload
tb deploy # deploy to production
tb dev watches your Python files and syncs changes to a Tinybird branch as you save. tb pull downloads existing cloud resources, and tb migrate converts .datasource and .pipe files to Python definitions if you want to move incrementally.
The SDK also works alongside existing datafiles. Your tinybird.config.py can include both:
config = {
    "include": [
        "lib/tinybird.py",        # new Python definitions
        "tinybird/*.datasource",  # existing datafiles
        "tinybird/*.pipe",
    ],
}
Migrate at your own pace. Both formats are first-class.
Get started
uv add tinybird-python-sdk
tb init
tb init scaffolds your project and offers to install Tinybird agent skills so your coding agent understands Tinybird conventions from the start.
PS: If you need help migrating from Classic to Forward, get in touch with support.
