---
title: "Smarter deployments in Tinybird: faster schema changes for the most common operations"
excerpt: "Most schema changes don't need a full table rebuild. We added ALTER TABLE support to Tinybird deployments so simple changes deploy in seconds, not minutes."
authors: "Jordi Orihuela"
categories: "Engineering Excellence"
createdOn: "2026-03-25 00:00:00"
publishedOn: "2026-03-25 00:00:00"
updatedOn: "2026-03-25 00:00:00"
status: "published"
---

In Tinybird, if you want to change the schema of your ClickHouse® tables, you change the Data Source (table) or Pipe (query, materialization...) definition in your project and run `tb deploy`. You don't write migration scripts, coordinate DDL across replicas, or pause ingestion. Tinybird figures out what changed, migrates your data, and keeps everything running. That's been true for a while.

What's new is how we do it under the hood. Until now, every schema change triggered a full table rebuild: create a new table, backfill, swap. Add a column? Rebuild. Change a TTL? Rebuild. It worked. It was correct. And for large tables with streaming ingestion, it was painfully slow.

We've shipped ALTER TABLE support in deployments. Now, when you change a TTL, add a column, drop a column, or make a column nullable, Tinybird applies an `ALTER TABLE` instead of rebuilding. No data movement. No backfill. A deploy that used to take minutes on a 200GB table now finishes in under 5 seconds.

## The brute-force approach

When you deploy a schema change in Tinybird, the deployment engine compares your local `.datasource` files against what's running in production. If the schema changed, the system needs to get from state A to state B without losing data or breaking live queries.

The original approach was always the same, regardless of what changed:

1. Create a new table with the updated schema
2. Set up ingestion routing so new data flows into the new table
3. Backfill historical data from the old table (via `FORWARD_QUERY` or Materialized View replay)
4. Swap the old table for the new one atomically
5. Drop the old table

This is the safest possible strategy. It works for any schema change: column type modifications, sorting key changes, engine swaps. The new table is fully built and validated before anything touches production.

But it's expensive. A table with hundreds of gigabytes needs a full backfill. If you're streaming thousands of events per second, the routing and swap must be carefully timed to avoid gaps or duplicates. For a one-column addition with a default value, you're moving terabytes of data that don't need to move.
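In raw ClickHouse terms, the rebuild sequence maps roughly to statements like these (illustrative schema and names; step 2, ingestion routing, happens outside SQL, and the real engine also handles replication and validation):

```sql
-- 1. Create a new table with the updated schema
CREATE TABLE db.logs_new
(
    `timestamp` DateTime,
    `user_id` String,
    `status_code` UInt16  -- the newly added column
)
ENGINE = MergeTree
ORDER BY timestamp;

-- 3. Backfill historical data from the old table
INSERT INTO db.logs_new
SELECT timestamp, user_id, 200 AS status_code
FROM db.logs;

-- 4. Swap atomically, then 5. drop the old table (now named logs_new)
EXCHANGE TABLES db.logs AND db.logs_new;
DROP TABLE db.logs_new;
```

The backfill in step 3 is where the cost lives: for a large table, that `INSERT INTO ... SELECT` rewrites every row.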

We've [optimized this over time](https://www.tinybird.co/blog/when-not-to-migrate-your-data). The deployment engine learned to trace the data lineage graph, find the most upstream change, and only rebuild what's downstream. If you modify a Materialized View but the 14TB Kafka landing table hasn't changed, the landing table stays put. That was a big improvement. But even with smart lineage analysis, any schema change on any datasource still meant: create table, backfill, swap. For a simple column addition, that's still too much work.
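The lineage-scoping idea can be sketched in a few lines of Python (hypothetical structures; the real engine works on Tinybird's internal resource graph):

```python
def rebuild_set(downstream: dict[str, list[str]], changed: set[str]) -> set[str]:
    """Return the changed nodes plus everything reachable downstream of them.
    Nodes outside this set (like an untouched landing table) stay put."""
    to_rebuild: set[str] = set()
    stack = list(changed)
    while stack:
        node = stack.pop()
        if node in to_rebuild:
            continue
        to_rebuild.add(node)
        stack.extend(downstream.get(node, []))
    return to_rebuild

# Edges point from a datasource to the resources that read from it.
graph = {
    "kafka_landing": ["daily_mv"],
    "daily_mv": ["api_endpoint"],
}
# Changing the MV rebuilds it and its downstream, not the landing table.
print(rebuild_set(graph, {"daily_mv"}))
```

Running this shows that a change to `daily_mv` pulls in `api_endpoint` but leaves `kafka_landing` alone, which is exactly the win: the 14TB landing table never moves.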

## Why ALTER TABLE in ClickHouse® is tricky

ClickHouse® `ALTER TABLE` is not like PostgreSQL `ALTER TABLE`. Some operations are metadata-only and finish instantly. Others trigger mutations that rewrite data parts on disk asynchronously. And some operations that look simple just aren't supported.

A few things that make ALTER hard in a managed deployment context:

**Mutations can't be rolled back.** Once a mutation starts rewriting parts, you can't undo it. If you're halfway through a deploy and need to abort, you need a plan.

**Pending mutations block new ones.** If a previous mutation is still running, issuing another ALTER will queue behind it. In a deployment pipeline, you need to check for pending mutations before proceeding.
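For example, a pre-flight check might query ClickHouse's `system.mutations` table (a sketch; `db.my_table` is a placeholder):

```sql
SELECT count() AS pending
FROM system.mutations
WHERE database = 'db'
  AND table = 'my_table'
  AND is_done = 0
```

If `pending` is nonzero, the deploy has to wait or bail out before issuing another ALTER.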

**Distributed DDL adds coordination.** On a replicated cluster, `ALTER TABLE ... ON CLUSTER` coordinates via ZooKeeper. Network hiccups or slow replicas can stall the entire operation.

**Column constraints limit what you can drop.** You can't drop a column that's part of the sorting key or used in a data skipping index. The deployment engine needs to validate this before attempting the ALTER.

**Not everything is alterable.** Changing a column's type, modifying the sorting key, or switching the table engine all require a full table rebuild. There's no ALTER path for these.

We had to be very precise about which changes get the ALTER path and which still need the full rebuild.

## What we built

The deployment engine now classifies every datasource change into one of four categories:

```txt
NoChange        → skip entirely
MetadataOnly    → update metadata, no DDL needed
AlterOperation  → apply ALTER TABLE in place
Schema          → full table recreation with backfill
```

The `AlterOperation` path kicks in for changes that ClickHouse® can handle as ALTER without data movement or risk. Everything else falls back to the original brute-force approach.

Here's what a deploy looks like when the ALTER path applies:

```bash
» tb deploy
Using dev_mode=branch. Running deploy against Tinybird Cloud main.
Running against Tinybird Cloud: Workspace test_alter

* Changes to be deployed:
-----------------------------------------------------------------------
| status   | name | type       | path                                 |
-----------------------------------------------------------------------
| modified | logs | datasource | tinybird/datasources/logs.datasource |
-----------------------------------------------------------------------
* No changes in tokens to be deployed
* No data will be copied with this deployment
△ WARNING: Datasource logs: This datasource will be updated using ALTER TABLE. No data movement or backfill required. Changes will only take effect after promotion.

Deployment URL: https://cloud.tinybird.co/gcp/europe-west2/test_alter/deployments/2
Job API URL: https://api.europe-west2.gcp.tinybird.co/v0/jobs/69a72bfb-c310-4746-b67a-30e3768f7ad5
Jobs URL: https://cloud.tinybird.co/gcp/europe-west2/test_alter/jobs

* Deployment submitted. It will be auto-promoted when ready.
» Waiting for deployment to be ready...
✓ Deployment is ready
» Waiting for deployment to be promoted...
✓ Deployment promoted
✓ Deployment #2 is live!
```

Compare that with a structural change that requires the full rebuild:

```bash
» tb deploy
Using dev_mode=branch. Running deploy against Tinybird Cloud main.
Running against Tinybird Cloud: Workspace test_alter

* Changes to be deployed:
-----------------------------------------------------------------------
| status   | name | type       | path                                 |
-----------------------------------------------------------------------
| modified | logs | datasource | tinybird/datasources/logs.datasource |
-----------------------------------------------------------------------
* No changes in tokens to be deployed

 Data that will be copied with this deployment:
----------------------------------------------------------------------------
| Datasource            | Backfill type                                    |
----------------------------------------------------------------------------
| logs                  | From live deployment using Forward Query         |
| logs_daily_timeseries | Using Materialized Pipe logs_daily_timeseries_mv |
| logs_daily_dimensions | Using Materialized Pipe logs_daily_dimensions_mv |
| logs_range_15m        | Using Materialized Pipe logs_range_15m_mv        |
----------------------------------------------------------------------------

Deployment URL: https://cloud.tinybird.co/gcp/europe-west2/test_alter/deployments/3
Job API URL: https://api.europe-west2.gcp.tinybird.co/v0/jobs/86e91bb9-9e67-461f-a81b-7af67c4fe834
Jobs URL: https://cloud.tinybird.co/gcp/europe-west2/test_alter/jobs

* Deployment submitted. It will be auto-promoted when ready.
» Waiting for deployment to be ready...
✓ Deployment is ready
» Waiting for deployment to be promoted...
✓ Deployment promoted
✓ Deployment #3 is live!
```

The first involves no data movement:

```bash
△ WARNING: Datasource logs: This datasource will be updated using ALTER TABLE. No data movement or backfill required. Changes will only take effect after promotion.
```

The second backfills the edited datasource and its dependent Materialized Views:

```bash
 Data that will be copied with this deployment:
----------------------------------------------------------------------------
| Datasource            | Backfill type                                    |
----------------------------------------------------------------------------
| logs                  | From live deployment using Forward Query         |
| logs_daily_timeseries | Using Materialized Pipe logs_daily_timeseries_mv |
| logs_daily_dimensions | Using Materialized Pipe logs_daily_dimensions_mv |
| logs_range_15m        | Using Materialized Pipe logs_range_15m_mv        |
----------------------------------------------------------------------------
```

### Supported ALTER operations

Here's what currently goes through the ALTER path:

**TTL changes.** Adding, modifying, or removing a TTL expression:

```sql
ALTER TABLE db.my_table MODIFY TTL timestamp + INTERVAL 30 DAY
ALTER TABLE db.my_table REMOVE TTL
```

**Adding columns.** New columns get added in the correct position, including `FIRST` or `AFTER` for ordering:

```sql
ALTER TABLE db.my_table ADD COLUMN user_agent String DEFAULT '' AFTER status_code
```

**Dropping columns.** After validating the column isn't in the sorting key or an index:

```sql
ALTER TABLE db.my_table DROP COLUMN `deprecated_field`
```

**Making columns nullable:**

```sql
ALTER TABLE db.my_table MODIFY COLUMN `email` Nullable(String)
```

**Changing default values:**

```sql
ALTER TABLE db.my_table MODIFY COLUMN `status` DEFAULT 'pending'
```

**Index changes.** Adding, dropping, and materializing data skipping indexes:

```sql
ALTER TABLE db.my_table ADD INDEX idx_user_id user_id TYPE minmax GRANULARITY 4
ALTER TABLE db.my_table MATERIALIZE INDEX idx_user_id
ALTER TABLE db.my_table DROP INDEX idx_user_id
```

All of these are batched into a single `ALTER TABLE` statement when possible, which avoids issuing a separate mutation per change and keeps the mutation queue short. This matters most when a single deployment bundles several changes.
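For instance, a deploy that adds a column, drops another, and changes the TTL could in principle be expressed as one statement (illustrative names):

```sql
ALTER TABLE db.my_table
    ADD COLUMN `user_agent` String DEFAULT '' AFTER status_code,
    DROP COLUMN `deprecated_field`,
    MODIFY TTL timestamp + INTERVAL 30 DAY
```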

### The decision tree

The schema diff engine compares the old and new column signatures. It strips out "allowed modifications" (nullable and default value changes) before comparing. If what remains is identical (same column names, same types, same order), the change qualifies as an ALTER. If the structural signature differs, it's a full `Schema` change that requires table recreation.

The deployment task generator then checks two things: are there ALTER queries to run, and does the table need recreation for other reasons? A datasource gets a new table when it's new, when it's the upstream origin of a breaking change, or when it's downstream of one. ALTER only applies when there are alterable changes and nothing else forces a rebuild.

What if your deploy mixes alterable and non-alterable changes? The engine evaluates each datasource independently. If you add a column to one datasource (alterable) and change a sorting key on another (not alterable), the first gets an ALTER and the second gets a full rebuild. But if a single datasource has both an alterable change and a structural change, the whole thing falls back to table recreation.
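As a rough illustration, here's how the signature comparison might look in Python. This is a hypothetical sketch, simplified to name/type/nullability and covering three of the four categories (it omits `MetadataOnly`); the real engine handles many more cases:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Column:
    name: str
    type: str                      # e.g. "String" or "Nullable(String)"
    default: Optional[str] = None  # default expression, if any

def structural_signature(col: Column) -> tuple:
    """Strip 'allowed modifications' (nullability, defaults) so only
    structural differences remain in the signature."""
    base = col.type
    if base.startswith("Nullable(") and base.endswith(")"):
        base = base[len("Nullable("):-1]
    return (col.name, base)

def classify(old: list, new: list) -> str:
    """Map a schema diff to a deployment category."""
    if old == new:
        return "NoChange"
    old_sig = [structural_signature(c) for c in old]
    new_sig = [structural_signature(c) for c in new]
    if old_sig == new_sig:
        return "AlterOperation"  # only nullable/default changed
    # Pure column additions/drops are alterable if surviving columns match.
    old_names = {n for n, _ in old_sig}
    new_names = {n for n, _ in new_sig}
    if [s for s in old_sig if s[0] in new_names] == \
       [s for s in new_sig if s[0] in old_names]:
        return "AlterOperation"
    return "Schema"              # type or order changed: full rebuild
```

Making a column nullable or adding a column classifies as `AlterOperation`; changing a column's type falls through to `Schema` and triggers the full rebuild.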

### Safety checks

Before executing any ALTER, the system runs several validations:

**Pending mutations.** If the table has unfinished mutations from a previous deploy, the ALTER is blocked. Running concurrent mutations can lead to unpredictable states.

**Column droppability.** The Forward validation checks (`tb deploy --check`) verify the column isn't part of the sorting key or referenced by a data skipping index, so the ALTER won't fail midway through a deploy.

**Downstream impact.** If a downstream datasource requires a full rebuild anyway (e.g., a Materialized View target changed), the ALTER on the upstream datasource is skipped. The new table will already have the correct schema.

### Rollback

Every ALTER operation generates both forward and rollback queries. If the deployment fails after the ALTER has been applied, the rollback path restores the previous state. For TTL changes, the rollback reapplies the old TTL. For column additions, the rollback drops them. The rollback queries are executed in reverse order to maintain consistency.
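As an illustration (hypothetical table and column names), a column addition generates a pair like:

```sql
-- Forward: applied during the deploy
ALTER TABLE db.my_table ADD COLUMN `user_agent` String DEFAULT '' AFTER status_code;

-- Rollback: executed only if the deploy fails after the ALTER was applied
ALTER TABLE db.my_table DROP COLUMN `user_agent`;
```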

### Materialized View ALTER

Pipe files now support a `DEPLOYMENT_METHOD` directive. When set to `'alter'`, query changes to Materialized Views are applied with `ALTER TABLE ... MODIFY QUERY` instead of recreating the view and its target table.

In your `.pipe` file, it looks like this:

```sql
NODE materialized_node
SQL >
    SELECT
        user_id,
        countState() AS events,
        maxState(timestamp) AS last_seen
    FROM landing_events
    GROUP BY user_id

TYPE materialized
DATASOURCE user_activity_mv
DEPLOYMENT_METHOD 'alter'
```

When you change the SQL in `materialized_node` and run `tb deploy`, the system applies `ALTER TABLE ... MODIFY QUERY` on the underlying MV instead of dropping and recreating it. The MV continues to run with the new query, and no data movement is required.
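Under the hood, that corresponds to a statement along these lines (the internal view name is illustrative; note the target schema is unchanged, only the query):

```sql
ALTER TABLE db.user_activity_mv_pipe MODIFY QUERY
SELECT
    user_id,
    countState() AS events,
    maxState(timestamp) AS last_seen
FROM landing_events
WHERE user_id != ''   -- e.g. a newly added filter
GROUP BY user_id
```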

There are constraints: this only works on existing MVs, and it's incompatible with changes that require a new target table. If the deployment engine detects that the target datasource needs recreation, the ALTER is rejected with a clear error.

### Forcing a full rebuild

Since ALTER operations reuse the same table, they are applied only when promoting the deployment to live. This means ALTER changes can't be tested in a staging deployment. If you need staging coverage, you can force the full table rebuild by adding a `FORWARD_QUERY` to the datasource, as in the example below. As always, the recommended place to test is your development environment (a Tinybird Branch or Tinybird Local) and CI.

```sql
SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `user_id` String `json:$.user_id`,
    `event_type` String `json:$.event_type`

ENGINE MergeTree
ENGINE_PARTITION_KEY toYYYYMM(timestamp)
ENGINE_SORTING_KEY timestamp

FORWARD_QUERY >
    SELECT *
```

## The trajectory

We started with the safest possible approach: rebuild everything, every time. Then we got smarter about *what* to rebuild, tracing the lineage graph to skip tables that hadn't changed. Now we're skipping the rebuild entirely when the change is simple enough for ClickHouse® to handle in place.

The ALTER path covers columns, TTLs, and indexes. That's the majority of schema changes most teams make day to day. For everything else, the full rebuild is still there, and it's still correct.

Schema changes already work like code changes in Tinybird. Edit a file, deploy, done. Now the common ones are just a lot faster.
