Manual deployment

You can deploy your data projects to Tinybird Cloud directly from the command line using the Tinybird CLI.

By default, CLI commands run against Tinybird Local. Add the --cloud flag to run them against your Tinybird Cloud workspace instead.

What is a deployment?

Deployments are versions of your project resources and data running on local or cloud infrastructure.

There are two types of deployments:

  • Staging deployments: Deployments you can use to validate your changes. You access them using the --staging flag.
  • Live deployments: Deployments that make your changes available to your users.

Each type can be deployed to Tinybird Local (--local) or Tinybird Cloud (--cloud).

Deployment status

Deployments have the following statuses:

  • In progress: The deployment is being created. Use --wait to block until it finishes.
  • Live: The deployment is active and has been promoted from staging.
  • Staging: The deployment is active in staging. Use --staging to access it.
  • Failed: The deployment failed. Try tb deploy --check to debug the issue.
  • Deleted: The deployment was deleted as a result of creating new deployments.

Deploy from the CLI

1. Check the deployment

Before creating a deployment, validate it with the --check flag. This runs a series of checks to ensure the deployment is ready, similar to a dry run.

tb --cloud deployment create --check

The --check flag also validates external connections to S3, Kafka, GCS, and databases referenced through table functions. To make these checks pass against Tinybird Local, set the connection secrets with tb secret set, and start the container with tb local start --use-aws-creds for S3 connections.

2. Create a staging deployment

Create a new staging deployment in Tinybird Cloud. Pass the --wait flag to wait for the deployment to finish:

tb --cloud deployment create --wait

To run commands against the staging deployment, use the --staging flag. For example:

tb --staging --cloud endpoint ls

3. Promote to live

When the staging deployment is ready, promote it to a live deployment in Tinybird Cloud:

tb --cloud deployment promote
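The check, create, and promote steps can be chained in a small script, for example as a CI job. This is a sketch only: deploy_staging_and_promote is an illustrative name, not a tb command, and it assumes the tb CLI is installed and authenticated.

```shell
# Sketch of the check -> create -> promote sequence as one function.
# deploy_staging_and_promote is our own name, not part of the tb CLI.
deploy_staging_and_promote() {
    # Dry-run checks first; the chain aborts if they fail.
    tb --cloud deployment create --check &&
    # Create the staging deployment and wait for it to finish.
    tb --cloud deployment create --wait &&
    # Promote the staging deployment to live.
    tb --cloud deployment promote
}
```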

To create and promote a deployment in one step, run tb --cloud deploy.

Staging deployments

You can write data to, and read data from, a staging deployment before promoting it to live. This is useful when you've made schema changes that might be incompatible with the current live deployment, like adding new columns.

Automatic data source changes that Tinybird applies with ALTER, such as adding a column or changing a data source TTL, are applied only when the deployment is promoted to live. Those changes aren't available in staging deployments.

Writing to staging deployments

You can use the Events API to write directly to staging deployments through the __tb__min_deployment parameter, which indicates the minimum deployment ID the data should be written to. For example:

curl \
    -H "Authorization: Bearer <import_token>" \
    -d '{"date": "2020-04-05 00:05:38", "city": "Chicago", "new_column": "value"}' \
    'https://<your_host>/v0/events?name=events_test&__tb__min_deployment=5'

In the example, if your current live deployment has ID 4 and you're creating a staging deployment with ID 5, the data is ingested into staging deployment 5 only. This allows you to:

  1. Make schema changes in a staging deployment.
  2. Ingest data compatible with the new schema.
  3. Validate the changes work as expected.
  4. Promote the deployment to live when ready.

Without the parameter, data would be rejected if it doesn't match the schema of the current live deployment.

To get the deployment ID, run tb deployment ls.
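As a concrete sketch, the Events API URL from the example above can be assembled in shell. The host, data source name, and deployment ID below are illustrative placeholders; substitute your own values:

```shell
# Placeholders: replace with your host, data source name, and the
# deployment ID reported by tb deployment ls.
TB_HOST="api.tinybird.co"
DATASOURCE="events_test"
DEPLOYMENT_ID=5

# __tb__min_deployment routes the rows to deployment 5 and later only.
EVENTS_URL="https://${TB_HOST}/v0/events?name=${DATASOURCE}&__tb__min_deployment=${DEPLOYMENT_ID}"
echo "$EVENTS_URL"
```

You would then pass this URL to curl together with your token, as in the example above.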

Reading from staging deployments

You can query data from a staging deployment using pipe endpoints. To access a staging endpoint, add the __tb__deployment parameter to your API request:

curl \
    -H "Authorization: Bearer <query_token>" \
    'https://<your_host>/v0/pipes/my_endpoint?__tb__deployment=5'

This allows you to:

  1. Test your endpoints with the new schema changes.
  2. Validate query results before promoting to live.
  3. Ensure your application works correctly with the updated data structure.

To get the deployment ID, run tb deployment ls.
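The staging endpoint URL can be built the same way. Again, the host, pipe name, and deployment ID are placeholders for illustration:

```shell
# Placeholders: replace with your host, endpoint name, and the
# deployment ID reported by tb deployment ls.
TB_HOST="api.tinybird.co"
PIPE="my_endpoint"
DEPLOYMENT_ID=5

# __tb__deployment pins the query to the staging deployment.
PIPE_URL="https://${TB_HOST}/v0/pipes/${PIPE}?__tb__deployment=${DEPLOYMENT_ID}"
echo "$PIPE_URL"
```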

Continuous operation

Once the deployment is promoted to live, you can keep using the same API calls: the __tb__min_deployment and __tb__deployment parameters work both before and after promotion.

For more details on the Events API parameters, see the Events API Reference.

On-demand compute for deployment populates

When deploying changes that require populating Data Sources, such as creating materialized views or evolving schemas, Tinybird can run these operations on dedicated on-demand instances. This separates them from your main workspace infrastructure.

This isolated compute ensures that populate operations don't impact the performance of your production workloads and vice versa.

Automatic activation

On-demand compute is automatically activated for populate operations during deployment when they exceed certain thresholds:

  • Data size exceeds 50GB, or
  • Row count exceeds 100 million rows

When either threshold is met and the deployment involves a data migration, Tinybird runs the populate on dedicated on-demand instances.

On-demand compute for populates uses 64-core instances. You can request smaller instances for less resource-intensive workloads. Contact Tinybird support for configuration options.

Pricing

On-demand compute is billed based on the actual compute time used, measured in credits per core per minute. Pricing varies by region:

Region                          Price (credits per core per minute)
aws-us-east-1                   0.0029
aws-us-west-2                   0.0029
aws-eu-west-1                   0.0032
aws-eu-central-1                0.0035
aws-ap-east-1                   0.004
gcp-us-east4                    0.0027
gcp-europe-west3                0.0031
gcp-europe-west2                0.0031
gcp-northamerica-northeast2     0.0027

Example calculation:

A 2-hour populate operation during deployment in AWS US East 1:

  • Duration: 2 hours = 120 minutes
  • Instance: 64 cores
  • Cost: 0.0029 credits × 120 minutes × 64 cores = 22.272 ≈ 22.3 credits
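The arithmetic above can be checked with a one-liner; the rate used here is the aws-us-east-1 price from the table:

```shell
# cost = rate (credits per core per minute) * minutes * cores
RATE=0.0029
MINUTES=120
CORES=64

COST="$(awk -v r="$RATE" -v m="$MINUTES" -v c="$CORES" \
    'BEGIN { printf "%.1f", r * m * c }')"
echo "${COST} credits"
# 0.0029 * 120 * 64 = 22.272, which rounds to 22.3 credits
```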

The consumed credits are automatically reflected on your monthly invoice. Smaller instances can be configured for your workspace depending on your specific populate requirements.