---
title: Deployments
meta:
  description: "Deploy your data project to Tinybird."
---

# Deployments in Tinybird

Changing state in data infrastructure can be complex. Each state transition must ensure data integrity and consistency.

Tinybird deployments simplify this process by providing robust mechanisms for managing state changes, allowing you to validate and push updates seamlessly while minimizing the risk of data conflicts or loss.

## What is a deployment?

Deployments are versions of your project resources and data running on local or cloud infrastructure.

## Types of deployments

There are two types of deployments:

- Staging deployments: Deployments you can use to validate your changes. You access them using the `--staging` flag.
- Live deployments: Deployments that make your changes available to your users.

Each type can be deployed to Tinybird Local (`--local`) or Tinybird Cloud (`--cloud`).
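For example, a minimal sketch of targeting each environment (the global flag placement shown here is an assumption; check `tb --help` for the exact syntax in your CLI version):

```shell
# Deploy to the Tinybird Local container
tb --local deploy

# Deploy to Tinybird Cloud; --wait blocks until the deployment finishes
tb --cloud deploy --wait
```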

## Deployment status

Deployments have the following statuses:

- `In progress`: The deployment is in progress. Use `--wait` to wait for it to finish.
- `Live`: The deployment is active and has been promoted from staging.
- `Staging`: The deployment is active in staging. Use `--staging` to access it.
- `Failed`: The deployment failed. Try `tb deploy --check` to debug the issue.
- `Deleted`: The deployment was deleted as a result of creating new deployments.

## Deployment methods

The following deployment methods are available:

- [CLI](/forward/test-and-deploy/deployments/cli).
- [CI/CD](/forward/test-and-deploy/deployments/cicd).

{% callout type="info" %}
If your project was initialized with `tb init --dev-mode local` or `tb init --dev-mode branch`, `tb deploy` without explicit environment flags deploys to your cloud main workspace.
{% /callout %}

## Staging deployments

You can write data to, and read data from, a staging deployment before promoting it to live. This is useful when you've made schema changes
that might be incompatible with the current live deployment, like adding new columns.

{% callout type="info" %}
Automatic data source changes that Tinybird applies with `ALTER`, such as adding a column or changing a data source TTL, are applied only when the deployment is promoted to live. Those changes aren't available in staging deployments.
{% /callout %}

### Writing to staging deployments

You can use the [Events API](/forward/get-data-in/events-api) to write directly to staging deployments through the `__tb__min_deployment` parameter,
which indicates the target deployment ID. For example:

```shell
curl \
    -H "Authorization: Bearer <import_token>" \
    -d '{"date": "2020-04-05 00:05:38", "city": "Chicago", "new_column": "value"}' \
    '{% user("apiHost") %}/v0/events?name=events_test&__tb__min_deployment=5'
```

In the example, if the ID of your current live deployment is 4 and you're creating a deployment with an ID of 5, the data is ingested only into staging deployment 5.
This allows you to:

1. Make schema changes in a staging deployment.
2. Ingest data compatible with the new schema.
3. Validate the changes work as expected.
4. Promote the deployment to live when ready.

Without the parameter, data would be rejected if it doesn't match the schema of the current live deployment.

{% callout type="tip" %}
To get the deployment ID, run `tb deployment ls`.
{% /callout %}
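Putting the steps together, a typical staging write flow looks like the following sketch. It assumes `tb deploy` creates a staging deployment before promotion, and the deployment ID `5` and `<import_token>` placeholder are illustrative:

```shell
# 1. Deploy the schema change, creating a staging deployment
tb --cloud deploy

# 2. Look up the new deployment's ID (assumed here to be 5)
tb deployment ls

# 3. Send events matching the new schema to the staging deployment only
curl \
    -H "Authorization: Bearer <import_token>" \
    -d '{"date": "2020-04-05 00:05:38", "city": "Chicago", "new_column": "value"}' \
    '{% user("apiHost") %}/v0/events?name=events_test&__tb__min_deployment=5'
```

Once validation passes, promote the deployment to live; the same call keeps working after promotion.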

### Reading from staging deployments

You can query data from a staging deployment using [pipe endpoints](/forward/work-with-data/publish-data/endpoints). To access a staging endpoint, add the `__tb__deployment` parameter to your API request:

```shell
curl \
    -H "Authorization: Bearer <query_token>" \
    '{% user("apiHost") %}/v0/pipes/my_endpoint?__tb__deployment=5'
```

This allows you to:

1. Test your endpoints with the new schema changes.
2. Validate query results before promoting to live.
3. Ensure your application works correctly with the updated data structure.

{% callout type="tip" %}
To get the deployment ID, run `tb deployment ls`.
{% /callout %}

### Continuous operation

Once the deployment is promoted to live, you can continue using the same API calls. In the previous example, calls using the `__tb__min_deployment=5`
or `__tb__deployment=5` parameters will keep working without interruption. The parameters ensure compatibility both before and after promotion.

For more details on the Events API parameters, see the [Events API Reference](/api-reference/events-api).

## On-demand compute for deployment populates

When deploying changes that require populating data sources, such as creating materialized views or evolving data source schemas, Tinybird offers compute-compute separation through on-demand instances. This feature allows you to run populate operations on dedicated, isolated compute resources, separate from your main workspace infrastructure.

This isolated compute ensures that populate operations don't impact the performance of your production workloads and vice versa.

### Automatic activation

On-demand compute is automatically activated for populate operations during deployment when they exceed certain thresholds:

- Data size exceeds 50GB, or
- Row count exceeds 100 million rows

When a deployment includes a data migration operation that meets either threshold, Tinybird automatically runs the populate on dedicated on-demand instances.

{% callout type="info" %}
On-demand compute for populates uses 64-core instances. You can request smaller instances for less resource-intensive workloads. Contact Tinybird support for configuration options.
{% /callout %}

### Pricing

On-demand compute is billed based on the actual compute time used, measured in credits per core per minute. Pricing varies by region:

{% table %}
* Region
* Price (Credits per core per minute)
---
* aws-us-east-1
* 0.0029
---
* aws-us-west-2
* 0.0029
---
* aws-eu-west-1
* 0.0032
---
* aws-eu-central-1
* 0.0035
---
* aws-ap-east-1
* 0.004
---
* aws-ap-southeast-2
* 0.0036
---
* gcp-us-east4
* 0.0027
---
* gcp-europe-west3
* 0.0031
---
* gcp-europe-west2
* 0.0031
---
* gcp-northamerica-northeast2
* 0.0027
{% /table %}

**Example calculations**:

A 2-hour populate operation during deployment in AWS US East 1:
- Duration: 2 hours = 120 minutes
- Instance: 64 cores
- Cost: 0.0029 credits/core/minute × 120 minutes × 64 cores ≈ **22.3 credits**

The credits used are added automatically to your monthly invoice. You can choose smaller instances depending on your specific populate requirements.
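The arithmetic above can be checked with a quick one-liner (values from the `aws-us-east-1` row; `awk` is used here purely for the calculation):

```shell
# credits = rate (credits/core/minute) × minutes × cores
awk 'BEGIN { printf "%.1f credits\n", 0.0029 * 120 * 64 }'
# prints "22.3 credits"
```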

## Next steps

- See how to [deploy your project using the CLI](/forward/test-and-deploy/deployments/cli).
- See how to [deploy your project using CI/CD](/forward/test-and-deploy/deployments/cicd).
