Migrate from Tinybird Classic¶
Tinybird Forward introduces a new way of working with your data projects, with changes to APIs and CLI that may be incompatible with Classic. If you're starting a new project, see the Get started guide.
Considerations before migrating¶
Before migrating your workspace from Tinybird Classic, understand these key differences in Forward:
- Development happens locally using the Tinybird Local container, not in the UI.
- Before starting the migration, remove all branches except the main one. You can do this from the UI or the CLI (see the sketch after the table below).
- Contact the Tinybird support team to remove any existing rollback releases. Only the live release must remain before you can proceed with the migration.
- The Tinybird support team will need to enable a feature flag to complete the migration to Forward.
- The following features have limitations or require changes:
| Feature | Status | Solution/Alternative |
|---|---|---|
| DynamoDB connector | Not supported | No alternative available yet. Pause migration if you depend on DynamoDB connectors. |
| BI Connector | Not supported | Use the ClickHouse HTTP Interface instead. Most BI tools support ClickHouse HTTP connections. |
| Shared data sources | Partially supported | Data source sharing is supported, but you cannot create Materialized Views from shared data sources in the destination workspace. Create Materialized Views in the source workspace instead. |
| Include files | Not supported | Use tb secret for connector credentials and generic pipes to reuse query logic. See Replace include files with secrets for migration steps. |
| VERSION tag in datafiles | Not supported | Remove any VERSION tags from your datafiles before migrating. |
| CI/CD workflows | Different commands | Forward uses different CLI commands. See CI/CD for details. |
| Testing strategy | Different approach | Regression tests and data quality tests are not supported in Forward. Fixture tests have been enhanced for easier test creation and management. See Test your project for details. |
| Resource-scoped tokens with :sql_filter | Not supported | Remove all tokens using the :sql_filter suffix (e.g., DATASOURCES:READ:datasource_name:sql_filter) before migrating. Use JWTs instead. |
| AWS External IDs (S3 connectors/Sinks) | Breaking change | External IDs change from workspace ID to connection name. Update AWS Trust Policies before migrating. See External ID changes for AWS integrations for details. |
| TYPE endpoint in .pipe files | Breaking change | Add the TYPE endpoint parameter to .pipe files that you want to publish as API endpoints. See Add TYPE endpoint to your .pipe files for details. |
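As referenced above, one way to clean up branches is with the Classic CLI. This is a minimal sketch, assuming the Classic tb branch subcommands and using a placeholder branch name:
# List branches in your Classic workspace
uvx --from tinybird-cli@latest tb branch ls

# Remove every branch that is not the main one (my_feature_branch is a placeholder)
uvx --from tinybird-cli@latest tb branch rm my_feature_branch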
If these changes work for your use case, continue reading to learn how to migrate.
Migration is permanent and cannot be reversed. After deploying with Forward, you cannot switch your workspace back to Classic.
External ID changes for AWS integrations¶
If you use AWS integrations (S3 connectors or S3 Sinks), you must update your AWS Trust Policies before migrating to Forward.
In Classic, Tinybird uses the workspace ID as the seed for generating External IDs, while in Forward it uses the connection name. This means the same connection will have a different External ID after migration.
Update AWS Trust Policy¶
To get the new External ID for your connection, access:
https://<your_host>/v0/integrations/s3/policies/trust-policy
  ?external_id_seed={CONNECTION_NAME}   # Replace with your connection name
  &token={YOUR_ADMIN_TOKEN}             # Replace with your admin token
This returns a Trust Policy with the new External ID. Add this new External ID to your existing Trust Policy's sts:ExternalId array to maintain access during and after migration.
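For example, you can fetch the Trust Policy with curl. This is a sketch that assumes a hypothetical connection name and your admin token exported as an environment variable:
# my_s3_connection is a placeholder; use your actual connection name
curl -s "https://<your_host>/v0/integrations/s3/policies/trust-policy?external_id_seed=my_s3_connection&token=$TB_ADMIN_TOKEN"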
Additional S3 Sinks permission¶
If you use S3 Sinks, add the s3:GetBucketLocation permission to your AWS Access Policy. This permission allows connections to work with buckets across multiple regions without specifying a region when creating the connection, making multi-region deployments more flexible.
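As a reference, a policy statement granting this permission might look like the sketch below. The bucket name is a placeholder; merge the statement into your existing Access Policy rather than replacing it.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-sink-bucket"
    }
  ]
}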
Migrate your workspace¶
Install the Tinybird Forward CLI¶
Run the following command to install the Tinybird Forward CLI and the Tinybird Local container:
curl https://tinybird.co | sh
See install Tinybird Forward for more information.
Managing CLI Versions: Having both Tinybird Classic and Forward CLIs installed can cause version conflicts since both use the tb command. To avoid these conflicts, consider:
- Using the uv Python package manager to keep both CLIs completely isolated (recommended):
# For Classic CLI
uvx --from tinybird-cli@latest tb

# For Forward CLI
uvx --from tinybird@latest tb
- Creating aliases in your shell configuration:
# Add to .bashrc or .zshrc
alias tb-classic="path/to/classic/tb"
alias tb-forward="path/to/forward/tb"
- Using separate virtual environments for each CLI version.
This ensures you use the correct CLI version for each operation during migration.
The following steps use the uv Python package manager.
Authenticate to your workspace¶
Authenticate to your workspace using the Classic CLI:
uvx --from tinybird-cli@latest tb auth --interactive
Follow the prompts to complete authentication.
Pull your project¶
If you already have the latest version of your datafiles locally (e.g. from your Git repo), skip to the next step.
If you don't have your datafiles locally, pull them from Tinybird using the Forward CLI:
uvx --from tinybird@latest tb --cloud pull
Check deployment compatibility¶
Validate your project's compatibility with the Forward CLI:
uvx --from tinybird@latest tb --cloud deploy --check
You should see:
* No changes to be deployed
* No changes in tokens to be deployed
If you encounter any errors, it's recommended to fix them in your Classic workspace so you can have a "clean" first Forward deployment. See common migration errors for information about common errors and fixes.
Fix all of the errors, repull your workspace (if necessary), and rerun the deployment check until there are no changes detected.
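A typical fix-and-recheck loop uses the same commands shown above:
# Repull your datafiles if you made fixes in the Classic workspace
uvx --from tinybird@latest tb --cloud pull

# Rerun the compatibility check
uvx --from tinybird@latest tb --cloud deploy --check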
Contact support to enable the Forward feature flag¶
Once your project passes the compatibility check, contact Tinybird support (support@tinybird.co) to enable the Forward feature flag for your workspace.
Trigger a deployment¶
Once the feature flag is enabled, it's time to trigger a deployment.
To create a simple first deployment, generate a dummy endpoint as the only change:
forward_dummy_endpoint.pipe
NODE n
SQL >
SELECT 'forward'
TYPE endpoint
There are two ways to deploy your project:
Option 1: CI/CD (recommended)¶
In an empty directory outside of your existing project, generate default CI/CD workflows by running the following command:
uvx --from tinybird@latest tb create
tb create creates the scaffolding for a new project, including the GitHub/GitLab YAML files. Review the workflows, edit them as desired, and add the files to the root of your project.
Finally, trigger the deployment by committing your project to Git and creating a merge/pull request.
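For example, using standard Git commands with a placeholder branch name:
git checkout -b migrate-to-forward
git add .
git commit -m "Migrate to Tinybird Forward"
git push -u origin migrate-to-forward
Opening the merge/pull request from this branch triggers the workflow that runs the deployment.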
Option 2: CLI¶
If you don't have CI/CD configured, you can deploy manually:
uvx --from tinybird@latest tb --cloud deploy
Open the project in Tinybird Cloud¶
After the deployment succeeds, open the project in Tinybird Cloud by running the following command:
uvx --from tinybird@latest tb --cloud open
The migration is complete! Your project will continue working as expected; you do not need to change your tokens, endpoint URLs, or anything else.
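For example, an existing endpoint keeps responding at the same URL with the same token. This is a sketch with a placeholder pipe name and token; api.tinybird.co is the default API host, so use your region's host if it differs:
curl "https://api.tinybird.co/v0/pipes/my_endpoint.json?token=$TB_TOKEN"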
Common migration errors¶
Common errors and changes include (but are not limited to):
Missing connection files¶
In Forward, .connection files are used to store your connector details.
You need to create .connection files to enable your connections to Kafka, S3, or GCS. If you pulled your datafiles with the CLI, empty .connection files were created for you, but you still need to fill in the connection details.
See Connectors for more information about the syntax.
Kafka settings have been deprecated¶
Some settings in the Kafka connector have been deprecated. You need to update your Kafka .connection file to use the most up-to-date Kafka settings.
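As an illustration only, a Kafka .connection file might look like the sketch below. The exact setting names may differ from this assumption, so check the Connectors documentation for the current syntax. Secrets are referenced with tb_secret, as described in the next section.
my_kafka.connection
TYPE kafka
KAFKA_BOOTSTRAP_SERVERS {{ tb_secret("KAFKA_SERVERS") }}
KAFKA_KEY {{ tb_secret("KAFKA_KEY") }}
KAFKA_SECRET {{ tb_secret("KAFKA_SECRET") }}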
Replace include files with secrets¶
Include files are not supported in Forward. The fix depends on your use of include files:
- If you use include files to store secrets, use tb secret to set secrets in your local and cloud environments (see the sketch after the examples below).
- If you use include files to reuse query logic, you can create generic pipes and reference them in your endpoint pipes. For example:
reusable_filters.pipe
NODE apply_params
SQL >
%
SELECT * FROM my_datasource
WHERE
tenant_id = {{ String(tenant) }}
AND date BETWEEN {{ Date(start_date) }} AND {{ Date(end_date) }}
my_endpoint.pipe
NODE endpoint
SQL >
%
SELECT * FROM reusable_filters
TYPE endpoint
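For the secrets case referenced above, here is a minimal sketch of setting the same secret in both environments, assuming the tb secret set subcommand and a placeholder secret name and value:
# Set the secret in Tinybird Local (development)
uvx --from tinybird@latest tb secret set KAFKA_KEY "local-value"

# Set the secret in Tinybird Cloud
uvx --from tinybird@latest tb --cloud secret set KAFKA_KEY "production-value"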
Add TYPE endpoint to your .pipe files¶
You need to add TYPE endpoint to your .pipe files so they can be published as API endpoints.
If you omit the TYPE instruction, the pipe will be a generic pipe that is not publicly exposed.
example.pipe
NODE my_node
SQL >
SELECT * FROM my_datasource
TYPE endpoint
Next steps¶
- Learn about working with Forward in the Forward documentation.