---
title: "dbt in real-time"
excerpt: "dbt in real-time transforms your batch models into streaming pipelines. Same SQL, completely different performance profile."
authors: "Javi Santana"
categories: "Scalable Analytics Architecture"
createdOn: "2025-04-24 00:00:00"
publishedOn: "2025-04-24 00:00:00"
updatedOn: "2025-04-24 00:00:00"
status: "published"
---

<p>If you're in the data world, or you were 10 years ago, you know that <a href="https://www.getdbt.com/" rel="noreferrer">dbt</a> really was a "game changer" (I hate that phrase and 99% of the time it's not true, but with dbt it was). dbt gave data engineers and analysts a better way to organize and process data in the warehouse. It started as a consultancy, and now it's a billion-dollar startup because so many data engineers reach for dbt to build transformations in the warehouse.</p><p>Tinybird is a lot like dbt, but for a totally different use case. dbt is for building batch analytics in the data warehouse. Tinybird is for building real-time analytics for applications.</p><p>This blog post should be useful for people already familiar with dbt who are exploring real-time analytics and/or low-latency API use cases, but it will also help if you're looking for a better way to keep your data projects well organized.</p><h2 id="why-bother-migrating-from-dbt-to-tb">Why bother migrating from dbt to tb?</h2><p>Tinybird isn't just "dbt but real-time." Tinybird has a different philosophy and is built around a different core engine optimized for speed and freshness. <strong>Tinybird is an engineered system, not just engineered parts assembled into a system.</strong></p><p>Some specific reasons you might want to migrate…</p><h3 id="built-for-real-time-processing">Built for real-time processing</h3><p>dbt was designed mostly for batch processing. You can indeed run real-time workloads in dbt if the database you use under the hood supports it, but in Tinybird everything is designed to work in real time. There is also batch processing in Tinybird if you need it, but, to be honest, it's not as complete as dbt's (and it isn't meant to be).</p><h3 id="apis-are-first-class-citizens">APIs are first-class citizens</h3><p>dbt models data <em>for</em> something else – a BI tool or another process. 
Building an API usually means adding <em>another</em> layer: a Python service (Flask/FastAPI), maybe another database cache, all querying the warehouse where dbt ran. More moving parts, more latency, more code to manage.</p><p>In Tinybird, pipes <em>are</em> APIs. Any SQL query (pipe node) can be published as a secure, parameterized, monitored REST endpoint with a single command (<code>tb deploy</code>). This radically simplifies building data-intensive applications or features.</p><h3 id="simplifies-the-stack">Simplifies the stack</h3><p>dbt is the master of the "T" in ELT. You still need separate tools for ingestion (E), loading (L), orchestration (Airflow, Dagster, Prefect), API serving, and often specialized monitoring.</p><p>And, if your goal is fresh data powering fast APIs, the typical dbt stack (Kafka -&gt; Flink/Spark -&gt; Warehouse -&gt; dbt -&gt; API Framework -&gt; Monitoring) is complex and expensive.</p><p>Tinybird offers a potentially much leaner alternative; it handles ingestion (Connectors, API), real-time transformation (SQL pipes, materialized views), API publishing, and observability (service data sources) in one workflow, managed via the <code>tb</code> CLI and git. For certain use cases, <strong>this dramatically simplifies the stack</strong>.</p><h3 id="raw-speed">Raw speed</h3><p>In dbt, performance depends entirely on your data warehouse (Snowflake, BigQuery, Redshift, etc.). These are powerful tools, but they're often optimized for broader analytical workloads, not necessarily p99 millisecond API responses.</p><p>Tinybird is built on ClickHouse®. ClickHouse® is <em>fast</em> for the types of analytical queries (filtering, aggregating, time-series) that power dashboards and APIs, especially when data is structured correctly (sorting keys!).</p><h2 id="mapping-dbt-concepts-to-tinybird-a-new-way-of-thinking">Mapping dbt concepts to Tinybird: A new way of thinking</h2><p>Migrating from dbt to Tinybird requires a mental shift. 
Here's a rough translation guide:</p>
<!--kg-card-begin: html-->
<table>
  <thead>
    <tr>
      <th>dbt Concept</th>
      <th>Tinybird Equivalent</th>
      <th>Notes</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>dbt Project</td>
      <td>Tinybird Data Project</td>
      <td>Git-managed folder with configuration files.</td>
    </tr>
    <tr>
      <td><code>sources.yml</code></td>
      <td><code>.datasource</code> file</td>
      <td>Defines schema, ClickHouse® engine, partition/sort keys. <em>Crucial</em> for performance. Can include ingestion config (Kafka, API schema).</td>
    </tr>
    <tr>
      <td>Model (<code>.sql</code> file)</td>
      <td>Pipe (<code>.pipe</code> file) node</td>
      <td>A SQL transformation. Pipes chain nodes. Think <code>stg_*.sql</code> -&gt; <code>intermediate_*.sql</code> -&gt; <code>fct_*.sql</code> maps to nodes in one or more <code>.pipe</code> files.</td>
    </tr>
    <tr>
      <td><code>ref('model_name')</code></td>
      <td><code>FROM pipe_name</code></td>
      <td>Referencing upstream dependencies.</td>
    </tr>
    <tr>
      <td><code>source('src', 'tbl')</code></td>
      <td><code>FROM datasource_name</code></td>
      <td>Referencing a base table defined in <code>datasources/</code>.</td>
    </tr>
    <tr>
      <td>Materialization (table, incremental)</td>
      <td>Materialized view (<code>TYPE materialized</code> in pipe)</td>
      <td><em>Key concept.</em> Processes data incrementally on ingest. Typically targets an <code>AggregatingMergeTree</code>.</td>
    </tr>
    <tr>
      <td>Materialization (view)</td>
      <td>Standard pipe node</td>
      <td>Just a query definition, run on demand.</td>
    </tr>
    <tr>
      <td>Materialization (ephemeral)</td>
      <td>Intermediate pipe node</td>
      <td>A node used by others but not directly queryable/materialized.</td>
    </tr>
    <tr>
      <td>Jinja (<code>{{ }}</code>, <code>{% %}</code>)</td>
      <td>Tinybird Template Functions (<code>{{ }}</code>, <code>{% %}</code>)</td>
      <td>Similar syntax, different functions. Primarily used for API endpoint parameters, less for dynamic SQL generation than in dbt.</td>
    </tr>
    <tr>
      <td>dbt Tests</td>
      <td>Tinybird Tests (<code>tb test</code>, <code>.yml</code>)</td>
      <td>Primarily focus on testing API endpoint responses. Data quality is often built into pipes.</td>
    </tr>
    <tr>
      <td><code>dbt run</code>, <code>dbt build</code></td>
      <td><code>tb deploy</code>, materialized views, copy pipes</td>
      <td><code>tb deploy</code> pushes <em>definitions</em>. MVs update automatically. Copy pipes (<code>TYPE COPY</code>) for scheduled batch runs/snapshots.</td>
    </tr>
    <tr>
      <td>dbt DAG</td>
      <td>Implicit via <code>FROM</code> clauses &amp; MVs</td>
      <td>Tinybird manages dependencies based on references.</td>
    </tr>
    <tr>
      <td>Seeds</td>
      <td>Fixtures (<code>fixtures/</code>), <code>tb datasource append</code></td>
      <td>Load static data locally with fixtures, or append via CLI/API.</td>
    </tr>
  </tbody>
</table>
<!--kg-card-end: html-->
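<p>To make the <code>ref()</code>/<code>source()</code> mapping concrete, here's a minimal before/after sketch; the model and column names are invented:</p>

```sql
-- dbt: models/daily_views.sql
SELECT CAST(timestamp AS DATE) AS day, COUNT(*) AS views
FROM {{ ref('stg_pageviews') }}
GROUP BY 1

-- Tinybird: a node in pipes/daily_views.pipe
-- Upstream nodes, pipes, and data sources are referenced by plain name
SELECT toDate(timestamp) AS day, count() AS views
FROM stg_pageviews
GROUP BY day
```

<p>Same query, no templating needed for references: Tinybird resolves the dependency from the <code>FROM</code> clause.</p>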
<p>The biggest shift from dbt to Tinybird? Thinking about materialized views for anything incremental or aggregated, and designing data source schemas (especially <code>ENGINE_SORTING_KEY</code>) for query performance from the start.</p><h2 id="step-by-step-migration-strategy">Step-by-step migration strategy</h2><p>Assume you have the <code>tb</code> CLI installed and logged in (<code>tb login</code>), and you've initialized a project (<code>tb create --folder my_tb_project &amp;&amp; cd my_tb_project</code>).</p><p>Make sure you have Tinybird local running for testing: <code>tb local start</code></p><h3 id="1-migrate-sourcesdatasource">1. Migrate sources -&gt; <code>.datasource</code></h3><p>For each dbt source table needed, create a file like <code>datasources/my_source_table.datasource</code>.</p><p>Some notes:</p><ul><li><strong>Schema</strong>: Translate data types carefully. Tinybird uses ClickHouse® types (e.g., <code>String</code> not <code>VARCHAR</code>, <code>DateTime64</code> not <code>TIMESTAMP</code>). See <a href="https://docs.tinybird.co/sql-reference/data-types"><u>Tinybird Data Types</u></a>.</li><li><strong>Engine &amp; keys</strong>: This is critical. <code>MergeTree</code> is common. <code>ReplacingMergeTree</code> if you need updates based on a key. <code>AggregatingMergeTree</code> for MV targets. Choose <code>ENGINE_PARTITION_KEY</code> (often time-based like <code>toYYYYMM(timestamp_col)</code>) and <code>ENGINE_SORTING_KEY</code> based on common query filters. <em>Don't skip this.</em> Poor sorting keys kill performance.&nbsp;</li><li><strong>Ingestion config</strong>: If Tinybird will ingest data from a connected source (e.g., via Kafka), add the connector settings here. 
If it's populated by another pipe (or via the <a href="https://www.tinybird.co/docs/get-data-in/ingest-apis/events-api"><u>Events API</u></a> / <a href="https://www.tinybird.co/docs/api-reference/datasource-api"><u>Data Sources API</u></a>), you only need the schema and engine.</li></ul><p>An example:</p><h4 id="dbt">dbt</h4>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAIPAQAAAAAAAABBKUqGk9nLKwQ-qcwh-5dLtRYFcdRk75dyA-JC8ilfHGx13h54tQ_cgs9OElkasdfk8IiHbxWSh53G7yx37SglkbDgnLQABJX7mHgfyAFksZ7H5P6WkokCNKzYJ_hHJ2DQ-7EK4W7aucvcqP7K92MeF3BCGbU6t1aqFTqGEqGnk1QsffphZ5dW-8PzPE9NDX8cT7ORMhGis9q96al2YCQF2DAUP__35SAA/embed"></iframe>
<!--kg-card-end: html-->
<h4 id="tinybird">Tinybird</h4>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAALPAQAAAAAAAABBKUqGk9nLKzhQ5fTupyKqpJ56MJd1a8RKEs9XIzYUCOOMMUD9ie2qJwqxqC5rMeeJ7Q-_iCYrfcwQP74HdxaLA4FNzW0_aiqesg8HmVKuasp1EvgMkfVWJCC-Z0PYYSeOnM7YGN2AX5xqsUSVZw7QkNJID3BfxOyhwt-ZL1Sr_DV3shejekTA6mU-bHfCGsksRDwu5P7Lo3zSmUJe9GlUn8_ejNMwUSFJ8wHPmxu5_GnNdockyVqjCEuBs--5l3o7aaEvq7yMAgv7EdcL_G6CpThzAHv_lstA9xiggOxrNP209dPKoh37EG2jPSs0jLn0HEd484Rb4r8xs-qDwt5wPSoz7z7yGm7PqmWlMnd6V4JuPJSUmn-VIzwIAhSkuwl-Hc6W8iJY3CIvu_nwGGnd31cuYz2xos2YNE7jY2p-6MDTLfBj-4DaAA/embed"></iframe>
<!--kg-card-end: html-->
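<p>If the data source is fed by the Kafka connector, the same file carries the connector settings. A hypothetical sketch (the connection, topic, and column names are invented; check the datafiles reference for the exact setting names):</p>

```
# datasources/pageviews.datasource
SCHEMA >
    `timestamp` DateTime64(3) `json:$.timestamp`,
    `user_id` String `json:$.user_id`,
    `url` String `json:$.url`

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "user_id, timestamp"

KAFKA_CONNECTION_NAME "my_kafka"
KAFKA_TOPIC "pageviews"
KAFKA_GROUP_ID "tinybird_pageviews"
```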
<h3 id="2-migrate-modelspipe">2. Migrate models -&gt; <code>.pipe</code></h3><p>Convert dbt <code>.sql</code> files into <code>.pipe</code> files (e.g., <code>pipes/stg_pageviews.pipe</code>).</p><p>Notes:</p><ul><li><strong>Basic Transformations</strong>: A dbt model often becomes a node in a <code>.pipe</code>. Use <code>FROM previous_node</code> or <code>FROM datasource_name</code> or <code>FROM other_pipe</code>.</li><li><strong>SQL Dialect</strong>: Common changes, depending on your current database provider:<ol><li>Date functions: <code>toDate</code>, <code>toStartOfDay</code>, <code>addMinutes</code>, etc.</li><li>JSON: <code>JSONExtractString</code>, <code>JSONExtractInt</code>, etc.</li><li>String functions might differ slightly.</li><li>Check the <a href="https://docs.tinybird.co/sql-reference/"><u>SQL Reference</u></a>. You <em>will</em> spend time here.</li></ol></li><li><strong>Materialized views (the incremental magic)</strong>: If your dbt model is incremental, use a Tinybird materialized view.<ol><li>Define a target <code>.datasource</code> (e.g., <code>datasources/user_daily_summary.datasource</code>) with an appropriate engine (<code>AggregatingMergeTree</code> for sums/counts, <code>ReplacingMergeTree</code> for latest state). Schema should include aggregate state columns (e.g., <code>AggregateFunction(sum)</code>, <code>AggregateFunction(uniq)</code>).</li><li>Create a <code>.pipe</code> file (e.g., <code>materializations/mv_user_daily_summary.pipe</code>) containing the transformation SQL. 
Use aggregate <em>state</em> functions (<code>sumState</code>, <code>uniqState</code>, <code>argMaxState</code>).</li><li>Add <code>TYPE materialized</code> and <code>DATASOURCE target_datasource_name</code> to the final node of this pipe.</li></ol></li><li><strong>Copies:</strong> If you use a pre-aggregated table in dbt (<code>materialized='table'</code>), you should use copy pipes in Tinybird.<ol><li>Define a target <code>.datasource</code> (e.g., <code>datasources/user_daily_summary.datasource</code>) with an appropriate engine (<code>MergeTree</code>, <code>ReplacingMergeTree</code>...).</li><li>Create a <code>.pipe</code> file (e.g., <code>copies/daily_summary.pipe</code>) containing the transformation SQL.</li><li>Add <code>TYPE copy</code> and <code>DATASOURCE target_datasource_name</code> to the final node of this pipe.</li><li>Optionally set the <code>COPY_SCHEDULE</code> and <code>COPY_MODE</code> (append or replace).</li></ol></li></ul><p>Example:</p><h4 id="dbt-1">dbt</h4>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJTAQAAAAAAAABBKUqGk9nLKzhNi_shp7vLEiMj-XjP9ueepTtvkaL4ydfVoYE4pNSleStoVKVVx3Fb-lT-qbDCivkT9irnSaxfpXUqvp7AndMjAzKLK4LtvT-v2m1dhWsd7sJlG9gY7Oa0FB8dqGfJUVDSQ2Y7BDqgVyYcilyDrZYNADIU4xBJqTSPTDFAslwsoVHWmsVB6kPPXhyIeDDDGggUvxbS_Pk5rCYvbDj1UQm6EZotVhZ75ZbDxgLG47jI5MaDdtgIsoHZE_1FXCZcq4yg6nDo_QR-T4NudqPABFjO6ZBov323EEQo9EgkIZORx_J4qDSzVf85PlYA/embed"></iframe>
<!--kg-card-end: html-->
<h4 id="tinybird-1">Tinybird</h4>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJ-AQAAAAAAAABBKUqGk9nLKwQ7yuBMaGSxpcJ0jXW8jyCnalExoqY1vmatUUWAGRK7UgXkWDq92UHkGCX8MxY6DP_RuMwWlMg3v4Hch7wZ6LqJ22tiJ4Zrv0mRkfnnHlOLQAsnz-dmhyRV1wuAOU75G-QoLhC7H3YXaaICQPAarnDgihiLYWi76qqzk_tZijozLGuvR236DdeVOlBnOd2pghQ7QVEKMiMAJwBCw3H8bKzPgTiB4U6MuSBv7ufHI5Y6NkHB5Xbr4Tkpp7svE3RKIX4E7_puI1rR3gj3EXMCLitDIkmzdAwFOwYmR7KOcCKt5g51SChM_4G0yMl_wwugGSaKmCTWX19p_V6l0uuqplsUEaHff__3sUcC/embed"></iframe>
<!--kg-card-end: html-->
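<p>To give a feel for the dialect gap, here are a few illustrative rewrites, assuming a Snowflake-flavored warehouse on the dbt side (verify each against the SQL reference):</p>

```sql
-- Snowflake-ish                        -- ClickHouse / Tinybird
DATE_TRUNC('day', ts)                   -- toStartOfDay(ts)
ts::DATE                                -- toDate(ts)
DATEADD('minute', 5, ts)                -- addMinutes(ts, 5)
payload:user.id::STRING                 -- JSONExtractString(payload, 'user', 'id')
```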
<h4 id="dbt-incremental-concept">dbt (Incremental concept)</h4>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAALmAQAAAAAAAABBKUqGk9nLKzhdipQGoIIV2ijlt4ZflAxFrrkDGnb9kryv_u4iwt-OKzI_EZsXfo1oV9WyI-zePF_286aMiFtHOsUJpsVpZp0NtPUdGtQ4Njd8pjxAPI8sfOWaMgklPAi_wRuGnyJTeUcFOcMeC0Z4fhK4oPsTm75RtpmtiJIl1kSbK3C5ma7FL3kTbmksnLQrn26QNLdGoQd4R_INOvwEAlMlp-fcKQAlH3eR0JvzkK1ICkk3FZRHu-l4OVUUBI4h9K9TPUT1XicUY6InMuuvezP42gDLqZHDA5qvzJNMv8dKUh5h8DyqTV8p-Yv3mGFpyauH9ws29WB8UEDPn8d-l_WsFOma_hQN84Eq_oToXq1MXdbb6GRa52T3vPxJ7X3h97XhQTNniaQlryW-avh_OH7PIAevfPdWQ2ZjBEPeuNcEdssxuTnYUeOv_e_6cg/embed"></iframe>
<!--kg-card-end: html-->
<h4 id="tinybird-materialized-view-approach">Tinybird (Materialized view approach)</h4><p><strong>Target datasource:</strong></p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJiAQAAAAAAAABBKUqGk9nLKzhWyn2_DVCqpJ56MJd1bI-z6z0IGuOx5-nB1EcBCc-J9sZSt8BEmmrFbaMY1Q7rtByYHUaQudEE9KDZnhBOqS_lcDbw2Te0j235UGjCPrPdstLBCkDUN4OxGso2yp1KtliBJosp_CON2jMWdiuBEdtekyumg68O-4oPte8sTtbE_2zVQiaph4F9Qn4VVwAJmFofys7z5jkWlZAE_SwmnX6D7KlF4vxI2ElrFN6S0Cmp1qYKTwL6cc1KtD_iQ9KwApUgOWsnaIot_wXRrTb9dPnP97LDK_QzzD01vAjlIDG7mhEu0p66kHeywnlb1f1wudr88f6cs4E/embed"></iframe>
<!--kg-card-end: html-->
<p><strong>Materializing pipe:</strong></p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAALqAQAAAAAAAABBKUqGk9nLKzhYfYc7c2vYHQuiT2yyKREqpLlcXo-MTA1py1Qb0YlDUd3jit_dIXhMsKpX6CG1OadCHsRclWbaRfEILCGSZZcU7Ggn_kE_WaF_hIpRGhKKMTwxjNQ8M7jOhUGUYf_F_8BkwOqjDUV5WnlGfJQAd89srfAFQtmrzkPzirVjHHuw-kESmZBNY8A5p7PdFRTe70xUmM2AziHunfJXAGmfwWDrBoa-Uq9RJ34gMVH6U3yh7j5KN_c-FNjYqD3zeUyR0rZuxjqO5B-QTUzqnd585LUhX8kRqPEPQwRVY9GlbULlLEndfFtqpoCp7Ucw-TT0anGVK56-p8A_3UI5P8iLyrOX2lVAIqjZxJuDb5Z9L4AhtPYm9q7JrVHdj2HU7u5P7b5CoaSGCSL16SxiEWUKwUru6eXt_ej-xS5CadqQRYN_mXUpfvHDyCbqvP_SYwIA/embed"></iframe>
<!--kg-card-end: html-->
<p><strong>Querying the MV:</strong></p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJKAQAAAAAAAABBKUqGk9nLKw2_7uBMaGSwMIyKknNNzteo7tlYjn7r-9BDcQRWrb-_uzzNiLUVSaD85gDMEvrhizRFUjrFMfxnQKL16X1vDZnM6Wox9zPaM7pKOghaspEbAwlVqlR8DTWm95KDmac7Bswcki-EXOARuyZ4ZgqYyNuY4-wlRm1kiX4lcBl5nSA9rHifUbob9Hji9Nv1NtIFnHWkqlawVPs83LnHg-KAJbM9UtcOjIpeyif4xHBE2RISZoIR5yLi5UqhlPa_4_DvV3qRDAoMVZ3w1wsWGLfGfcKbYIt-fkSBBk2rOFb0V8OFtbf_3o3LoA/embed"></iframe>
<!--kg-card-end: html-->
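<p>For the copy-pipe path, the datafile has the same shape as a materialization, with a schedule instead of ingest-time triggering. A hypothetical sketch (names invented; see the copy pipes docs for the exact keys):</p>

```
# copies/daily_summary.pipe
NODE daily_summary
SQL >
    SELECT user_id, toDate(timestamp) AS day, count() AS pageviews
    FROM pageviews
    GROUP BY user_id, day

TYPE copy
DATASOURCE user_daily_summary
COPY_SCHEDULE "0 * * * *"
COPY_MODE "append"
```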
<h3 id="3-publish-apistype-endpoint">3. Publish APIs -&gt; <code>TYPE endpoint</code></h3><p>This is often the goal. Make the final node of your query pipe an endpoint:</p><ul><li>Add <code>TYPE endpoint</code>.</li><li>Define URL parameters using typed template functions, e.g., <code>{{ String(param_name, default_value) }}</code> or <code>{{ Int32(param_name, default_value) }}</code>.</li></ul>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAALLAQAAAAAAAABBKUqGk9nLKzhPN2fCVzXwKssbHnC_SMHrOHisdZYL7IPYPPxUq3ccLeyOxjrMXIY6K4ic9F9IcnaiX7x3OrMBK7DOj0H5BJh2jSQvt8DUNYbyOKYbJejOcDNZgThbmBvfUZRiREelOlW1bTLR0MG87tMW5lSz7hE_NRWXwdhR1hl0mutJ0wXeeRyIxsjr7pyPB2PpvprcXi4E0YdmV1-CI35ht9COmX_IEm9XpsDVLjoN8sunPX0b_oXKENwYd4Bviwkh8wHboDLC0GhCkZEoJ2Iy9yfUhA-kt8DcbNdoeQ_YTaU7n7srEo8kgop7xHK59sHae3bTjE9w1eTXiY5XvYQgb9__DnYsLZ9Qywm_mT_ChCT6PEy90LauSfoRxKRlWzL-IQfZpOUt2_HNzX__4QNwbQ/embed"></iframe>
<!--kg-card-end: html-->
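<p>Once deployed, the endpoint is just HTTPS. A hypothetical call (pipe name, parameters, and token are placeholders):</p>

```bash
curl "https://api.tinybird.co/v0/pipes/top_pages.json?date_from=2025-04-01&token=$TB_TOKEN"
```

<p>The JSON response includes the rows plus query statistics (elapsed time, rows read) you can feed into monitoring.</p>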
<p>Deploy (<code>tb --cloud deploy</code>) and your API is live.</p><h3 id="4-migrate-teststb-test">4. Migrate tests -&gt; <code>tb test</code></h3><p>Translate dbt tests to Tinybird tests:</p><ul><li><strong>Endpoint tests (most common)</strong>: If your pipe ends in <code>TYPE endpoint</code>, use <code>tb test create &lt;pipe_name&gt;</code> to create a <code>.yml</code> test file in <code>tests/</code>. Run the endpoint with parameters (e.g., via <code>curl</code> or <code>tb endpoint</code>) and use <code>tb test update &lt;pipe_name&gt;</code> to capture the output as the expected result. See <a href="https://docs.tinybird.co/forward/datafiles/test-files"><u>Test Files</u></a>.</li><li><strong>Data quality checks</strong>: Often embedded directly in the pipe logic. Use <code>throwIf(count() &gt; 0)</code> in a node, or create specific nodes to filter/flag bad data. You can also create dedicated <code>.pipe</code> files that run checks and assert results in a test.</li></ul>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAKAAQAAAAAAAABBKUqGk9nLKvEn6WfZGOBa3wrvnNRL4D4LxpSDEqCDENHN6pmhAOYu8mOBrYinFwSjxYuFQ2IP7M1dKsUZ9b88lda9svIYXI10QTZ2hG12dCD49HAETyXEoYTL2PHD6wcRyDzteb_rYqPCl1tYI6I8A8_JCmN_d3aQEcA1K9e5eV_7S3WH-kbREOdiuTKJD0apzBRziAwgCNAoNvtDMbn-uCFUjJhNdIWN0sXr1EXxnJQAyeleDPl8fI-su7cJKJekG5DIvE9vgeB7XNLFvVrnnggfSLZv2XgVKZnWWidf5fiAROmfwAxfXXsDMU5p_V22cg/embed"></iframe>
<!--kg-card-end: html-->
<h3 id="5-orchestrationmvs-copy-pipes-deployment">5. Orchestration -&gt; MVs, copy pipes, deployment</h3><ul><li><strong>Deployment</strong>: <code>tb deploy</code> pushes the <em>definitions</em> to Tinybird.</li><li><strong>Real-time</strong>: Materialized views handle incremental updates automatically. No external scheduler needed for this continuous flow.</li><li><strong>Scheduled batch</strong>: For jobs that <em>should</em> run periodically (like dbt runs or snapshots), use copy pipes. Add <code>TYPE copy</code> and <code>COPY_SCHEDULE 'cron syntax'</code> (e.g., <code>'0 * * * *'</code> for hourly) to a pipe node. See <a href="https://docs.tinybird.co/forward/work-with-data/copy-pipes"><u>Copy Pipes</u></a>.</li><li><strong>External triggers</strong>: Need more complex logic? Trigger a Tinybird job (an on-demand copy pipe) via its API from Airflow, GitHub Actions, Trigger.dev, etc.</li></ul><h2 id="potential-pitfalls">Potential pitfalls&nbsp;</h2><ul><li><strong>SQL dialect hell</strong>: Budget time for translating functions, especially complex date logic, array/JSON manipulation, or window functions (ClickHouse® support is good, but syntax differs). Test thoroughly.</li><li><strong>Materialized view mindset</strong>: Thinking incrementally is key. Designing the MV target schema (<code>AggregatingMergeTree</code>, states) and the transformation logic takes practice. Debugging MVs can be trickier than batch jobs.</li><li><strong>Sorting key design</strong>: Forgetting to define or choosing poor <code>ENGINE_SORTING_KEY</code> in your <code>.datasource</code> files will lead to slow queries, especially as data grows. This is more a database thing than a framework one, but it’s important to take it into account.</li><li><strong>Complexity creep in pipes</strong>: While pipes allow chaining SQL nodes, overly complex, multi-hundred-line pipes become hard to debug and manage. 
Break things down logically.</li></ul><h3 id="monitoring-is-a-little-bit-different">Monitoring is a little bit different</h3><p>Forget just checking if the <code>dbt run</code> succeeded. In Tinybird, you need to monitor the flow continuously:</p><ul><li><code>datasources_ops_log</code>: Monitor ingestion rates, errors for API/Kafka sources.</li><li><code>pipe_stats_rt</code>: Check endpoint latency (p50, p95, p99), rows read, errors. <em>Essential</em> for API performance.</li><li><code>jobs_log</code>: Monitor scheduled Copy Pipe runs.</li></ul><p>Learn to query these service data sources (<code>FROM tinybird.ds_name</code>) and create endpoints (<a href="https://www.tinybird.co/docs/forward/publish-your-data/api-endpoints/guides/consume-api-endpoints-in-prometheus-format"><u>Prometheus format</u></a> is especially useful here). They are your eyes and ears.</p><h2 id="final-thoughts">Final thoughts</h2><p>Migrating from dbt to Tinybird isn't a simple lift-and-shift. It involves rethinking data flow for real-time and API-centric use cases, learning the SQL nuances, and embracing materialized views.</p><p>But if you have real-time needs, and you want to have everything in the same place, Tinybird is a good alternative/complement to dbt.</p>
