---
title: "clickhouse integration fivetran — 3 Ways to Connect in 2026"
excerpt: "Load data into ClickHouse® from Fivetran via the native ClickHouse® destination, a custom webhook destination, or a Tinybird intermediary. Pick the right clickhouse integration fivetran path."
authors: "Tinybird"
categories: "AI Resources"
createdOn: "2026-05-02 00:00:00"
publishedOn: "2026-05-02 00:00:00"
updatedOn: "2026-05-02 00:00:00"
status: "published"
---

These are the main options for a **clickhouse integration fivetran** setup:

1. Fivetran → ClickHouse® via **Fivetran native ClickHouse® destination connector**
2. Fivetran → ClickHouse® via **Fivetran custom webhook destination**
3. Fivetran → **Tinybird** via S3/GCS intermediate storage and Tinybird connector

**Fivetran** is the most widely adopted **managed ELT platform**, automating data movement from hundreds of sources (Salesforce, Postgres, Stripe, Google Analytics, and more) into analytical destinations. ClickHouse® is a columnar OLAP [database](https://www.oracle.com/database/what-is-database/) that scans billions of rows with sub-second query latency. Connecting the two lets data teams use Fivetran's managed replication to land source data in ClickHouse® for fast analytical queries, dashboards, and APIs.

A **clickhouse integration fivetran** pipeline automates the extraction and loading of source data into ClickHouse® on a managed schedule — no custom ingestion code, no connection maintenance, and automatic schema drift handling on the source side.

Before you pick a path, consider these questions:

- Do you need **fully managed schema sync** (Fivetran handles source schema changes automatically)?
- Does your ClickHouse® instance need to be **publicly accessible**, or is it on a private network?
- Do you also need to **serve loaded data as REST APIs** for applications downstream of the pipeline?

## **Three ways to implement clickhouse integration fivetran**

This section covers the three main integration paths, with configuration details and code for each.

### **Option 1: Fivetran → ClickHouse® — native destination connector**

The most direct approach. Fivetran has a **native ClickHouse® destination connector** that writes extracted data directly into your ClickHouse® instance. Fivetran manages table creation, schema sync, and incremental loading. You configure the destination once in the Fivetran UI and Fivetran handles the rest.

**How it works:** Fivetran connects to ClickHouse® via the HTTP interface. For each synced source table, Fivetran creates a corresponding ClickHouse® table using a `ReplacingMergeTree` engine (for sources with CDC or update support) or a plain `MergeTree` (for append-only sources). Fivetran handles schema drift — if a source adds a column, Fivetran runs the `ALTER TABLE ADD COLUMN` on the ClickHouse® side automatically.
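The drift-handling step can be sketched in plain Python: diff the source columns against the destination table and emit the missing `ALTER TABLE … ADD COLUMN` statements. This is an illustrative sketch of the idea, not Fivetran's actual implementation; the function name and type mapping are hypothetical.

```python
# Illustrative sketch: how a destination connector might reconcile a source
# schema change with an existing ClickHouse table. Hypothetical helper, not
# Fivetran's actual logic.

def schema_drift_ddl(table: str, source_cols: dict, dest_cols: dict) -> list:
    """Return ALTER statements for columns present in the source but
    missing from the destination table."""
    statements = []
    for name, ch_type in source_cols.items():
        if name not in dest_cols:
            # Add new columns as Nullable so historical rows stay valid
            statements.append(
                f"ALTER TABLE {table} ADD COLUMN {name} Nullable({ch_type})"
            )
    return statements

source = {"id": "String", "amount": "Float64", "discount": "Float64"}
dest   = {"id": "String", "amount": "Float64"}

# One ALTER for the new `discount` column
print(schema_drift_ddl("fivetran_dest.salesforce_opportunities", source, dest))
```

Columns dropped from the source are typically left in place on the destination side, which is why explicit column lists in downstream SQL matter (see the failure modes section).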

**Fivetran destination configuration (ClickHouse® connection settings):**

```text
Host:       your-clickhouse-host
Port:       8443
Database:   fivetran_dest
Username:   fivetran_user
Password:   <secret>
SSL:        Enabled
```

**Minimum ClickHouse® user permissions for Fivetran:**

```sql
-- Create a dedicated user for Fivetran
CREATE USER fivetran_user IDENTIFIED BY 'strong_password';

-- Grant necessary permissions
GRANT CREATE DATABASE ON *.* TO fivetran_user;
GRANT CREATE TABLE, DROP TABLE, ALTER, INSERT, SELECT
    ON fivetran_dest.* TO fivetran_user;

-- Fivetran also creates its own metadata database; grant the same
-- table-level rights there
GRANT CREATE TABLE, DROP TABLE, ALTER, INSERT, SELECT
    ON fivetran_metadata.* TO fivetran_user;
```

**What Fivetran creates in ClickHouse® for a source table:**

```sql
-- Fivetran creates tables similar to this for CDC-enabled sources:
CREATE TABLE fivetran_dest.salesforce_opportunities (
    id              String,
    name            String,
    amount          Float64,
    close_date      Date,
    stage           LowCardinality(String),
    account_id      String,
    created_date    DateTime,
    last_modified   DateTime,
    _fivetran_synced DateTime,
    _fivetran_deleted Boolean
)
ENGINE = ReplacingMergeTree(_fivetran_synced)
ORDER BY id;
```

**Querying Fivetran-loaded data (excluding soft-deleted rows):**

```sql
SELECT
    stage,
    count()             AS total_opportunities,
    sum(amount)         AS total_pipeline,
    avg(amount)         AS avg_deal_size
FROM fivetran_dest.salesforce_opportunities FINAL
WHERE _fivetran_deleted = false
  AND close_date >= today() - INTERVAL 90 DAY
GROUP BY stage
ORDER BY total_pipeline DESC
```

Note the `FINAL` modifier — required with `ReplacingMergeTree` to get deduplicated results before ClickHouse® merges happen in the background.
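What `FINAL` computes can be modelled in a few lines of Python: for each `ORDER BY` key, keep only the row with the greatest version-column value. This is a toy model of `ReplacingMergeTree` semantics, not ClickHouse internals.

```python
# Toy model of ReplacingMergeTree(version) deduplication: for each ORDER BY
# key, keep only the row with the greatest version value. This is the result
# SELECT ... FINAL returns even before background merges catch up.

def replacing_final(rows, key="id", version="_fivetran_synced"):
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[version] > latest[k][version]:
            latest[k] = row
    return list(latest.values())

rows = [
    {"id": "opp_1", "amount": 100.0, "_fivetran_synced": "2026-05-01 10:00:00"},
    {"id": "opp_1", "amount": 250.0, "_fivetran_synced": "2026-05-01 12:00:00"},  # later upsert wins
    {"id": "opp_2", "amount": 75.0,  "_fivetran_synced": "2026-05-01 11:00:00"},
]

deduped = replacing_final(rows)
print(sorted(r["amount"] for r in deduped))  # [75.0, 250.0]
```

This is also why Fivetran's upserts map cleanly onto `ReplacingMergeTree(_fivetran_synced)`: re-syncing a row simply produces a newer version that wins at merge or `FINAL` time.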

**When this fits:**

- You want a **fully managed, zero-code ELT pipeline** into ClickHouse® with automatic schema sync
- Your source connectors are already in Fivetran (Salesforce, HubSpot, Postgres, Stripe, etc.)
- You need Fivetran's **data quality guarantees** — exactly-once delivery, CDC tracking, and soft-delete handling

**Trade-offs:** ClickHouse® is a destination in Fivetran's catalog but with some limitations compared to Fivetran's primary targets (BigQuery, Snowflake, Redshift). Schema management for complex ClickHouse® types (nested arrays, `AggregateFunction`) is not supported. Fivetran uses `ReplacingMergeTree` for all tables — you cannot customise the engine per table from the Fivetran UI. Heavy use of `FINAL` on large tables can be slow; schedule `OPTIMIZE TABLE` runs to reduce the background merge lag.

**Prerequisites:** Fivetran account with destination access, ClickHouse® instance reachable from Fivetran's IP ranges on port 8123 or 8443, dedicated ClickHouse® user with the permissions above.

### **Option 2: Fivetran → ClickHouse® — custom webhook destination**

For teams with **private ClickHouse® infrastructure** or who need custom loading logic (custom engines, sort keys, partitioning), Fivetran's **custom webhook destination** sends sync events to an HTTP endpoint you control. Your endpoint receives the event payload and writes to ClickHouse® using your own logic.

**How it works:** configure a Fivetran **custom destination** pointing at your webhook URL. For each sync batch, Fivetran sends a POST request with the operation type (`UPSERT`, `DELETE`, `TRUNCATE`) and the row data as JSON. Your endpoint deserialises the payload and writes to ClickHouse® using `clickhouse-connect` or the ClickHouse® HTTP interface.

**Fivetran custom destination configuration:**

```text
Destination type:   Custom Webhook
Endpoint URL:       https://your-loader.example.com/fivetran
Secret header:      X-Fivetran-Secret: <token>
```

**Python webhook receiver (FastAPI):**

```python
from fastapi import FastAPI, Header, HTTPException, Request
import clickhouse_connect
import os

app = FastAPI()

CH_HOST     = os.environ["CLICKHOUSE_HOST"]
CH_PASSWORD = os.environ["CLICKHOUSE_PASSWORD"]
FIVETRAN_SECRET = os.environ["FIVETRAN_WEBHOOK_SECRET"]

def get_client():
    return clickhouse_connect.get_client(
        host=CH_HOST, port=8443, secure=True,
        username="fivetran_writer", password=CH_PASSWORD
    )

@app.post("/fivetran")
async def receive_fivetran_event(
    request: Request,
    x_fivetran_secret: str = Header(None)
):
    if x_fivetran_secret != FIVETRAN_SECRET:
        raise HTTPException(status_code=401, detail="Unauthorized")

    payload = await request.json()
    schema  = payload["schema"]
    table   = payload["table"]
    op      = payload["op"]           # UPSERT | DELETE | TRUNCATE
    rows    = payload.get("rows", [])

    client = get_client()

    if op == "UPSERT" and rows:
        columns = list(rows[0].keys())
        data    = [[row[c] for c in columns] for row in rows]
        client.insert(f"{schema}.{table}", data, column_names=columns)

    elif op == "DELETE" and rows:
        ids = [row["id"] for row in rows]
        client.command(
            f"ALTER TABLE {schema}.{table} DELETE WHERE id IN %(ids)s",
            parameters={"ids": ids}
        )

    elif op == "TRUNCATE":
        client.command(f"TRUNCATE TABLE {schema}.{table}")

    return {"status": "ok", "rows_processed": len(rows)}
```
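The `UPSERT` branch above reshapes row-oriented JSON into the columnar lists that `clickhouse_connect`'s `insert()` expects. Pulled out as a pure function (a small refactor of the snippet above, not part of any Fivetran spec), that logic is easy to unit-test and can tolerate ragged batches:

```python
# Pure helper mirroring the UPSERT branch of the webhook receiver: reshape
# row-oriented JSON into the (column_names, data) pair clickhouse-connect's
# insert() expects. Missing keys become None instead of raising KeyError.

def rows_to_columnar(rows):
    if not rows:
        return [], []
    columns = list(rows[0].keys())
    data = [[row.get(c) for c in columns] for row in rows]
    return columns, data

columns, data = rows_to_columnar([
    {"id": "opp_1", "amount": 100.0},
    {"id": "opp_2", "amount": 250.0},
])
print(columns)  # ['id', 'amount']
print(data)     # [['opp_1', 100.0], ['opp_2', 250.0]]
```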

**Custom table DDL (full control over engine and sort key):**

```sql
-- You control the engine and sort key — not Fivetran's default
CREATE TABLE fivetran_dest.salesforce_opportunities (
    id              String,
    name            String,
    amount          Float64,
    close_date      Date,
    stage           LowCardinality(String),
    account_id      String,
    created_date    DateTime,
    _fivetran_synced DateTime,
    _fivetran_deleted Boolean
)
ENGINE = ReplacingMergeTree(_fivetran_synced)
PARTITION BY toYYYYMM(close_date)
ORDER BY (account_id, stage, id);
```

**When this fits:**

- Your ClickHouse® is on a **private network** and cannot be reached directly by Fivetran
- You need **custom ClickHouse® table design** — specific engines, sort keys, partitioning, or pre-aggregation on load
- You want to **transform data on ingest** (type casting, field renaming, enrichment) before it lands in ClickHouse®

**Trade-offs:** requires deploying and operating a webhook receiver service. You own the reliability — retries, dead-letter handling, and monitoring. More engineering investment than Option 1. Best for teams with specific ClickHouse® schema requirements that Fivetran's native destination cannot satisfy.
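Owning the reliability usually starts with a bounded retry around the ClickHouse® write. A minimal sketch, assuming the insert callable raises on transient failures; with `ReplacingMergeTree`, re-running the same batch is safe because duplicate rows deduplicate on the version column:

```python
import time

# Minimal retry sketch for the webhook receiver's ClickHouse writes.
# Because ReplacingMergeTree deduplicates on the version column,
# re-inserting the same batch after a transient failure is idempotent.

def insert_with_retry(insert_fn, batch, max_attempts=5, base_delay=0.5):
    for attempt in range(1, max_attempts + 1):
        try:
            return insert_fn(batch)
        except Exception:
            if attempt == max_attempts:
                raise  # hand the batch to a dead-letter queue / alerting here
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.5s, 1s, 2s, ...
```

For example, wrapping a flaky insert that succeeds on the second attempt:

```python
state = {"calls": 0}

def flaky_insert(batch):
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("transient network error")
    return len(batch)

print(insert_with_retry(flaky_insert, [1, 2, 3], base_delay=0.01))  # 3
```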

**Prerequisites:** Fivetran account with custom destination support (Business Critical or Enterprise plan), HTTPS endpoint reachable by Fivetran, `clickhouse-connect` or ClickHouse® HTTP interface access.

### **Option 3: Fivetran → Tinybird — via S3/GCS intermediate storage**

For teams that need **both Fivetran's managed replication and Tinybird's REST API serving**, use Fivetran to land data in object storage (S3 or GCS) and Tinybird's **S3 connector** or **BigQuery connector** to ingest from there into Tinybird's ClickHouse®-backed platform.

**How it works:** configure Fivetran to use S3 or GCS as its destination (file-based landing). Tinybird's data sources can pull from S3 using a `tinybird datasource` definition with a `TYPE s3` connector. You define the ingestion schedule and Tinybird handles loading, deduplication, and serving via REST API endpoints.

**Fivetran destination — S3 configuration:**

```text
Destination type:   Amazon S3
S3 bucket:          fivetran-landing-bucket
S3 path prefix:     fivetran/
File format:        Parquet
Compression:        SNAPPY
```

**Tinybird datasource pulling from S3 (`fivetran_opportunities.datasource`):**

```sql
SCHEMA >
    id              String,
    name            String,
    amount          Float64,
    close_date      Date,
    stage           LowCardinality(String),
    account_id      String,
    created_date    DateTime,
    _fivetran_synced DateTime,
    _fivetran_deleted UInt8

ENGINE "ReplacingMergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(close_date)"
ENGINE_SORTING_KEY "account_id, stage, id"

[s3_connector]
bucket = fivetran-landing-bucket
prefix = fivetran/salesforce/opportunities/
format = Parquet
schedule = @hourly
```

**Tinybird pipe serving loaded data as a REST API endpoint:**

```sql
NODE opportunities_query
SQL >
    SELECT
        stage,
        count()         AS total_opps,
        sum(amount)     AS total_pipeline,
        avg(amount)     AS avg_deal_size
    FROM fivetran_opportunities
    WHERE _fivetran_deleted = 0
      AND close_date >= {{ Date(start_date, '2026-01-01') }}
    GROUP BY stage
    ORDER BY total_pipeline DESC

TYPE endpoint
```

**Calling the published endpoint:**

```bash
curl -s \
  -H "Authorization: Bearer YOUR_TINYBIRD_TOKEN" \
  "https://api.tinybird.co/v0/pipes/opportunities_query.json?start_date=2026-01-01"
```

**When this fits:**

- You want Fivetran's managed replication **plus** Tinybird's REST API serving from a single pipeline
- Your ClickHouse® needs are managed (no self-hosting) and you want **both ELT and API serving** in managed services
- You're already using Fivetran with a cloud data warehouse and want to add a **low-latency API layer** without migrating

**Trade-offs:** the S3 intermediate adds latency (Fivetran sync cadence + Tinybird S3 pull schedule). Not suitable for sub-minute freshness requirements. For very high-frequency data, use Fivetran's direct destinations and separately push to Tinybird via the Events API.
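The worst-case staleness of the S3 path is roughly the sum of the two schedules. A back-of-envelope helper, assuming both stages run on fixed intervals and allowing a small buffer for file landing and load time (the numbers are illustrative, not measured):

```python
# Back-of-envelope worst-case staleness for Option 3: a row written just
# after a Fivetran sync waits up to a full sync interval, then up to a full
# Tinybird pull interval, plus a small processing buffer.

def worst_case_staleness_min(fivetran_sync_min, tinybird_pull_min,
                             processing_min=2):
    return fivetran_sync_min + tinybird_pull_min + processing_min

# 15-minute Fivetran syncs feeding hourly Tinybird S3 pulls
print(worst_case_staleness_min(15, 60))  # 77 minutes in the worst case
```

If that number exceeds your freshness budget, Option 1 or the Events API hot path described below is the better fit.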

**Prerequisites:** Fivetran account with S3/GCS destination support, S3 or GCS bucket, Tinybird account with S3 connector access.

### **Summary: picking the right option**

| Criterion | Native ClickHouse® destination | Custom webhook | Tinybird via S3 |
| --- | --- | --- | --- |
| **Setup complexity** | Low (Fivetran UI) | High (webhook service) | Medium (S3 + Tinybird config) |
| **Schema auto-sync** | Yes | Manual | Manual |
| **Private ClickHouse®** | No | Yes | Yes (Tinybird-managed) |
| **Custom engine / sort key** | No | Yes | Yes |
| **REST API serving** | No | No | Yes (TYPE endpoint) |
| **Real-time freshness** | Near-real-time (Fivetran cadence) | Near-real-time | Hourly (S3 pull) |
| **Ops burden** | Low | High | Low–Medium |

## **Decision framework: what to choose for clickhouse integration fivetran**

Pick based on your **network access**, **schema requirements**, and **downstream consumers**:

- **Native ClickHouse® destination** if your ClickHouse® is publicly accessible and you want a zero-code managed pipeline. Best for teams already using Fivetran who want to add ClickHouse® as a fast analytical destination without custom engineering.
- **Custom webhook** if your ClickHouse® is on a private network, or if you need custom engines, sort keys, or transformation logic on load. Best for teams with specific ClickHouse® schema requirements that Fivetran's native destination cannot satisfy.
- **Tinybird via S3** if you want both Fivetran's managed replication and REST API serving from the loaded data, without managing ClickHouse® infrastructure. Best for teams building [user-facing analytics](https://www.tinybird.co/blog/user-facing-analytics) on top of CRM or SaaS data loaded by Fivetran.

**Bottom line:** for simple managed pipelines into a public ClickHouse® instance, Option 1 is the fastest path. For private networks or custom schema control, Option 2 gives full flexibility at an operational cost. For teams that need API access to Fivetran-loaded data, Option 3 eliminates a separate serving layer.

## **What does clickhouse integration fivetran mean (and when should you care)?**

A **clickhouse integration fivetran** setup uses Fivetran's managed ELT platform to replicate data from operational sources (CRMs, databases, SaaS tools) into ClickHouse® as the analytical destination. Fivetran handles the extraction and loading; ClickHouse® handles the fast analytical queries downstream.

You should care when you need **analytical queries on operational data** — CRM pipeline analysis, SaaS product usage, financial reporting — without building and maintaining custom extraction code. Fivetran automates source connectors (Salesforce, HubSpot, Stripe, MySQL, Postgres, and hundreds more), leaving your team to focus on the analytical layer.

The integration also matters when your operational data currently lands in a cloud data warehouse (BigQuery, Snowflake) and query performance or cost is a constraint. Redirecting Fivetran to ClickHouse® gives you the same managed replication with ClickHouse®'s [low latency](https://www.tinybird.co/blog/real-time-analytics-a-definitive-guide) query engine.

## **Schema and pipeline design**

### **Practical schema rules for Fivetran-loaded ClickHouse® tables**

Fivetran's native destination manages schema creation, but for the webhook and S3 paths you own the DDL. ClickHouse® schema design choices made at table creation significantly affect query performance — especially on Fivetran-loaded operational data with many small rows.

**Rule 1: choose `ORDER BY` columns that match analytical query patterns.** Fivetran loads operational data (CRM records, transactions) — common queries filter by account, date range, or status. Include those columns in the `ORDER BY`.

**Rule 2: use `ReplacingMergeTree(updated_at)` for sources with updates.** CRM and transactional data changes frequently. `ReplacingMergeTree` deduplicates on the latest `updated_at`, matching Fivetran's upsert semantics.

**Rule 3: partition by the most common date filter.** For CRM data, `toYYYYMM(close_date)` or `toYYYYMM(created_date)` partitioning ensures date-range filters scan only relevant partitions.

**Rule 4: filter `_fivetran_deleted = false` in all queries.** Fivetran uses soft deletes. Always include this filter or build it into a ClickHouse® view or Tinybird Pipe to avoid surfacing deleted records to downstream consumers.

### **Example: analytics-friendly schema for Fivetran-loaded data**

```sql
CREATE TABLE fivetran_dest.salesforce_opportunities (
    id              String,
    name            String,
    amount          Float64,
    close_date      Date,
    stage           LowCardinality(String),
    account_id      String,
    owner_id        String,
    created_date    DateTime,
    last_modified   DateTime,
    _fivetran_synced DateTime,
    _fivetran_deleted Boolean
)
ENGINE = ReplacingMergeTree(_fivetran_synced)
PARTITION BY toYYYYMM(close_date)
ORDER BY (account_id, stage, id);
```

**Pipeline view (excluding deleted rows, collapsing duplicates):**

```sql
CREATE VIEW fivetran_dest.v_opportunities AS
SELECT
    id, name, amount, close_date, stage, account_id,
    owner_id, created_date, last_modified, _fivetran_synced
FROM fivetran_dest.salesforce_opportunities FINAL
WHERE _fivetran_deleted = false;
```

**Materialized view for fast dashboard aggregations:**

```sql
CREATE MATERIALIZED VIEW fivetran_dest.opportunities_by_stage_mv
ENGINE = AggregatingMergeTree()
PARTITION BY close_month
ORDER BY (stage, close_month)
AS SELECT
    stage,
    toYYYYMM(close_date)        AS close_month,
    countState()                AS total_opps_state,
    sumState(amount)            AS total_amount_state,
    uniqState(account_id)       AS unique_accounts_state
FROM fivetran_dest.salesforce_opportunities
WHERE _fivetran_deleted = false
GROUP BY stage, close_month;
```

Query it with the matching merge functions (`countMerge(total_opps_state)`, `sumMerge(total_amount_state)`, `uniqMerge(unique_accounts_state)`) to finalise the aggregate states. Note that a materialized view only sees newly inserted blocks, so `FINAL` has no effect inside its `SELECT`, and rows soft-deleted after insertion are not retracted from the aggregates.

### **Failure modes**

1. **`FINAL` performance on large tables.** `ReplacingMergeTree` with `FINAL` forces deduplication at read time. On large tables with many unmerged parts this is slow. Mitigation: schedule regular `OPTIMIZE TABLE … FINAL` runs (e.g., nightly via Airflow or cron) to force merges ahead of time and keep `FINAL` queries fast.

2. **Fivetran IP allowlisting for ClickHouse® Cloud.** Fivetran's connectors originate from a documented set of IP ranges. ClickHouse® Cloud's network access controls may block Fivetran. Mitigation: add Fivetran's IP ranges to the ClickHouse® Cloud allowlist in the Cloud console under **Security → Allowed IP addresses**.

3. **Schema drift breaking downstream models.** When a Fivetran source adds a column, the native destination auto-adds it in ClickHouse®. Downstream dbt models, BI tool connections, or Tinybird Pipes that use `SELECT *` may surface unexpected columns. Mitigation: use explicit column lists in all downstream SQL; monitor Fivetran sync logs for schema change events.

4. **Soft-delete rows surfacing in queries.** Forgetting `WHERE _fivetran_deleted = false` returns deleted records. Mitigation: create a ClickHouse® view or Tinybird Pipe with the filter built in as the standard access layer; never query Fivetran-loaded tables directly from BI tools.

5. **Webhook receiver downtime causing sync failures.** For Option 2, if the webhook service is unavailable, Fivetran retries but may eventually mark the destination as failed. Mitigation: deploy the webhook receiver with high-availability (multiple replicas, health checks), implement idempotent processing (duplicate row inserts are safe with `ReplacingMergeTree`), and set up Fivetran webhook failure alerts.

## **Why ClickHouse® for Fivetran analytics**

ClickHouse® is a **columnar OLAP database** optimised for the aggregation-heavy analytical patterns that Fivetran-loaded operational data generates. Fivetran replicates row-oriented operational data (CRM records, transactions, events) — ClickHouse®'s columnar storage and vectorized execution turn that data into sub-second analytical queries.

For Fivetran users, this means CRM pipeline reports, SaaS usage dashboards, and financial aggregations that previously ran in seconds on BigQuery or Snowflake now respond interactively. ClickHouse®'s `ReplacingMergeTree` matches Fivetran's upsert semantics natively, and the `_fivetran_deleted` flag maps directly to ClickHouse®'s soft-delete pattern.

For teams with large Fivetran sync volumes across many sources, ClickHouse®'s **columnar compression** significantly reduces storage cost compared to cloud data warehouses, while maintaining or improving query performance. Combined with Tinybird's API layer, Fivetran-loaded data can serve both analytical queries and [user-facing analytics](https://www.tinybird.co/blog/user-facing-analytics) REST endpoints from the same ClickHouse® backend.

## **Security and operational monitoring**

- **Authentication:** dedicated Fivetran ClickHouse® user with minimum necessary grants (`CREATE TABLE, DROP TABLE, ALTER, INSERT, SELECT` on target database only). Never use the `default` admin user.
- **TLS:** enforce HTTPS on all connections. Port 8443 for Cloud; `https_port` for self-managed. Enable SSL verification.
- **IP allowlisting:** add Fivetran's documented IP ranges to ClickHouse® Cloud network access controls. Rotate IP allowlist when Fivetran announces IP range changes.
- **Webhook secret rotation:** for Option 2, rotate the `X-Fivetran-Secret` header token on schedule; store it in a secrets manager (AWS SSM, HashiCorp Vault), not in environment variables committed to source control.
- **Token hygiene:** scoped Tinybird API tokens per pipeline; rotate on schedule; restrict to `READ`-only access for query endpoints.

## **Latency, caching, and freshness considerations**

**Native destination sync cadence** determines data freshness — Fivetran syncs on a schedule (5 minutes to 24 hours depending on plan and source). ClickHouse® receives new rows as each sync completes. Background merges in `ReplacingMergeTree` may create a short deduplication lag; use `FINAL` for consistent results.

**Tinybird Pipe caching** (Option 3) provides configurable TTL freshness for REST API endpoints. Consumers querying Tinybird endpoints between S3 pull cycles receive cached responses without a ClickHouse® query. Best for dashboards where the Fivetran sync cadence (hourly) matches acceptable data freshness.

For **sub-minute freshness**, bypass Fivetran for the hot path — use Fivetran for historical and batch data, and Tinybird's Events API for real-time event streams. Combine both in the same ClickHouse®-backed Tinybird datasource for a unified query layer.

## **Why Tinybird is a strong fit for clickhouse integration fivetran**

Most teams using Fivetran to load operational data into a warehouse eventually need the same data in product features — a **customer-facing usage dashboard**, an **API for a SaaS product**, or an **embedded analytics widget**. Managing a separate API layer means duplicated query logic and more infrastructure to operate.

Tinybird solves this by combining a **ClickHouse®-powered analytics platform**, **SQL-based Pipes**, and **instant REST API publishing** in one managed service. Fivetran loads the data, Tinybird serves it via REST endpoints, and your product consumes those endpoints directly. No separate API service to deploy, no additional ClickHouse® cluster to manage. This is the [real-time data ingestion](https://www.tinybird.co/blog/real-time-data-ingestion) and serving pattern for teams that want to focus on the analytical product, not the infrastructure stack.

Next step: identify the Fivetran-loaded table your team queries most frequently, model it as a Tinybird Pipe with the `_fivetran_deleted` filter and appropriate aggregations, and validate API response time versus your current BI tool query time.

## **Frequently Asked Questions (FAQs)**

### **Does Fivetran support ClickHouse® as a destination?**

Yes. Fivetran has a **native ClickHouse® destination connector** that writes to ClickHouse® via the HTTP interface. It handles table creation, schema sync, and incremental loading using `ReplacingMergeTree`. Check Fivetran's destination catalog for the current feature support matrix and supported ClickHouse® versions.

### **How does Fivetran handle updates and deletes in ClickHouse®?**

Fivetran uses **soft deletes** — a `_fivetran_deleted` boolean column marks deleted rows rather than removing them. For sources with CDC (change data capture), Fivetran upserts rows using the `_fivetran_synced` timestamp as the `ReplacingMergeTree` version column. Use `SELECT … FINAL WHERE _fivetran_deleted = false` for deduplicated, non-deleted results.

### **How do I allow Fivetran to connect to my ClickHouse® instance?**

Add Fivetran's documented IP ranges to your ClickHouse® network allowlist. For ClickHouse® Cloud, add IPs in the Cloud console under **Security → Allowed IP addresses**. For self-managed ClickHouse®, update your firewall or security group rules to allow inbound TCP on port 8123 (HTTP) or 8443 (HTTPS) from Fivetran's IP ranges.

### **Can I use Fivetran to load into a private ClickHouse® instance?**

Not with the native destination connector (Fivetran requires a publicly reachable endpoint). Use the **custom webhook destination** (Option 2) — your webhook receiver runs inside your private network and pushes data to ClickHouse®. Alternatively, use Fivetran's **S3/GCS destination** and pull data into Tinybird from object storage (Option 3).

### **How do I handle the _fivetran_deleted flag in downstream queries?**

Always filter `WHERE _fivetran_deleted = false` in any query that should return live records. The safest approach is to create a ClickHouse® view or Tinybird Pipe with this filter built in, and direct all downstream consumers (BI tools, APIs, dbt models) to the view rather than the raw Fivetran table.

### **What are the main limitations of clickhouse integration fivetran?**

The native destination does not support custom ClickHouse® engines, sort keys, or partitioning — all tables use Fivetran's default `ReplacingMergeTree` with `ORDER BY` on the primary key. Complex ClickHouse® types (nested arrays, `AggregateFunction`) are not supported. Heavy use of `SELECT … FINAL` on large, unmerged tables can be slow without regular `OPTIMIZE TABLE` runs. The custom webhook option adds significant operational overhead.
