---
title: Include files
meta:
  description: 'Include files help you organize settings so that you can reuse them across .datasource and .pipe files.'
---

# Include files (.incl)

Include files (.incl) help you separate settings and node templates from your datafiles so you can reuse them across multiple .datasource files or .pipe templates.

Include files are referenced using the `INCLUDE` instruction.

## Connector settings

Use .incl files to separate connector settings from .datasource files. 

For example, the following .incl file contains Kafka Connector settings:

```tb {% title="tinybird/datasources/connections/kafka_connection.incl" %}
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS my_server:9092
KAFKA_KEY my_username
KAFKA_SECRET my_password
```

The .datasource file then contains only a reference to the .incl file using `INCLUDE`:

```tb {% title="tinybird/datasources/kafka_ds.datasource" %}
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

INCLUDE "connections/kafka_connection.incl"

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id
```
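When the CLI processes the datafile, it expands the `INCLUDE` line in place, so the .datasource file above is equivalent to this fully inlined version:

```tb
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS my_server:9092
KAFKA_KEY my_username
KAFKA_SECRET my_password

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id
```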

## Pipe nodes

You can use .incl datafiles to [reuse node templates](/classic/cli/advanced-templates#reusing-templates).

For example, the following .incl file contains a node template:

```tb {% title="tinybird/includes/only_buy_events.incl" %}
NODE only_buy_events
SQL >
    SELECT
        toDate(timestamp) AS date,
        product,
        color,
        JSONExtractFloat(json, 'price') AS price
    FROM events
    WHERE action = 'buy'
```

The .pipe file starts with the `INCLUDE` reference to the template:

```tb {% title="tinybird/endpoints/sales.pipe" %}
INCLUDE "../includes/only_buy_events.incl"

NODE endpoint
DESCRIPTION >
    Returns total sales per day, filtered by color
SQL >
    %
    SELECT date, sum(price) AS total_sales
    FROM only_buy_events
    WHERE color IN {{Array(colors, 'black')}}
    GROUP BY date
```

A different .pipe file can reuse the same template:

```tb {% title="tinybird/pipes/top_per_day.pipe" %}
INCLUDE "../includes/only_buy_events.incl"

NODE top_per_day
SQL >
    SELECT
        date,
        topKState(10)(product) AS top_10,
        sumState(price) AS total_sales
    FROM only_buy_events
    GROUP BY date

TYPE MATERIALIZED
DATASOURCE mv_top_per_day
```
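Because the materialized node writes aggregate states, the target Data Source needs matching `AggregateFunction` columns. A minimal sketch of `mv_top_per_day`, assuming `product` is a `String` and `price` a `Float64` (adjust the types to your schema):

```tb {% title="tinybird/datasources/mv_top_per_day.datasource" %}
SCHEMA >
    `date` Date,
    `top_10` AggregateFunction(topK(10), String),
    `total_sales` AggregateFunction(sum, Float64)

ENGINE "AggregatingMergeTree"
ENGINE_SORTING_KEY "date"
```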

## Include with variables

You can templatize .incl files. For instance, you can reuse the same .incl template with different variable values:

```tb {% title="tinybird/includes/top_products.incl" %}
NODE endpoint

DESCRIPTION >
    Returns the top 10 products for the selected date range

SQL >
    %
    SELECT
        date,
        topKMerge(10)(top_10) AS top_10
    FROM top_product_per_day

    {% if '$DATE_FILTER' == 'last_week' %}
        WHERE date > today() - INTERVAL 7 DAY
    {% else %}
        WHERE date BETWEEN {{Date(start)}} AND {{Date(end)}}
    {% end %}

    GROUP BY date
```

`$DATE_FILTER` acts as a variable in the .incl file. The following examples show how to create two separate endpoints by injecting different values for `DATE_FILTER`.

The following .pipe file references the template using a `last_week` value for `DATE_FILTER`:

```tb {% title="tinybird/endpoints/top_products_last_week.pipe" %}
INCLUDE "../includes/top_products.incl" "DATE_FILTER=last_week"
```

Whereas the following .pipe file references the template using a `between_dates` value for `DATE_FILTER`:

```tb {% title="tinybird/endpoints/top_products_between_dates.pipe" %}
INCLUDE "../includes/top_products.incl" "DATE_FILTER=between_dates"
```
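Conceptually, the variable injection works like plain text substitution: every `$DATE_FILTER` occurrence in the .incl body is replaced with the value supplied on the `INCLUDE` line before the template is processed. A minimal sketch of that idea in Python (an illustration, not the actual CLI implementation):

```python
# Sketch of INCLUDE variable injection: each "$NAME" in the .incl body is
# replaced with the value given on the INCLUDE line, e.g. "DATE_FILTER=last_week".
def expand_include(incl_text: str, variables: dict) -> str:
    for name, value in variables.items():
        incl_text = incl_text.replace("$" + name, value)
    return incl_text

template = (
    "{% if '$DATE_FILTER' == 'last_week' %}"
    " where date > today() - interval 7 day "
    "{% end %}"
)

expanded = expand_include(template, {"DATE_FILTER": "last_week"})
# The condition now compares 'last_week' == 'last_week', so the template
# engine renders the last-week filter when the pipe is processed.
print(expanded)
```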

## Include with environment variables

Because the Tinybird CLI expands `INCLUDE` files when it processes your datafiles, you can use environment variables inside them.

For example, if you have configured the `KAFKA_BOOTSTRAP_SERVERS`, `KAFKA_KEY`, and `KAFKA_SECRET` environment variables, you can create an .incl file as follows:

```tb {% title="tinybird/datasources/connections/kafka_connection.incl" %}
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS ${KAFKA_BOOTSTRAP_SERVERS}
KAFKA_KEY ${KAFKA_KEY}
KAFKA_SECRET ${KAFKA_SECRET}
```

You can then use the values in your .datasource datafiles:

```tb {% title="tinybird/datasources/kafka_ds.datasource" %}
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

INCLUDE "connections/kafka_connection.incl"

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id
```

Alternatively, you can create a separate .incl file per environment:

```tb {% title="tinybird/datasources/connections/kafka_connection_prod.incl" %}
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS production_servers
KAFKA_KEY the_kafka_key
KAFKA_SECRET ${KAFKA_SECRET}
```

```tb {% title="tinybird/datasources/connections/kafka_connection_stg.incl" %}
KAFKA_CONNECTION_NAME my_connection_name
KAFKA_BOOTSTRAP_SERVERS staging_servers
KAFKA_KEY the_kafka_key
KAFKA_SECRET ${KAFKA_SECRET}
```

And then include both depending on the environment:

```tb {% title="tinybird/datasources/kafka_ds.datasource" %}
SCHEMA >
    `value` String,
    `topic` LowCardinality(String),
    `partition` Int16,
    `offset` Int64,
    `timestamp` DateTime,
    `key` String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp"

INCLUDE "connections/kafka_connection_${TB_ENV}.incl"

KAFKA_TOPIC my_topic
KAFKA_GROUP_ID my_group_id
```

Where `$TB_ENV` is one of `stg` or `prod`.
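The `${TB_ENV}` expansion follows standard shell-style variable syntax. You can mimic it in Python with `os.path.expandvars` (a sketch of the idea, not the CLI's code):

```python
import os

# Mimic the CLI resolving ${TB_ENV} inside the INCLUDE path.
os.environ["TB_ENV"] = "prod"
include_line = 'INCLUDE "connections/kafka_connection_${TB_ENV}.incl"'
resolved = os.path.expandvars(include_line)
print(resolved)  # INCLUDE "connections/kafka_connection_prod.incl"
```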

See [deploy to staging and production environments](/classic/work-with-data/organize-your-work/staging-and-production-workspaces) to learn how to leverage environment variables. 