---
title: Tinybird Local container
meta:
   description: Learn how to run Tinybird locally using the local container.
---

# Tinybird Local container

You can run your own Tinybird instance locally using the `tinybird-local` container. This is useful for testing and development. For example, you can test Data Sources and Pipes in your data project before deploying them to production.

{% callout type="info" %}
Tinybird Local doesn't include the following features:

- Tinybird UI
- Connectors (except Kafka)
- Scheduled operations
- Batch operations
{% /callout %}

## Prerequisites

To get started, you need a container runtime, like [Orbstack](https://orbstack.dev/) or [Docker Desktop](https://www.docker.com/products/docker-desktop/).

## Run Tinybird Classic locally

To run Tinybird Classic locally, run the following command:

```bash
docker run --platform linux/amd64 -p 7181:7181 --name tinybird-classic-local -e COMPATIBILITY_MODE=1 -d tinybirdco/tinybird-local:latest
```

By default, Tinybird Local listens on port 7181, although you can map it to any other host port.  
The `COMPATIBILITY_MODE=1` environment variable tells the image to run Tinybird Classic.
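For example, to expose the API on host port 8080 instead (an arbitrary choice for illustration), change the left-hand side of the port mapping and point the CLI at that port:

```shell
# Map host port 8080 to the container's internal port 7181
docker run --platform linux/amd64 -p 8080:7181 --name tinybird-classic-local \
  -e COMPATIBILITY_MODE=1 -d tinybirdco/tinybird-local:latest

# Authenticate against the remapped port
TOKEN=$(curl -s http://localhost:8080/tokens | jq -r ".workspace_admin_token")
tb auth --host http://localhost:8080 --token $TOKEN
```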

## Local authentication

To authenticate with Tinybird Local, run the following command to retrieve the Workspace admin token and pass it to the CLI:

```bash
TOKEN=$(curl -s http://localhost:7181/tokens | jq -r ".workspace_admin_token")
tb auth --host http://localhost:7181 --token $TOKEN
```

After you've authenticated, you can list the default Workspace with the `tb workspace ls` CLI command. For example:

```bash
tb workspace ls

** Workspaces:
--------------------------------------------------------------------------------------------
| name                   | id                                   | role  | plan   | current |
--------------------------------------------------------------------------------------------
| Tinybird_Local_Testing | 7afc6330-3aae-4df5-8712-eaad216c5d7d | admin | Custom | True    |
--------------------------------------------------------------------------------------------
```

## Development flow

Once the container is running, you can work with your Classic CLI as usual, running commands like `tb push` and `tb datasource append`, or call the API directly. For example:

```bash
curl \
-H "Authorization: Bearer $TOKEN" \
-d $'{"date": "2025-04-05", "city": "Chicago"}\n{"date": "2025-04-05", "city": "Madrid"}\n' \
'http://localhost:7181/v0/datasources?name=data_source_test&mode=create'
```
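After appending data, you can query it through the local SQL API. The following is a sketch that assumes the `$TOKEN` variable from the authentication step and the `data_source_test` data source created above:

```shell
# Query the data source through the SQL API; FORMAT JSON returns rows plus statistics
curl -s \
  -H "Authorization: Bearer $TOKEN" \
  "http://localhost:7181/v0/sql?q=SELECT%20count()%20FROM%20data_source_test%20FORMAT%20JSON"
```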

## Kafka data sources

The Tinybird Local container supports Kafka connections and data sources.

You can create connections with the CLI:

```bash
tb connection create kafka \
    --connection-name <CONN_NAME> \
    --bootstrap-servers <SERVER:PORT> \
    --security-protocol <PROTOCOL> \
    --key <KEY> \
    --secret <SECRET>
```

Then push the resources in your project that depend on the connection as usual.
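For example, a data source that consumes from the connection might look like the following sketch, where the connection name, topic, and group ID are placeholders for your own values:

```
SCHEMA >
    `data` String `json:$`

ENGINE "MergeTree"
ENGINE_SORTING_KEY "tuple()"

KAFKA_CONNECTION_NAME <CONN_NAME>
KAFKA_TOPIC <TOPIC>
KAFKA_GROUP_ID <GROUP_ID>
```

Push it with `tb push` and Tinybird Local starts consuming from the topic.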

## Connect to a local Kafka with Docker Compose

If you're running Kafka in Docker, the recommended approach is to use Docker Compose. This allows you to define both services and a shared network to ensure connectivity between Tinybird and Kafka.

Follow these steps to connect Tinybird Local with a local Kafka instance:

{% steps %}

### Set up Kafka and Tinybird Local with Docker Compose

Create a `docker-compose.yml` file with the following content:

```yaml
networks:
  kafka_network:
    driver: bridge

volumes:
  kafka-data:

services:
  kafka:
    image: apache/kafka:latest
    hostname: broker
    container_name: broker
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_CONTROLLER_QUORUM_VOTERS: "1@broker:29093"
      KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - kafka-data:/var/lib/kafka/data
    networks:
      - kafka_network

  tinybird-local-classic:
    image: tinybirdco/tinybird-local:latest
    container_name: tinybird-local-classic
    platform: linux/amd64
    ports:
      - "7181:7181"
    networks:
      - kafka_network
    environment:
      - COMPATIBILITY_MODE=1

  tinybird-cli:
    image: tinybirdco/tinybird-cli-docker:latest
    container_name: tinybird-cli
    networks:
      - kafka_network
    environment:
      - TINYBIRD_HOST=http://tinybird-local-classic:7181
    volumes:
      - ./tinybird:/mnt/data
    working_dir: /mnt/data
    stdin_open: true
    tty: true
```

This `docker-compose.yml` file defines three services (containers):

1. **Kafka**: This service runs an Apache Kafka instance, which is a distributed streaming platform. The configuration includes setting up a Kafka broker with a specific ID, roles, and listeners for communication. It also mounts a volume for data persistence.
2. **Tinybird Local Classic**: This service runs the Tinybird Local Classic image, which is a self-contained environment for developing and testing data pipelines. It exposes port 7181 for API access and sets an environment variable for compatibility mode.
3. **Tinybird CLI**: This service runs the Tinybird Classic CLI Docker image, which provides a command-line interface for interacting with Tinybird. It sets an environment variable for the Tinybird host and mounts a volume for data access. This volume allows the CLI to access the datafiles inside your ./tinybird folder.

These three services are connected through a bridge network named `kafka_network`, allowing them to communicate with each other.
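With the file in place, start the stack from the same directory:

```shell
# Start the broker, tinybird-local-classic, and tinybird-cli services in the background
docker compose up -d

# Verify that all three containers are running
docker compose ps
```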

### Create the resources from Docker containers

The following example creates a topic and a Kafka connection, pushes a data source, and reads events from the topic:

```bash
## Create topic
» docker exec -it broker /opt/kafka/bin/kafka-topics.sh --create --topic sample-topic --bootstrap-server localhost:9092

Created topic sample-topic.

## Auth
» TOKEN=$(curl -s http://localhost:7181/tokens | jq -r ".workspace_admin_token")
» docker exec -it tinybird-cli tb auth --host http://tinybird-local-classic:7181 --token $TOKEN

** Auth successful! 
** Configuration written to .tinyb file, consider adding it to .gitignore
** Remember to use http://tinybird-local-classic:7181 in all your API calls.

## Create Kafka Connection
» docker exec -it tinybird-cli tb connection create kafka \
  --connection-name kafka_local \
  --bootstrap-servers kafka:29092 \
  --security-protocol PLAINTEXT \
  --key key \
  --secret secret

** Connection 4f48741a-07e5-4f16-ba15-da375c21aca1 created successfully!

## Check data source
» cat tinybird/datasources/kafka_ds.datasource 

SCHEMA >
    `data` String `json:$`

ENGINE "MergeTree"
ENGINE_PARTITION_KEY ""
ENGINE_SORTING_KEY "tuple()"

KAFKA_CONNECTION_NAME kafka_local
KAFKA_TOPIC sample-topic
KAFKA_GROUP_ID my_group_id

## Push data source
» docker exec -it tinybird-cli tb push datasources/kafka_ds.datasource
** Processing datasources/kafka_ds.datasource
** Using connection 'kafka_local'
** Building dependencies
** Running 'kafka_ds' 
** 'kafka_ds' created
** Not pushing fixtures

## Send data to topic:
» echo '{"data": "test"}' | docker exec -i broker /opt/kafka/bin/kafka-console-producer.sh --topic sample-topic --bootstrap-server localhost:9092

# Query data source
» docker exec -it tinybird-cli tb sql "select * from kafka_ds"
----------------------------------------------------------------------------------------------------
| __value | __topic      | __partition | __offset | __timestamp         | __key | data             |
----------------------------------------------------------------------------------------------------
|         | sample-topic |           0 |        1 | 2025-06-20 11:01:33 |       | {"data": "test"} |
----------------------------------------------------------------------------------------------------
```

{% /steps %}

## Next steps

- Learn about datafiles and their format. See [Datafiles](/classic/cli/datafiles).
- Learn how advanced templates can help you. See [Advanced templates](/classic/cli/advanced-templates).
- Browse the full CLI reference. See [Command reference](/classic/cli/command-ref).
