
Apache Kafka

Turn your Kafka Data Streams into actionable API endpoints your teams can consume

Instead of building a new consumer every time you want to make sense of your Data Streams, write SQL queries and expose them as API endpoints. Easy to maintain. Always up-to-date. Fast as can be.
Get started for free
No credit card required
Easy integration
Connect to Kafka and start building APIs right away. Choose a topic, select the fields you are interested in, and ingest millions of rows per second.
SQL based
Transform or enrich your Kafka topics with JOINs using our serverless Data Pipes.
Automatic APIs
All your Data APIs in one place, automatically documented and scaled. Consistent results for your Data/Dev Teams.
Secure
Use Auth tokens to control access to API endpoints. Implement access policies as you need. Support for row-level security.

Turn Data Streams into answers in minutes with SQL.

Every new use case over your Kafka Data Streams is just one SQL query away. Store the raw data or materialize roll-ups in realtime at any scale. Enrich with SQL JOINs. We will worry about performance so you can focus on enabling your teams.
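As a sketch of the kind of enrichment and roll-up query this describes: the example below mimics the JOIN-plus-aggregate pattern with an in-memory SQLite database. The table names, columns, and data are invented for illustration; Tinybird Data Pipes run their own SQL dialect over your Kafka-backed Data Sources, not SQLite.

```python
import sqlite3

# Illustrative only: stand-ins for a raw Kafka-fed table ("sales") and a
# dimension table ("products"). All names and values here are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (product_id INTEGER, amount REAL);
    CREATE TABLE products (product_id INTEGER, name TEXT);
    INSERT INTO sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);
    INSERT INTO products VALUES (1, 'widget'), (2, 'gadget');
""")

# Enrich the raw rows with product names via a JOIN, then materialize
# a revenue roll-up per product -- the same shape of query you would
# write in a Data Pipe.
rows = conn.execute("""
    SELECT p.name, SUM(s.amount) AS revenue
    FROM sales s
    JOIN products p ON p.product_id = s.product_id
    GROUP BY p.name
    ORDER BY revenue DESC
""").fetchall()

print(rows)  # [('widget', 15.0), ('gadget', 7.5)]
```

The point is that each new use case is one such query: the JOIN does the enrichment, the GROUP BY does the roll-up, and the result is ready to publish.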
$ tb connection create kafka --bootstrap-servers pkc-a1234.europe-west2.gcp.confluent.cloud:9092 --key CK2AS3 --secret "19EfGz34t"
Connection name (optional, current: pkc-a1234.europe-west2.gcp.confluent.cloud:9092) [pkc-a1234.europe-west2.gcp.confluent.cloud:9092]:
** Connection 34250dcb-4e51-4d9b-9481-8db673c6a590 created successfully!

$ tb datasource connect 34250dcb-4e51-4d9b-9481-8db673c6a590 sales
We've discovered the following topics:
   sales_prod
   sales_staging
Kafka topic:
sales_prod
Kafka group: tb-prod
Kafka doesn't seem to have prior commits on this topic and group ID
Setting auto.offset.reset is required. Valid values:
 latest          Skip earlier messages and ingest only new messages
 earliest        Start ingestion from the first message
Kafka auto.offset.reset config: latest
Proceed? [y/N]:
y
** Data Source 't_07047b1547c64d5a882a97c2885f761e' created
** Kafka streaming connection configured successfully!
1. One topic, one data source
Tinybird consumes your topics in realtime into Data Sources that can be queried individually via SQL.
2. Enrich and Transform your Data Streams
As data comes in, you can enrich it with additional business-relevant data via our Data Pipes and prepare it for consumption.
3. Publish API endpoints
Share secure access to your data in a click and get full OpenAPI and Postman documentation for your APIs.
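Once published, an endpoint is just an HTTP URL any client can call. The sketch below shows how such a request URL is typically assembled, with the Auth token passed as a parameter; the endpoint name `sales_summary` and the `date_from` parameter are assumptions for illustration, not values from this page.

```python
from urllib.parse import urlencode

# Hypothetical endpoint name and parameters -- substitute your own.
# The Auth token gates access to the endpoint, as described above.
host = "https://api.tinybird.co"
endpoint = "sales_summary"
params = {"token": "<YOUR_AUTH_TOKEN>", "date_from": "2023-01-01"}

url = f"{host}/v0/pipes/{endpoint}.json?{urlencode(params)}"
print(url)
```

From here any HTTP client (curl, a browser, requests, fetch) can consume the endpoint, and the generated OpenAPI/Postman documentation describes the same URL and parameters for your teams.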

We accelerate your data, no matter where it is.

Connect data from Relational Databases, Data Warehouses and Data Streams.
Apache Kafka
Confluent
Google BigQuery
Snowflake
Amazon S3
GCS
Amazon Kinesis
Google Pub/Sub

Build fast data products, faster.

Try Tinybird: bring your data sources together and enable engineers to build with data in minutes. No credit card required, free to get started.
Need more? Contact sales for Enterprise support.