Enriching Kafka streams for real-time queries


If you are using Kafka to capture large quantities of events or transactional data, you are probably also looking for ways to enrich that data in real time.
Let’s say you are pushing e-commerce transactions that look like the following to a Kafka topic:
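The original sample payload isn’t reproduced here; a single line order, using Star Schema Benchmark-style field names (an assumption on our part), might look something like this:

```json
{
  "orderkey": 600001,
  "linenumber": 1,
  "custkey": 812384,
  "partkey": 124519,
  "suppkey": 4612,
  "orderdate": "1996-03-14",
  "quantity": 27,
  "extendedprice": 2490.48,
  "discount": 4,
  "revenue": 2390.86
}
```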
In order to find out things like:
- what your top-selling products are,
- what your total revenue per product category is,
- who your most successful supplier is, or
- where your customers come from,
you are going to want to enrich that {% code-line %}partkey{% code-line-end %} with information about the product, that {% code-line %}suppkey{% code-line-end %} with the Supplier’s information and that {% code-line %}custkey{% code-line-end %} with relevant information about the Customer.
There are two ways to go about this with Tinybird; for both of them, we are going to need those additional dimensions within the database. So let’s first generate a lot of Customers, Suppliers and Parts and ingest them.
We will use a modified version of the Star Schema Benchmark {% code-line %}dbgen{% code-line-end %} tool to generate fake data quickly and in large quantities. Like so:
Now we can just post these CSVs to Tinybird’s Datasources API and it will figure out the types and ingest the data automatically:
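For example, a minimal sketch in Python with the requests library (the file names, data source names and token placeholder are assumptions; the call follows the v0 Datasources API, so double-check the parameters against the current docs):

```python
# A minimal sketch: create a data source in Tinybird from each generated CSV.
# File names, data source names and the token placeholder are assumptions;
# the endpoint and parameters follow the v0 Datasources API.
import requests

TINYBIRD_API = "https://api.tinybird.co/v0/datasources"
TOKEN = "<your_auth_token>"

for name in ("customer", "supplier", "part"):
    with open(f"{name}.csv", "rb") as csv_file:
        response = requests.post(
            TINYBIRD_API,
            params={"name": name},
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"csv": csv_file},  # Tinybird infers column names and types
        )
    print(name, response.status_code)
```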
Just like that, we have our dimension tables as data sources in Tinybird:

Lastly, we’ll need the actual line orders for every purchase. We will create an empty data source for that, since we will be pushing those records via Kafka.
In order to push line orders to a Kafka topic, we will use a simple Kafka producer in Python that reads the individual line orders we generate via dbgen. Here is what the producer looks like:
The producer receives the file name with the line orders and the Kafka topic name ('orders') to which it will push all of them one by one.
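The full producer isn’t reproduced here, but a minimal sketch of the same idea, assuming the kafka-python library and a broker on localhost:9092, could look like this:

```python
# producer.py - a minimal sketch, assuming the kafka-python library
# and a Kafka broker listening on localhost:9092.
import sys

from kafka import KafkaProducer


def produce(filename, topic):
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    with open(filename) as lineorders:
        for line in lineorders:
            # each line is a single line order in CSV format
            producer.send(topic, value=line.strip().encode("utf-8"))
    producer.flush()


if __name__ == "__main__":
    # usage: python producer.py <lineorders file> orders
    produce(sys.argv[1], sys.argv[2])
```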
In the following snippet, we create a Python environment and start the producer (we assume you already have Kafka and ZooKeeper running):
This is now pumping line orders into Kafka, but nobody is consuming them yet. We will use a consumer (source code) that will read those orders in chunks of 20000 and send them to Tinybird, so that they get ingested directly into the “lineorders” datasource we created earlier.
When running the consumer, we specify what Kafka topic it needs to read from (again, 'orders') and what datasource in Tinybird it needs to populate, as well as the API endpoint. Like this:
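The full consumer is linked above; a minimal sketch of the same approach, assuming kafka-python and the v0 Datasources API in append mode (the exact parameters are assumptions to verify against the docs), could be:

```python
# consumer.py - a minimal sketch: read line orders from Kafka in chunks of
# 20,000 and append them to a Tinybird data source. kafka-python and the
# v0 Datasources API (mode=append) are assumed here.
import sys

import requests
from kafka import KafkaConsumer

CHUNK_SIZE = 20000


def send_chunk(rows, api_url, datasource, token):
    # Append a chunk of CSV rows to the target data source
    requests.post(
        f"{api_url}/v0/datasources",
        params={"mode": "append", "name": datasource},
        headers={"Authorization": f"Bearer {token}"},
        files={"csv": ("chunk.csv", "\n".join(rows))},
    )


def consume(topic, datasource, api_url, token):
    consumer = KafkaConsumer(topic, bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest")
    buffer = []
    for message in consumer:
        buffer.append(message.value.decode("utf-8"))
        if len(buffer) >= CHUNK_SIZE:
            send_chunk(buffer, api_url, datasource, token)
            buffer = []


if __name__ == "__main__":
    # usage: python consumer.py orders lineorders https://api.tinybird.co <token>
    consume(*sys.argv[1:5])
```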
The consumer starts reading those Kafka events at a rate of approximately 20K records per second and pushing them in chunks to Tinybird, and it will keep going as long as there are line orders to consume. Let’s look at how the data is shaping up via the Tinybird UI:

Looking good!
Enriching the classic way
Now that we have the e-commerce transactions (line orders) coming in, as well as all the required dimension tables (Customers, Parts and Suppliers), we can start enriching the data with regular SQL joins.
Let’s say we want to know how many parts of each category are sold per year in each country, limiting the results to the years 1995 to 1997. We can create a Pipe and write an SQL query like this one:
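The original Pipe isn’t reproduced here; a sketch of such a query, assuming Star Schema Benchmark column names (lo_orderdate stored as a Date, p_category, c_nation and so on) and singular dimension data source names, could be:

```sql
SELECT
    toYear(lo_orderdate) AS year,
    c_nation AS country,
    p_category AS category,
    sum(lo_quantity) AS parts_sold
FROM lineorders
INNER JOIN part ON part.p_partkey = lineorders.lo_partkey
INNER JOIN customer ON customer.c_custkey = lineorders.lo_custkey
WHERE toYear(lo_orderdate) BETWEEN 1995 AND 1997
GROUP BY year, country, category
ORDER BY year, country, category
```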

In our entry-level account, with over 60M line orders in total, that query can take almost 6 seconds to run. This is fine if you only run it every once in a while, but if you want to expose multiple API endpoints and hit them with several requests per second from live applications, those seconds add up. And things would only get slower as the data grows.
We could obviously make that query faster by throwing a bigger Tinybird account at the problem, which would parallelize it across many more CPU cores; however, we can also speed things up with a different approach.
Enriching at ingestion time
One of the best things about ClickHouse, the columnar database that powers Tinybird, is that it is extremely efficient at storing repetitive data. That means that, unlike in transactional databases, denormalizing data won’t have a huge impact on the amount of data you have to read when running queries.
In Tinybird, you can create “Ingestion” pipes that materialize the result of a query into another datasource. This helps us enrich data as it comes into Tinybird: rather than performing JOINs every time you query the data, you perform those JOINs at ingestion time, and the resulting data is available for you to query in a different datasource.
Here is an example of one of those Ingestion pipes in our UI.

What this Pipe is essentially doing is materializing the result of that query from the “lineorders” datasource to the “sales” datasource, and it happens every time new data gets ingested.
As you can see, it is adding every column from “lineorders” plus a number of other columns from the Parts, Customers and Suppliers dimensions, enabling us to have everything we need for one or more analytics use cases in a single place.

It uses “joinGet”, a ClickHouse function that lets you extract data from a table as if you were looking it up in a dictionary. It is extremely fast, and it requires the tables you extract from to be created with a specific ClickHouse engine (the Join engine); that is why in the query you see those {% code-line %}part_join_by_partkey{% code-line-end %} or {% code-line %}supplier_join_by_suppkey{% code-line-end %} datasources: we create them automatically in these scenarios to enable fast joins at ingestion.
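Sketched as SQL, the materialization query could look roughly like the following (the column names, and the customer join data source, are assumptions that follow the same naming pattern):

```sql
SELECT
    *,  -- every column from lineorders
    joinGet('part_join_by_partkey', 'p_category', lo_partkey) AS part_category,
    joinGet('part_join_by_partkey', 'p_brand1', lo_partkey) AS part_brand,
    joinGet('supplier_join_by_suppkey', 's_nation', lo_suppkey) AS supplier_nation,
    joinGet('customer_join_by_custkey', 'c_nation', lo_custkey) AS customer_nation
FROM lineorders
```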
If we build a query to extract the same results as before but directly through the denormalized “sales” datasource, it would look like this:
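(Again a sketch, reusing the denormalized column names assumed in the materialization above:)

```sql
SELECT
    toYear(lo_orderdate) AS year,
    customer_nation AS country,
    part_category AS category,
    sum(lo_quantity) AS parts_sold
FROM sales
WHERE toYear(lo_orderdate) BETWEEN 1995 AND 1997
GROUP BY year, country, category
ORDER BY year, country, category
```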

If we hit that endpoint again, we get the same results but now in 161ms (vs almost 6 seconds), which is about 37 times faster.
The beauty of this is that:
- we can enrich data as soon as it hits Tinybird,
- we can do it at a rate of hundreds of thousands of requests per second, whether this data comes through Kafka or any other means,
- every time new data gets ingested, only the new rows need to be materialized,
- while all that data is coming in, you can keep hitting your Tinybird real-time endpoints with abandon, and we ensure that results are always up to date, with all the data you require for analysis.
What about you? Do you use Kafka to capture events? Drop us a line or sign up for our waiting list to get a guided onboarding session with us.