Getting started with the UI¶
This section covers ingesting data, querying it, and publishing your first API using the Tinybird UI. Prefer the CLI? Get started with the CLI here.
Example use case¶
For this guide, we are going to build an API that tells us what the top 10 most searched products are in an eCommerce website.
We’ll ingest some eCommerce events that track the actions of our users, such as viewing an item, adding items to their cart, or going through checkout. This data is available as a CSV file with 50 million rows.
Once we’ve ingested the data, we’ll write some queries to filter, aggregate & transform the data into the top 10 list.
Finally, we’ll publish that top 10 result as an HTTP API.
Your first Workspace¶
Wondering how to create an account? It’s free, start here.
After you create your account, you’ll be prompted to choose a region to work in. There are two regions, EU and US, and you’re welcome to pick the region that works best for you.
You’ll then be asked to set your Workspace name. You can call it whatever you want; people generally name their Workspace after the project they are working on.
For this tutorial, you can ignore the box that asks if you want to use a Starter Kit; just leave the default None option selected.
The GIF below shows what this step looks like.

Creating a Data Source¶
Tinybird is very flexible and can import data from many different sources, but let’s start off simple with a CSV file that Tinybird has published for you.
Step 1. Add the new Data Source¶
In your Workspace, you’ll find the Data Sources section at the bottom of the left side navigation.
Click the Plus (+) icon to add a new Data Source (see Mark 1 below).

In the modal that opens, click on the Remote URL connector (see Mark 1 below).

In the next window, ensure that csv is selected (see Mark 1 below), and then paste the following URL into the text box (see Mark 2 below).
https://storage.googleapis.com/tinybird-assets/datasets/guides/events_50M_1.csv
Finally, click the Add button to finish (see Mark 3 below).

On the next screen you can give the Data Source a name & description (see Mark 1 below). You can also see a preview of the schema & data (see Mark 2 below).
Let’s change the name to something more descriptive. Click on the name and enter the name shopping_data.

Step 2. Start the data import¶
When you’re happy, click Continue to start importing the data (see Mark 1 below).

You’ll be taken back to your Workspace where your Data Source has been created. It will probably be empty to start with, but if you refresh the window, you’ll start to see data appear.
Loading the data doesn’t take very long, but you don’t need to wait for it to finish!
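While the import runs, you can also peek at the raw file outside the UI if you’re curious about its shape. Here is a minimal sketch in Python, using only the standard library; it streams the file and stops after the first few rows, so it never downloads all 50 million of them.

import csv
import urllib.request

# The same file we just pointed Tinybird at
url = "https://storage.googleapis.com/tinybird-assets/datasets/guides/events_50M_1.csv"

with urllib.request.urlopen(url) as response:
    # Decode the byte stream lazily, one line at a time
    lines = (line.decode("utf-8") for line in response)
    for i, row in enumerate(csv.reader(lines)):
        print(row)
        if i >= 4:  # the first few rows are enough to see the columns
            break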
You can now move on to creating your first Pipe.

Creating a Pipe¶
In Tinybird, we write all of our SQL queries inside Pipes. One Pipe can be made up of many individual SQL queries, and we call these individual queries Nodes; each Node is simply a SQL SELECT statement. A Node can query the result of another Node in the same Pipe. This means that you can break a large query down into several smaller queries and chain them together, making it much easier to build & maintain in the future.
Step 1. Add the new Pipe¶
Let’s add a new Pipe by clicking on the Plus (+) icon next to the Pipes category in the left side navigation (see Mark 1 below).

This will add a new Pipe with an auto-generated default name. Just like with a Data Source, you can click on the name & description to change it (see Mark 1 below).
Let’s call this Pipe top_10_searched_products.

Step 2. Filter the data¶
At the top of your new Pipe is the first Node, which is pre-populated with a simple SELECT over the data in your Data Source (see Mark 1 below). Before we start modifying the query in the Node, click the Run button (see Mark 2 below).

Clicking Run executes the query in the Node, and will show us a preview of the query result (see Mark 1 below). You can individually execute any Node in your Pipe to see the result, so it’s a great way to check that your queries are doing what you expect.

Now, let’s modify the query to do something more interesting. In this Pipe, we want to create a list of the top 10 most searched products. If you take a look at the data, you’ll notice an event column, which describes what kind of event happened. This column has various values, including view, search, buy and more. We are only interested in rows where the event is search, so let’s modify the query to filter our rows.
Use the following query:
SELECT * FROM shopping_data
WHERE event == 'search'
Click Run again when you’re done. This Node is now applying a filter to the data, so we only see the rows that we care about.

At the top of the Node, you’ll notice that it has been named after the Pipe. Just as before, we can rename the Node & give it a description, to help us remember what the Node is doing (see Mark 1 below).
Call this Node search_events.

Step 3. Aggregate the data¶
Next, we want to work out how many times each individual product has been searched for. This means we need to count the events and aggregate by the product ID. To keep our queries clean, let’s create a second Node to do this aggregation.
Scroll down a little, and you’ll see a second Node is suggested at the bottom of the page (see Mark 1 below).

The really cool thing here is that this second Node can query the result of the search_events Node we just created, meaning that we do not need to duplicate our WHERE filter in this next query, as we’re already querying the filtered rows.
We’ll use the following query for our next Node:
SELECT product_id, count() as total FROM search_events
GROUP BY product_id
ORDER BY total DESC
Click Run again to see the results of the query.
Don’t forget to rename this Node; your future self will thank you! Let’s call it aggregate_by_product_id.

Step 4. Transform the result¶
Finally, let’s create the last Node, the one we will publish as an API, which limits the results to the top 10 products.
We’ll use the following query:
SELECT product_id, total FROM aggregate_by_product_id
LIMIT 10
Name this Node endpoint so you know it’s the Node that will be published.
Click Run to preview the results.

With that, we have built the logic required to show the top 10 most searched products.
Publishing & using an API¶
Now, let’s say we have an application that wants to get this top 10 result. The magic of Tinybird is that we can choose any query and instantly publish the results as a REST API. Any application can simply hit the API and get the very latest result.
Step 1. Publish the API¶
To publish a query as an API, click the Create API Endpoint button in the top right corner of the Workspace (see Mark 1 below). Then, click the Create API Endpoint option.

You will then see a list of your Nodes, and you can select the Node you want to publish. From the list, click the endpoint Node (see Mark 1 below).

That’s really all there is to it! You’ll see a page that gives you details about your new API, including some observability charts, links to auto-generated API docs, and some code snippets to help you integrate it with your application.

Step 2. Test the API¶
Scroll down to the Sample usage section, and copy the HTTP URL from the snippet box (see Mark 1 below).

Open this URL in a new tab in your browser.
Hitting the API triggers your Pipe to execute, and you get a nice JSON-formatted response with the results.
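If you’d rather call the endpoint from code than from a browser tab, a minimal sketch in Python looks like this. The URL below is only a placeholder; copy your real one (host, Pipe name and read token included) from the Sample usage section. The snippet also assumes the standard Tinybird JSON format, where the rows live in a data array.

import json
import urllib.request

# Placeholder: replace with the exact URL you copied from Sample usage
url = "https://api.tinybird.co/v0/pipes/top_10_searched_products.json?token=<YOUR_READ_TOKEN>"

with urllib.request.urlopen(url) as response:
    payload = json.load(response)

# Each entry in "data" is one row of the query result
for row in payload["data"]:
    print(row["product_id"], row["total"])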

🎉 Celebrate 🎉¶
Congrats! You’ve finished creating your first API in Tinybird!
If you’d like to try doing this in the CLI, follow the Get started with the CLI guide.
Ready for something new? Tinybird is great for working with streaming event data & you can integrate Tinybird’s Events API directly into your application to ingest events with no additional infrastructure. Check out this guide for the Events API.
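To give you a flavour, a minimal sketch of sending a single event to the Events API from Python might look like the following. The Data Source name, token, and event fields are illustrative placeholders; the Events API guide covers the real details, including the host for your region.

import json
import urllib.request

# Placeholder token and Data Source name, for illustration only
token = "<YOUR_APPEND_TOKEN>"
event = {"event": "search", "product_id": "p-1234", "timestamp": "2024-01-01 12:00:00"}

request = urllib.request.Request(
    "https://api.tinybird.co/v0/events?name=shopping_data",
    data=json.dumps(event).encode("utf-8"),  # one JSON object per line (NDJSON)
    headers={"Authorization": f"Bearer {token}"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))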