Intro to exposing data¶
Typically, the process we're describing would require both a data engineering team and a backend team. After the data is ingested and transformed, the backend team would start building an API to let third-party applications consume and query it. Among other things, they'd have to take care of user authentication, permissions, scalability, security, and so on.
That process can take days or weeks, and most of the time it leads to complex data stacks and a lack of agility.
With Tinybird, the same can be accomplished in minutes, without dedicated teams to develop, support, and maintain your APIs. Let's see how.
Creating API endpoints using the UI¶
For every Pipe, you can create an API endpoint that exposes the result of one of its nodes (whichever you choose).
If you have followed the previous two guides, you should already have a Pipe named ecommerce_example with several transformation nodes in it. Since we want our first API endpoint to return sales per day, we will add a final node named sales_per_day to the existing Pipe, containing the following query:
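A sketch of what such a node could contain (the date and price column names are assumptions about the enriched schema; adapt them to your own Pipe):

```sql
-- Illustrative only: aggregate the enriched events into daily totals.
-- Column names (date, price) are assumptions, not the guide's exact query.
SELECT
    date,
    count() AS total_sales,
    sum(price) AS total_revenue
FROM enriched_events_w_date
GROUP BY date
ORDER BY date
```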
You will notice this node takes a while to run. Remember, it is working with 100M rows that are enriched on the fly. Later guides cover ways to optimize its performance.
Now, to create an API endpoint that exposes this last calculation, click the "Create API endpoint" button in the top-right corner and select sales_per_day as the output node.
After choosing the node, you will be taken to your API endpoint view. This page shows information about your API endpoint, provides useful metrics for monitoring its performance, gives you access to the requests log, and generates documentation for the endpoint that can be shared within or outside your team with a single click.
This page is automatically refreshed with any change you make to your Pipe, so your documentation never gets outdated.
Adding dynamic parameters¶
One of the most powerful capabilities of Tinybird is that you can define parameters as part of your queries to create dynamic API endpoints. Let's try it by slightly changing the enriched_events_w_date node: instead of a fixed filter in the WHERE clause, we will introduce a couple of dynamic ones:
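A sketch of what the modified node might look like (the leading % marks the query as templated; the table name and default dates are placeholders to adapt to your own data):

```sql
%
SELECT *
FROM ecommerce_events
WHERE
    date >= {{Date(start_date, '2020-01-01')}}
    AND date <= {{Date(end_date, '2020-12-31')}}
```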
The query above defines two parameters, start_date and end_date. For testing purposes, we've added default values for both. We will use them in our API endpoint to filter the results dynamically by a desired date range.
Here are a few considerations regarding queries containing dynamic parameters:
- Queries that contain dynamic parameters must start with the % character.
- Parameters are defined using a templating language and must be enclosed in double curly brackets. You can learn more about the templating language in our docs.
- No matter which node(s) you use dynamic parameters in, they become available in the published API endpoint. This is especially useful when performing advanced queries over very large amounts of data.
If you go back to the API endpoint page, you will see that documentation has been automatically generated for your parameters.
Once your API endpoint is ready, it's just a matter of sharing the integration docs with your development team. Click the "Share docs" button in the top-right corner and copy the desired link.
As the API endpoint page is OpenAPI 3.0 compatible, you can use the link provided in the share modal to open it with Swagger (or any other OpenAPI 3.0-compatible client).
Using Auth Tokens for sharing API access¶
By default, every endpoint you create has a token with read permissions associated with the Pipe the endpoint belongs to. In the Auth tokens page, you can manage all your Auth tokens and add different scopes to them.
A common practice is to have a single Auth token per application, with access to a set of API endpoints. This lets you track API usage per application (per Auth token) and control access to the different endpoints without changing anything in the third-party applications that use your API.
Sharing docs with the development teams:
Using Auth tokens and scopes also allows you to build a complete documentation set in a few clicks.
Using SQL filters to restrict access to certain information¶
It's very common to have different security requirements depending on the application, so that, depending on the Auth token in use, you may want to grant access to all of your data or only a subset of it. SQL filters on Auth tokens are especially useful for this. Although we won't use them in this guide, you can always set a SQL filter on any Auth token when editing its scope.
Tokens can also be created and managed programmatically with our API.
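As a sketch of what that could look like, the request below builds a call to the Tokens API to create a token scoped to read a single Pipe (the token name, pipe name, and scope string are illustrative; verify the exact parameters against the API reference before use):

```python
from urllib.parse import urlencode

# Hypothetical values -- replace with your own token name and pipe name.
params = urlencode({
    "name": "mobile_app_token",
    "scope": "PIPES:READ:sales_per_day",  # read access to one endpoint
})
create_url = f"https://api.tinybird.co/v0/tokens?{params}"

# To actually create the token, POST this URL authenticated with an
# admin token, e.g.:
#   curl -X POST "$CREATE_URL" -H "Authorization: Bearer <admin-token>"
print(create_url)
```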
Specifying format types¶
API endpoints created in Tinybird can return JSON, CSV, and many other response formats. You just need to change the format parameter of the URL.
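The format is selected by the extension appended to the pipe name in the URL. A small sketch of how the URLs differ (pipe name and token are placeholders):

```python
# Build a Tinybird endpoint URL; the response format is chosen by the
# extension (.json, .csv, ...) appended to the pipe name.
def endpoint_url(pipe: str, token: str, fmt: str = "json") -> str:
    return f"https://api.tinybird.co/v0/pipes/{pipe}.{fmt}?token={token}"

print(endpoint_url("sales_per_day", "<your-token>"))         # JSON response
print(endpoint_url("sales_per_day", "<your-token>", "csv"))  # CSV response
```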
Integrating your API endpoints with Google Sheets:
Using Tinybird API endpoints from Google Sheets is quick and easy. Try adding =IMPORTDATA("https://api.tinybird.co/v0/pipes/<pipe_name>.csv?token=<your-token>") in any cell.
Making it faster¶
We don't do any caching on our end, so the speed of your queries depends on their complexity and on how much work the database has to do for each one. The more work you can shift to ingestion time, the faster your queries will be. Check out this guide to learn how to use materialized columns to do it.
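As a rough sketch of the idea (the table and column names here are hypothetical; Tinybird Data Sources use ClickHouse syntax), a materialized column precomputes a value at ingestion time so each query doesn't have to:

```sql
-- Hypothetical table/columns: compute the event date once, at insert time,
-- instead of calling toDate(timestamp) for every row on every query.
ALTER TABLE ecommerce_events
    ADD COLUMN event_date Date MATERIALIZED toDate(timestamp);
```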
Check out our free course on real-time analytics
We've put together a course on Principles of Real-Time Analytics so you can learn more about query complexity and the other factors to consider when working with large amounts of data.