Best practices for writing faster SQL queries

In this guide you'll learn the best practices for working with huge amounts of data. We recommend that all Tinybird users read it.

Anyone who processes huge amounts of data picks up best practices over the years that reduce the time and memory needed to handle it.

At Tinybird, we don't want our users to waste any time, so we have created a pipe they can consult whenever they need a reminder of these best practices and want to put them into practice as soon as possible.

We like to call them the main rules for working with data. They are:

  • Rule № 1 ⟶ The best data is the one you don’t write.
  • Rule № 2 ⟶ The second best data is the one you don’t read. (The less data you read, the better.)
  • Rule № 3 ⟶ Sequential reads are 100x faster.
  • Rule № 4 ⟶ The less data you process (after reading it), the better.
  • Rule № 5 ⟶ Complex operations later in the processing pipeline.

Let's go through them one by one, analyzing the performance improvement after applying each rule. We will use the well-known NYC Taxi Trip dataset. You can get a sample here and import it directly by creating a new Data Source from your dashboard.

Rule 1: The best data is the one you don’t write

This rule seems obvious, but it's not always followed. There is no reason to save data that you don't need: it increases the storage you use (and the money you spend!) and makes queries slower, so it only has disadvantages.

Rule 2: The second best data is the one you don’t read

To avoid reading data that you don't need, you should apply filters as soon as possible.

For this example, let's suppose we want a list of the trips whose distance is greater than 10 miles and that took place between '2017-01-31 14:00:00' and '2017-01-31 15:00:00'. Additionally, we want to get those trips ordered by date.

Let's see the difference between applying the filters at the end or at the beginning of the pipe.

Let's start with the first approach, ordering all the data by date:
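
A minimal sketch of that first node could look like this (the Data Source name nyc_taxi is an assumption; use whatever name your import produced):

    SELECT *
    FROM nyc_taxi
    ORDER BY tpep_pickup_datetime ASC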

Once the data is sorted, we filter it:
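
A second node then filters the output of the first one (the node name sort_by_date is assumed):

    SELECT *
    FROM sort_by_date -- previous node in the pipe
    WHERE trip_distance > 10
      AND tpep_pickup_datetime BETWEEN '2017-01-31 14:00:00' AND '2017-01-31 15:00:00'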

This first approach takes around 30-60 ms, adding up the time of both steps.

Pay attention to the statistics of the first step, 139.26k scanned rows and 10.31MB of data, versus those of the second one, 24.58k scanned rows and 1.82MB of data. Why scan 139.26k rows in the first place if we really only need to scan 24.58k?

It's important to be aware that these two values directly impact the query execution time and also affect other queries you may be running at the same time. IO bandwidth is also something you need to keep in mind.

Let’s see what happens if the filter is applied before the sorting:
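
A sketch of the filter-first version, with everything in a single node this time:

    SELECT *
    FROM nyc_taxi
    WHERE trip_distance > 10
      AND tpep_pickup_datetime BETWEEN '2017-01-31 14:00:00' AND '2017-01-31 15:00:00'
    ORDER BY tpep_pickup_datetime ASC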

As you can see, when the filter is applied before the sorting, the query takes only 1-10 ms. The data read is 1.82MB and the number of rows read is 24.58k, both much smaller than in the first step of the previous approach.

This significant difference happens because in the first approach, we are sorting all the data available (even the data that we don't need for our query) while in the second approach, we are sorting just the rows we need.

Filtering is the fastest operation, so always filter first.

Rule 3: Sequential reads are 100x faster

To be able to carry out sequential reads, it's essential to define the indexes correctly, based on the queries that are going to be performed. (Although these indexes should be defined in the Data Source, here we will simulate the effect by sorting the data by different columns.)

For example, if we want to query the data by date, let's compare what happens when the data is sorted by date vs when it's sorted by any other column.

In the first approach, we will sort the data by another column, for instance, "passenger_count":
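
For example (again assuming the Data Source is called nyc_taxi):

    SELECT *
    FROM nyc_taxi
    ORDER BY passenger_count ASC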

Once we have the data sorted by "passenger_count", we filter it by date:
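
For illustration, let's reuse the same one-hour window as before (the node name sort_by_passenger_count is assumed):

    SELECT *
    FROM sort_by_passenger_count -- previous node in the pipe
    WHERE tpep_pickup_datetime BETWEEN '2017-01-31 14:00:00' AND '2017-01-31 15:00:00'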

This approach takes around 5-10 ms, the number of scanned rows is 26.73k and the size of data is 1.98MB.

For the second approach, we are sorting the data by date:
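
The first node of this second approach simply sorts by the pickup datetime:

    SELECT *
    FROM nyc_taxi
    ORDER BY tpep_pickup_datetime ASC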

Once it’s sorted by date, we filter it:
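
The second node applies the same date filter as before (the node name sort_by_pickup_date is assumed):

    SELECT *
    FROM sort_by_pickup_date -- previous node in the pipe
    WHERE tpep_pickup_datetime BETWEEN '2017-01-31 14:00:00' AND '2017-01-31 15:00:00'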

We can see that if the data is sorted by date and the query filters by date, it takes just 1-2 ms, scanning 10.35k rows and 765.53KB of data. Remember that the first approach took around 5-10 ms, scanning 26.73k rows and 1.98MB of data.

It's important to highlight that the more data we have, the bigger the difference between the two approaches becomes. When dealing with tons of data, sequential reads can be 100x faster.

Therefore, it's essential to define the indexes taking into account the queries that will be made.

Rule 4: The less data you process (after reading it), the better

Once the data is read, process as little of it as possible: if you only need a few columns, retrieve just those.

Let's suppose that for our use case, we just need three columns: vendorid, tpep_pickup_datetime and trip_distance.

Let's analyze the difference between retrieving all the columns vs just the ones we need.

When retrieving all the columns, we need around 140-180 ms and the size of data is 718.55MB:
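
For reference, the retrieve-everything version is just a full SELECT over the Data Source:

    SELECT *
    FROM nyc_taxi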

However, when retrieving just the columns we need, it takes around 35-60 ms:
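
A sketch of the trimmed-down version, listing only the three columns we care about:

    SELECT
        vendorid,
        tpep_pickup_datetime,
        trip_distance
    FROM nyc_taxi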

As mentioned before, the amount of scanned data is much smaller, now just 155.36MB. In analytical (columnar) databases, if you don't select a column, its files are simply not read, which is much more efficient.

Therefore, it's strongly recommended to process just the data needed.

Rule 5: Complex operations later in the processing pipeline

Complex operations, such as joins or aggregations, should be performed as late as possible in the processing pipeline. This is because you should filter the data in the first steps, so by the time you reach the complex operations there are fewer rows to process and the cost of executing them is lower.

First, let's aggregate the data before filtering it:
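
The exact aggregation doesn't matter for the comparison; as an illustrative sketch (the metric is an assumption, not necessarily the one behind the numbers below), let's compute the average trip distance per pickup date:

    SELECT
        toDate(tpep_pickup_datetime) AS pickup_date,
        avg(trip_distance) AS avg_distance
    FROM nyc_taxi
    GROUP BY pickup_date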

Now, let's apply the filter:
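
A second node filters the aggregated result (the node name aggregate_by_day and the chosen date are assumptions):

    SELECT *
    FROM aggregate_by_day -- previous node in the pipe
    WHERE pickup_date = '2017-01-31'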

If the filter is applied after aggregating the data, it takes around 50-70 ms to retrieve the data (adding up both steps), scanning 9.71m rows and 77.68MB of data.

Let’s see what happens if we filter before aggregating the data:
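
Filtering first, the whole thing fits in a single node (same illustrative aggregation and date as above):

    SELECT
        toDate(tpep_pickup_datetime) AS pickup_date,
        avg(trip_distance) AS avg_distance
    FROM nyc_taxi
    WHERE toDate(tpep_pickup_datetime) = '2017-01-31'
    GROUP BY pickup_date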

Doing it this way takes only 20-40 ms, although the number of scanned rows and the size of the data are the same as in the previous approach.

Therefore, it's recommended to perform complex operations as late as possible in the processing pipeline.
