
You can now see how much data your endpoints scan in real time

Break down every API request by scan size (rows or bytes) and token.
Juan Madurga
Senior Backend Engineer
May 25, 2022

What’s changed?

We have enriched the {% code-line %}pipe_stats_rt{% code-line-end %} service Data Source to include the data processed and the token used for every request, along with other request metadata. The new columns are:

  • {% code-line %}pipe_name{% code-line-end %}: name of the requested pipe
  • {% code-line %}read_bytes{% code-line-end %}: bytes processed for the request
  • {% code-line %}read_rows{% code-line-end %}: number of rows read in the request
  • {% code-line %}request_id{% code-line-end %}: identifies the request
  • {% code-line %}token{% code-line-end %}: the ID of the token
  • {% code-line %}token_name{% code-line-end %}: the name of the token
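With these columns you can already pull the heaviest recent requests. Below is a minimal sketch that builds the SQL and sends it through the Query API; the host, the placeholder token, the {% code-line %}tinybird.{% code-line-end %} prefix for service Data Sources, and the {% code-line %}top_requests_sql{% code-line-end %} helper name are assumptions for illustration:

```python
# Sketch: fetch the heaviest recent requests from pipe_stats_rt via the
# Query API (/v0/sql). TOKEN is a placeholder; the tinybird. prefix for
# service Data Sources is an assumption here.
import urllib.parse
import urllib.request

TOKEN = "<your-token>"  # a token with access to the service Data Source


def top_requests_sql(limit: int = 10) -> str:
    """Build SQL selecting the new pipe_stats_rt columns, heaviest first."""
    return (
        "SELECT pipe_name, request_id, token_name, read_bytes, read_rows "
        "FROM tinybird.pipe_stats_rt "
        f"ORDER BY read_bytes DESC LIMIT {limit}"
    )


def run(sql: str) -> bytes:
    """Send the query to the Query API and return the raw response body."""
    url = "https://api.tinybird.co/v0/sql?q=" + urllib.parse.quote(sql)
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```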

Also, the service Data Source {% code-line %}pipe_stats{% code-line-end %} (with aggregations per day) has been extended with three new columns:

  • {% code-line %}pipe_name{% code-line-end %}: name of the requested pipe
  • {% code-line %}read_bytes_sum{% code-line-end %}: total bytes processed per day
  • {% code-line %}read_rows_sum{% code-line-end %}: total number of rows read per day
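A natural use of the aggregated columns is a per-day rollup by Pipe. The sketch below builds that SQL; the {% code-line %}date{% code-line-end %} column name is an assumption (only the three new columns are documented above), as is the {% code-line %}daily_rollup_sql{% code-line-end %} helper name. Send the query through the Query API as in the previous example:

```python
# Sketch: daily processed-data rollup from the pipe_stats service Data
# Source. The `date` column name is assumed, not taken from this post.


def daily_rollup_sql(days: int = 7) -> str:
    """Build SQL summing the new *_sum columns per day and Pipe."""
    return (
        "SELECT date, pipe_name, "
        "sum(read_bytes_sum) AS total_bytes, sum(read_rows_sum) AS total_rows "
        "FROM tinybird.pipe_stats "
        f"WHERE date >= today() - {days} "
        "GROUP BY date, pipe_name "
        "ORDER BY date DESC, total_bytes DESC"
    )
```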

Query API requests are included too. Just use the {% code-line %}query_api{% code-line-end %} value to filter by {% code-line %}pipe_id{% code-line-end %} or {% code-line %}pipe_name{% code-line-end %}.
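For example, the filter described above isolates Query API traffic. A small sketch (the {% code-line %}query_api_usage_sql{% code-line-end %} helper name is ours, for illustration):

```python
# Sketch: total the data processed by Query API requests in pipe_stats_rt
# by filtering on the special query_api value described above.


def query_api_usage_sql() -> str:
    """Build SQL totalling data processed by Query API requests."""
    return (
        "SELECT count() AS requests, "
        "sum(read_bytes) AS total_bytes, sum(read_rows) AS total_rows "
        "FROM tinybird.pipe_stats_rt "
        "WHERE pipe_name = 'query_api'"
    )
```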

Finally, since we’ve improved {% code-line %}pipe_stats_rt{% code-line-end %}, we figured we’d offer an easy way to explore that performance in the Tinybird UI. If you want to compare all your published Pipes, the dashboard now shows the average and total processed data for each.

And, if you want to check just one Pipe in detail, you’ll now find a graph showing the average processed data over time on the API endpoint’s information page. Use this graph to check usage in real time and compare performance across time periods.

Why does this matter?

First of all, the upgrade to {% code-line %}pipe_stats_rt{% code-line-end %} means you can now create service Endpoints to monitor the amount of data processed in your Workspace. This is big news.

In addition:

  • You can identify which endpoints and requests are consuming the most data, so you can optimize them.
  • You can spot tokens that may have been leaked.
  • You can find endpoint parameters that consume comparatively more data.
  • You can detect changes in endpoints by looking at the scan size.
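As a sketch of the token-audit idea above: total the data each token scans, so a token that suddenly reads far more than usual stands out. The {% code-line %}token_audit_sql{% code-line-end %} helper name is an assumption; run the query through the Query API as before:

```python
# Sketch: rank tokens by total data scanned in pipe_stats_rt. A token
# reading far more than expected may have been leaked or misused.


def token_audit_sql() -> str:
    """Build SQL grouping scan totals by token."""
    return (
        "SELECT token_name, count() AS requests, "
        "sum(read_bytes) AS total_bytes "
        "FROM tinybird.pipe_stats_rt "
        "GROUP BY token_name "
        "ORDER BY total_bytes DESC"
    )
```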

Get started with the guide

Ready to optimize your endpoints? You’ll find examples like these in the guide we’ve prepared on analyzing the performance and consumption of your API Endpoints.

Become a better data developer

Subscribe to the tinytales newsletter for monthly tips on building better data products.