Amazon S3

From files in Amazon S3 to low-latency APIs

Easily exploit all the data you have in S3 to build data products. Automate ingestion and publish low-latency API endpoints with Tinybird.

Trusted by companies like...

Plytix
The Hotels Network
Feedback Loop
Stay
Audiense
Situm
Genially

No hassle

You have CSV files in S3 buckets. Now you can easily build APIs on top of them.

Fast & Smart

We detect new files in your S3 buckets and ingest them automatically, at up to millions of rows per second.

Easy integration

SQL based

Once ingested, run fast, serverless transformations using our Data Pipes.

Secure

Use Auth tokens to control access to API endpoints. Implement access policies as needed, with support for row-level security.
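As a sketch of how this maps onto the Tokens REST API (the admin token and the token name here are placeholders, and `avg_triptime` is the example endpoint used further down this page), a read-only token scoped to a single endpoint could be created like this:

```shell
# Placeholder: ADMIN_TOKEN must be a token allowed to create other tokens.
ADMIN_TOKEN="<your_admin_token>"

# Scope grammar is RESOURCE:PERMISSION:name — here, read access to one pipe only.
SCOPE="PIPES:READ:avg_triptime"

# Print the call rather than sending it, since ADMIN_TOKEN is a placeholder:
echo curl -X POST "https://api.tinybird.co/v0/tokens" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -d "name=endpoint_read_token" -d "scope=$SCOPE"
```

Row-level security works the same way: the scope attached to the token carries a SQL filter, so every query made with that token only sees the rows the filter allows.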

Build in minutes, not weeks

Ingest, query and build APIs for your data at scale in a matter of minutes. Forget about ETLs, performance tuning and complex security rules.

$ tb datasource append tripdata https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2021-03.csv 
🥚 starting import process 
🐥 done

** Appended 973916 new rows 
** Total rows in tripdata: 23854144
** Data appended to data source 'tripdata' successfully! 
** Data pushed to tripdata

NODE avg_triptime_endpoint
SQL >
  SELECT
    toDayOfMonth(pickup_datetime) as day,
    avg(dateDiff('minute', pickup_datetime, dropoff_datetime)) as avg_trip_time_minutes
  FROM tripdata
    {% if defined(start_date) and defined(end_date) %}
    WHERE pickup_datetime BETWEEN {{Date(start_date)}} AND {{Date(end_date)}}
    {% end %}
  GROUP BY day

$ tb push endpoints/avg_triptime.pipe 
** Processing avg_triptime.pipe 
** Building dependencies 
** Creating avg_triptime 
** Token read API token not found, creating one 
=> Test endpoint with: 
$ curl "https://api.tinybird.co/v0/pipes/avg_triptime.json?token=<TOKEN>&start_date=2021-01-01&end_date=2021-03-01"
** 'avg_triptime' created

1

Ingest CSV files fast and easily

Automate ingestion of files from your S3 bucket through our REST API. Transform or augment on ingest if needed.
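As a sketch of what that REST call can look like (the token is a placeholder, and `tripdata` is the example Data Source from the CLI session above), a CSV sitting in S3 is appended by handing its URL to the Data Sources API:

```shell
# Placeholder: set TOKEN to a Tinybird auth token with append rights.
TOKEN="<your_auth_token>"
FILE_URL="https://s3.amazonaws.com/nyc-tlc/trip+data/fhv_tripdata_2021-03.csv"

# Compose the request; mode=append adds rows to the existing Data Source.
API="https://api.tinybird.co/v0/datasources?name=tripdata&mode=append"

# Print the call rather than sending it, since TOKEN is a placeholder:
echo curl -X POST "$API" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "url=$FILE_URL"
```

Trigger the same call from an S3 event notification (for example via a Lambda function) and every new file dropped in the bucket lands in the Data Source automatically.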

2

Create your Pipes

Filter, clean or enrich your data using Pipes, a new way of chaining SQL queries designed to ease development and maintenance.
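For instance (a sketch reusing the `tripdata` example above; the node names are illustrative), each node in a Pipe is a plain SQL query that can select from the node before it, so a cleaning step and an aggregation step stay separate and readable:

```
NODE clean_trips
SQL >
  SELECT pickup_datetime, dropoff_datetime
  FROM tripdata
  WHERE dropoff_datetime > pickup_datetime

NODE daily_avg
SQL >
  SELECT
    toDate(pickup_datetime) AS day,
    avg(dateDiff('minute', pickup_datetime, dropoff_datetime)) AS avg_trip_time_minutes
  FROM clean_trips
  GROUP BY day
```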

3

Publish your API endpoints

Securely share access to your data in one click, and get full OpenAPI and Postman documentation for your APIs.

We accelerate your data, no matter where it is.

Connect data from Relational Databases, Data Warehouses and Data Streams.

Amazon Redshift

Amazon S3

Google BigQuery

Apache Kafka

PostgreSQL

MySQL

Snowflake

FAQ