---
title: Send events
meta:
    description: Send JSON and NDJSON data to Tinybird by calling the Events API.
---

# Send events

You send events to a [data source](/forward/get-data-in/data-sources) using the Events API or the [tb datasource append](/forward/dev-reference/commands/tb-datasource#tb-datasource-append) CLI command with the `--events` flag. [Ingestion limits](/forward/pricing/limits#ingestion-limits) apply.

## Send JSON events

The following examples show how to append data to a data source in Tinybird Cloud using the Events API or the CLI:

{% tabs initial="Events API" %}

{% tab label="Events API" %}

```shell {% title="Sending batches of JSON events" %}
curl \
-H "Authorization: Bearer <import_token>" \
-d $'{"date": "2020-04-05", "city": "Chicago"}\n{"date": "2020-04-05", "city": "Madrid"}\n' \
'https://api.tinybird.co/v0/events?name=<data_source_name>'
```

{% /tab %}

{% tab label="CLI" %}

```shell
tb --cloud datasource append <data_source_name> --events $'{"date": "2020-04-05", "city": "Chicago"}\n{"date": "2020-04-05", "city": "Madrid"}\n'
```

{% /tab %}

{% /tabs %}

Sending events in batches achieves much higher total throughput than sending them one at a time. To send a batch of JSON events to the Events API, format the events as NDJSON (newline delimited JSON), separating each JSON event with a newline (`\n`) character.
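As a sketch, you can build such a batch in Python from a list of event dicts (the `events` list here is illustrative):

```python
import json

# Hypothetical events to batch; any JSON-serializable dicts work.
events = [
    {"date": "2020-04-05", "city": "Chicago"},
    {"date": "2020-04-05", "city": "Madrid"},
]

# NDJSON: one JSON object per line, each terminated by a newline.
payload = "\n".join(json.dumps(event) for event in events) + "\n"
```

You can then send `payload` as the request body, as in the examples above.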

## Token

The token you use to send events must have the `DATASOURCE:APPEND` scope. Define it in your data source file using `TOKEN {token_name} append`. For more details, see [resource-scoped tokens](/forward/administration/tokens/static-tokens#resource-scoped-tokens).
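For example, a minimal data source file that grants append rights to a token might look like this (the schema and token name are illustrative):

```
SCHEMA >
    `date` Date `json:$.date`,
    `city` String `json:$.city`

TOKEN events_append_token append
```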

## Limits

The Events API delivers a default capacity of:

- Up to 100 requests per second per data source
- Up to 10 MB per request per data source for free users, and up to 100 MB on Developer and Enterprise plans

{% callout type="caution" %}
If a request exceeds the size limit, you'll receive an `HTTP 413 - Request Entity Too Large` error. Split larger payloads into multiple smaller requests to ensure smooth data ingestion.
{% /callout %}
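One way to split a payload is to group NDJSON lines into batches that stay under a byte budget. A minimal sketch in Python (the events and the `max_bytes` value are illustrative; substitute your plan's limit):

```python
def chunk_ndjson(lines, max_bytes):
    """Group NDJSON lines into batches whose encoded size stays under max_bytes."""
    batch, size = [], 0
    for line in lines:
        line_size = len(line.encode("utf-8")) + 1  # +1 for the trailing newline
        if batch and size + line_size > max_bytes:
            yield "\n".join(batch) + "\n"
            batch, size = [], 0
        batch.append(line)
        size += line_size
    if batch:
        yield "\n".join(batch) + "\n"

# Five identical events, capped at 45 bytes per batch, for illustration.
events = ['{"city": "Chicago"}'] * 5
batches = list(chunk_ndjson(events, max_bytes=45))
```

Each item in `batches` can then be sent as the body of a separate request.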

### Rate limit headers

The Events API returns the following headers with the response:

{% table %}
* Header Name
* Description
---
* `X-RateLimit-Limit`
* The maximum number of requests you're permitted to make in the current limit window.
---
* `X-RateLimit-Remaining`
* The number of requests remaining in the current rate limit window.
---
* `X-RateLimit-Reset`
* The remaining time, in seconds, until the current rate limit window resets.
---
* `Retry-After`
* The time to wait before making another request. Only present on 429 responses.
{% /table %}
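A client can use these headers to decide how long to pause before retrying. A minimal sketch in Python (the header values shown are illustrative):

```python
def backoff_seconds(status_code, headers):
    """Return how many seconds to wait before the next request."""
    # On a 429, the Retry-After header is authoritative.
    if status_code == 429 and "Retry-After" in headers:
        return int(headers["Retry-After"])
    # Otherwise, if the window is exhausted, wait until it resets.
    if int(headers.get("X-RateLimit-Remaining", 1)) == 0:
        return int(headers.get("X-RateLimit-Reset", 1))
    return 0

wait = backoff_seconds(429, {"Retry-After": "5"})
```

Sleeping for `wait` seconds before retrying keeps the client within the rate limit.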

### Check the payload size

To avoid hitting the request limit size, you can check your payload size before sending. For example:

{% tabs initial="shell" %}

{% tab label="shell"  %}

```shell
echo '{"date": "2020-04-05", "city": "Chicago"}' | wc -c | awk '{print $1/1024/1024 " MB"}'
```

{% /tab %}

{% tab label="python"  %}

```python
payload = '{"date": "2020-04-05", "city": "Chicago"}\n' * 1000
size_in_mb = len(payload.encode('utf-8')) / (1024 * 1024)
```

{% /tab %}

{% tab label="js"  %}

```javascript
const payload = '{"date": "2020-04-05", "city": "Chicago"}\n'.repeat(1000);
const sizeInMB = new Blob([payload]).size / (1024 * 1024);
```

{% /tab %}

{% /tabs %}

## Compress the data you send

You can compress the data you send to the Events API using Gzip. Compressing events adds overhead to the ingestion process, which can introduce latency, although it's typically minimal.

Here is an example of sending a JSON event compressed with Gzip from the command line:

```shell
echo '{"timestamp":"2022-10-27T11:43:02.099Z","transaction_id":"8d1e1533-6071-4b10-9cda-b8429c1c7a67","name":"Bobby Drake","email":"bobby.drake@pressure.io","age":42,"passport_number":3847665,"flight_from":"Barcelona","flight_to":"London","extra_bags":1,"flight_class":"economy","priority_boarding":false,"meal_choice":"vegetarian","seat_number":"15D","airline":"Red Balloon"}' | gzip > body.gz 

curl \
    -X POST '{% user("apiHost") %}/v0/events?name=gzip_events_example' \
    -H "Authorization: Bearer <AUTH_TOKEN>" \
    -H "Content-Encoding: gzip" \
    --data-binary @body.gz
```
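You can do the same compression in code before posting. A minimal sketch in Python (the HTTP request itself is omitted; attach `body` to a POST with the `Content-Encoding: gzip` header):

```python
import gzip

# NDJSON payload to compress before sending to the Events API.
payload = '{"date": "2020-04-05", "city": "Chicago"}\n'
body = gzip.compress(payload.encode("utf-8"))

# POST `body` to /v0/events with the "Content-Encoding: gzip" header.
```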

## Write operation acknowledgements {% id="write-acknowledgements" %}

When you send data to the Events API, you usually receive an `HTTP 202` response, which indicates that the request was accepted, but doesn't confirm that the data has been committed to the underlying database. This default behavior is useful when guarantees on writes aren't strictly necessary.

Typically, it takes under two seconds to receive a response from the Events API. For example:

```shell
curl \
    -X POST '{% user("apiHost") %}/v0/events?name=events_example' \
    -H "Authorization: Bearer <AUTH_TOKEN>" \
    -d $'{"timestamp":"2022-10-27T11:43:02.099Z"}'

< HTTP/2 202 
< content-type: application/json
< content-length: 42
< 
{"successful_rows":1,"quarantined_rows":0}
```

If your use case requires a guarantee that data is committed, use the `wait` parameter. The `wait` parameter is a boolean that defaults to `false`, which is equivalent to omitting the parameter entirely.

Using `wait=true` with your request asks the Events API to wait for acknowledgement that the data you sent has been committed to the underlying database. You then receive an `HTTP 200` response confirming the data has been committed.

Adding `wait=true` to your request can result in a slower response time. Use a timeout of at least 10 seconds when waiting for the response. For example:

```shell 
curl \
    -X POST '{% user("apiHost") %}/v0/events?name=events_example&wait=true' \
    -H "Authorization: Bearer <AUTH_TOKEN>" \
    -d $'{"timestamp":"2022-10-27T11:43:02.099Z"}'

< HTTP/2 200 
< content-type: application/json
< content-length: 42
< 
{"successful_rows":1,"quarantined_rows":0}
```

{% callout type="tip" %}
Log your requests and responses from and to the Events API. This helps you get visibility into any failures.
{% /callout %}