Service Data Sources

In addition to the data sources you upload, Tinybird provides Service Data Sources that allow you to inspect what's going on in your account.

For example, you can get real-time stats about API calls or a log of every operation over your Data Sources.

They are analogous to the system tables in any database, except that in this case they contain information about your usage of the service.

Note: Service Data Sources can't be used in Materialized View queries.

Queries made to Service Data Sources are free of charge and do not count towards your usage. However, calls to API Endpoints that use Service Data Sources do count towards API rate limits.

Organization-level Service Data Sources can only be queried by Organizational Admins. These are documented in the Consumption overview docs.

These are the available Data Sources:

tinybird.snapshot_views

Contains stats about your snapshot views.

| Field | Type | Description |
| --- | --- | --- |
| date | Date | View creation date. |
| snapshot_id | String | Snapshot id. It's the same id other users can see in the URL (https://ui.tinybird.co/snapshot/:snapshot_id). |
| views | Int32 | Visit count. |
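
For example, a minimal sketch that ranks snapshots by total views (the LIMIT is an arbitrary choice):

-- rank snapshots by total views; LIMIT 10 is arbitrary
SELECT
  snapshot_id,
  sum(views) AS total_views
FROM tinybird.snapshot_views
GROUP BY snapshot_id
ORDER BY total_views DESC
LIMIT 10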

tinybird.pipe_stats_rt

Contains information about all requests made to your API Endpoints in real time.

This Data Source has a TTL of 7 days. If you need to query data older than 7 days, use the data aggregated by day available in tinybird.pipe_stats.

| Field | Type | Description |
| --- | --- | --- |
| start_datetime | DateTime | API call start date and time. |
| pipe_id | String | Pipe id as returned in our Pipes API (query_api in case it is a Query API request). |
| pipe_name | String | Pipe name as returned in our Pipes API (query_api in case it is a Query API request). |
| duration | Float | API call duration in seconds. |
| read_bytes | UInt64 | API call read data in bytes. |
| read_rows | UInt64 | API call rows read. |
| result_rows | UInt64 | Rows returned by the API call. |
| url | String | URL (token param is removed for security reasons). |
| error | UInt8 | 1 if the query returned an error, else 0. |
| request_id | String | API call identifier returned in the x-request-id header. Format is ULID string. |
| token | String | API call token identifier used. |
| token_name | String | API call token name used. |
| status_code | Int32 | API call returned status code. |
| method | String | API call method, POST or GET. |
| parameters | Map(String, String) | API call parameters used. |
| release | String | Semantic version of the release. |
| user_agent | Nullable(String) | User-Agent HTTP header from the request. |
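
For example, a minimal sketch that surfaces per-endpoint error counts and latency over the last hour (the one-hour window is an arbitrary choice):

-- per-endpoint errors and latency; the 1-hour window is arbitrary
SELECT
  pipe_name,
  count() AS requests,
  countIf(error = 1) AS errors,
  avg(duration) AS avg_duration_s,
  quantile(0.95)(duration) AS p95_duration_s
FROM tinybird.pipe_stats_rt
WHERE start_datetime > now() - INTERVAL 1 HOUR
GROUP BY pipe_name
ORDER BY errors DESC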

tinybird.pipe_stats

Aggregates the request stats in tinybird.pipe_stats_rt by day.

| Field | Type | Description |
| --- | --- | --- |
| date | Date | Request date. |
| pipe_id | String | Pipe id as returned in our Pipes API. |
| pipe_name | String | Name of the pipe. |
| view_count | UInt64 | Request count. |
| error_count | UInt64 | Number of requests with error. |
| avg_duration_state | AggregateFunction(avg, Float32) | Average duration state in milliseconds (see Querying _state columns). |
| quantile_timing_state | AggregateFunction(quantilesTiming(0.9, 0.95, 0.99), Float64) | 0.9, 0.95 and 0.99 quantiles state. Time in milliseconds (see Querying _state columns). |
| read_bytes_sum | UInt64 | Total bytes read. |
| read_rows_sum | UInt64 | Total rows read. |
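
For example, a minimal sketch of daily traffic and data processed per pipe (merging the _state columns is covered in Querying _state columns below):

-- daily request volume and bytes read per pipe
SELECT
  date,
  pipe_name,
  sum(view_count) AS requests,
  sum(error_count) AS errors,
  sum(read_bytes_sum) AS bytes_read
FROM tinybird.pipe_stats
GROUP BY date, pipe_name
ORDER BY date DESC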

tinybird.bi_stats_rt

Contains information about all requests to your BI Connector interface in real time.

This Data Source has a TTL of 7 days. If you need to query data older than 7 days, use the data aggregated by day available in tinybird.bi_stats.

| Field | Type | Description |
| --- | --- | --- |
| start_datetime | DateTime | Query start timestamp. |
| query | String | Executed query. |
| query_normalized | String | Normalized executed query. This is the pattern of the query, without literals. Useful to analyze usage patterns. |
| error_code | Int32 | Error code, if any. 0 on normal execution. |
| error | String | Error description, if any. Empty otherwise. |
| duration | UInt64 | Query duration in milliseconds. |
| read_rows | UInt64 | Read rows. |
| read_bytes | UInt64 | Read bytes. |
| result_rows | UInt64 | Total rows returned. |
| result_bytes | UInt64 | Total bytes returned. |
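
For example, a minimal sketch that finds the heaviest query patterns on the BI Connector (ordering by bytes read is one possible choice):

-- heaviest BI query patterns; ordering by bytes read is arbitrary
SELECT
  query_normalized,
  count() AS executions,
  avg(duration) AS avg_duration_ms,
  sum(read_bytes) AS total_read_bytes
FROM tinybird.bi_stats_rt
GROUP BY query_normalized
ORDER BY total_read_bytes DESC
LIMIT 10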

tinybird.bi_stats

Aggregates the stats in tinybird.bi_stats_rt by day.

| Field | Type | Description |
| --- | --- | --- |
| date | Date | Stats date. |
| database | String | Database identifier. |
| query_normalized | String | Normalized executed query. This is the pattern of the query, without literals. Useful to analyze usage patterns. |
| view_count | UInt64 | Request count. |
| error_count | UInt64 | Error count. |
| avg_duration_state | AggregateFunction(avg, Float32) | Average duration state in milliseconds (see Querying _state columns). |
| quantile_timing_state | AggregateFunction(quantilesTiming(0.9, 0.95, 0.99), Float64) | 0.9, 0.95 and 0.99 quantiles state. Time in milliseconds (see Querying _state columns). |
| read_bytes_sum | UInt64 | Total bytes read. |
| read_rows_sum | UInt64 | Total rows read. |
| avg_result_rows_state | AggregateFunction(avg, Float32) | Average rows returned state (see Querying _state columns). |
| avg_result_bytes_state | AggregateFunction(avg, Float32) | Average bytes returned state (see Querying _state columns). |
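
For example, a minimal sketch of daily BI Connector usage; note the avgMerge call to finalize the _state column (see Querying _state columns below):

-- daily BI usage; avgMerge finalizes the aggregate state
SELECT
  date,
  sum(view_count) AS requests,
  sum(error_count) AS errors,
  avgMerge(avg_duration_state) AS avg_duration_ms
FROM tinybird.bi_stats
GROUP BY date
ORDER BY date DESC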

tinybird.block_log

This Data Source contains details about how we ingest data into your Data Sources. Ingestion is done in chunks.

For instance, when we ingest from a URL, we split the download into several requests when possible, resulting in different ingestion blocks. The same happens when data is uploaded with a multipart request: we ingest it in several chunks as it arrives.

You can use this Service Data Source to spot problematic parts of your data.

| Field | Type | Description |
| --- | --- | --- |
| timestamp | DateTime | Date and time of the block ingestion. |
| import_id | String | Id of the import operation. |
| job_id | Nullable(String) | Id of the job that ingested the block of data, if it was ingested by URL. In this case, import_id and job_id have the same value. |
| request_id | String | Id of the request that performed the operation. In this case, import_id and request_id have the same value. Format is ULID string. |
| source | String | Either the URL, or the stream or body keywords. |
| block_id | String | Block identifier. You can cross-reference it with the blocks_ids column of the tinybird.datasources_ops_log Service Data Source. |
| status | String | done \| error. |
| datasource_id | String | Data Source consistent id. |
| datasource_name | String | Data Source name when the block was ingested. |
| start_offset | Nullable(Int64) | If the ingestion was split, the starting byte of the block. |
| end_offset | Nullable(Int64) | If split, the ending byte of the block. |
| rows | Nullable(Int32) | How many rows it ingested. |
| parser | Nullable(String) | Whether we used the native block parser or had to fall back to row-by-row parsing. |
| quarantine_lines | Nullable(UInt32) | If any, how many rows went into the quarantine Data Source. |
| empty_lines | Nullable(UInt32) | If any, how many empty lines were skipped. |
| bytes | Nullable(UInt32) | How many bytes the block had. |
| processing_time | Nullable(Float32) | How long it took in seconds. |
| processing_error | Nullable(String) | Detailed message in case of error. |
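
For example, a minimal sketch that lists blocks that errored or sent rows to quarantine:

-- blocks that failed or produced quarantined rows
SELECT
  timestamp,
  datasource_name,
  block_id,
  status,
  quarantine_lines,
  processing_error
FROM tinybird.block_log
WHERE status = 'error' OR quarantine_lines > 0
ORDER BY timestamp DESC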

tinybird.datasources_ops_log

Contains all operations performed on your Data Sources. We track the following operations:

| Event | Description |
| --- | --- |
| create | A Data Source is created. |
| append | Append operation. |
| append-hfi | Append operation using our High-frequency Ingestion API. |
| append-kafka | Append operation using our Kafka Connector. |
| replace | A replace operation took place in the Data Source. |
| delete | A delete operation took place in the Data Source. |
| truncate | A truncate operation took place in the Data Source. |
| rename | The Data Source was renamed. |
| populateview-queued | A populate operation was queued for execution. |
| populateview | A finished populate operation (up to 8 hours after it started). |
| copy | A copy operation took place in the Data Source. |

Materializations are logged with the same event_type and operation_id as the operation that triggers them. You can track the materialization pipe with pipe_id and pipe_name.

We log all of them with extra information in this Data Source.

| Field | Type | Description |
| --- | --- | --- |
| timestamp | DateTime | Date and time when the operation started. |
| event_type | String | Operation being logged. |
| operation_id | String | Groups rows affected by the same operation. Useful for checking materializations triggered by an append operation. |
| datasource_id | String | Id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes. |
| datasource_name | String | Name of your Data Source when the operation happened. |
| result | String | ok \| error. |
| elapsed_time | Float32 | How much time the operation took, in seconds. |
| error | Nullable(String) | Detailed error message if the result was error. |
| import_id | Nullable(String) | Id of the import operation, if data has been ingested using one of the following operations: create, append or replace. |
| job_id | Nullable(String) | Id of the job that performed the operation, if any. If data has been ingested, import_id and job_id have the same value. |
| request_id | String | Id of the request that performed the operation. If data has been ingested, import_id and request_id have the same value. Format is ULID string. |
| rows | Nullable(UInt64) | How many rows the operation affected. This depends on event_type: for the append event, how many rows got inserted; for delete or truncate events, how many rows the Data Source had; for replace, how many rows the Data Source has after the operation. |
| rows_quarantine | Nullable(UInt64) | How many rows went into the quarantine Data Source, if any. |
| blocks_ids | Array(String) | List of block ids used in the operation. See the tinybird.block_log Service Data Source for more details. |
| options | Nested(Names String, Values String) | We store key-value pairs with extra information for some operations. For the replace event, the rows_before_replace key tracks how many rows the Data Source had before the replacement happened, and the replace_condition key shows what condition was used. For append and replace events we store the data source, e.g. the URL, or the body/stream keywords. For the rename event, old_name and new_name. For populateview, the whole populate job metadata as a JSON string. |
| read_bytes | UInt64 | Read bytes in the operation. |
| read_rows | UInt64 | Read rows in the operation. |
| written_rows | UInt64 | Written rows in the operation. |
| written_bytes | UInt64 | Written bytes in the operation. |
| written_rows_quarantine | UInt64 | Quarantined rows in the operation. |
| written_bytes_quarantine | UInt64 | Quarantined bytes in the operation. |
| pipe_id | String | If present, materialization pipe id as returned in our Pipes API. |
| pipe_name | String | If present, materialization pipe name as returned in our Pipes API. |
| release | String | Semantic version of the release. |
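
For example, a minimal sketch that groups an ingestion and the materializations it triggered by operation_id (the one-day window is an arbitrary choice):

-- follow operations and their triggered materializations; 1-day window is arbitrary
SELECT
  operation_id,
  event_type,
  datasource_name,
  pipe_name,
  result,
  written_rows,
  elapsed_time
FROM tinybird.datasources_ops_log
WHERE timestamp > now() - INTERVAL 1 DAY
ORDER BY operation_id, timestamp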

tinybird.endpoint_errors

Provides the errors of your published endpoints for the last 30 days. We log all of them with extra information in this Data Source.

| Field | Type | Description |
| --- | --- | --- |
| start_datetime | DateTime | Date and time when the API call started. |
| request_id | String | The id of the request that performed the operation. Format is ULID string. |
| pipe_id | String | If present, pipe id as returned in our Pipes API. |
| pipe_name | String | If present, pipe name as returned in our Pipes API. |
| params | Nullable(String) | URL query params included in the request. |
| url | Nullable(String) | URL pathname. |
| status_code | Nullable(Int32) | HTTP error code. |
| error | Nullable(String) | Error message. |
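
For example, a minimal sketch that summarizes recent errors per endpoint and status code:

-- error counts per endpoint and status code
SELECT
  pipe_name,
  status_code,
  count() AS errors,
  max(start_datetime) AS last_seen
FROM tinybird.endpoint_errors
GROUP BY pipe_name, status_code
ORDER BY errors DESC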

tinybird.kafka_ops_log

Contains all operations performed on your Kafka Data Sources during the last 30 days.

| Field | Type | Description |
| --- | --- | --- |
| timestamp | DateTime | Date and time when the operation took place. |
| datasource_id | String | Id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes. |
| topic | String | Kafka topic. |
| partition | Int16 | Partition number, or -1 for all partitions. |
| msg_type | String | 'info' for regular messages, 'warning' for issues related to the user's Kafka cluster, deserialization or materialized views, and 'error' for other issues. |
| lag | Int64 | Number of messages behind for the partition. This is the difference between the high-water mark and the last commit offset. |
| processed_messages | Int32 | Messages processed for a topic and partition. |
| processed_bytes | Int32 | Amount of bytes processed. |
| committed_messages | Int32 | Messages ingested for a topic and partition. |
| msg | String | Information in the case of warnings or errors. Empty otherwise. |
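
For example, a minimal sketch to monitor consumer lag per topic and partition (the one-hour window is an arbitrary choice):

-- max lag per topic/partition; 1-hour window is arbitrary
SELECT
  topic,
  partition,
  max(lag) AS max_lag
FROM tinybird.kafka_ops_log
WHERE timestamp > now() - INTERVAL 1 HOUR
GROUP BY topic, partition
ORDER BY max_lag DESC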

tinybird.datasources_storage

Contains stats about your Data Sources' storage. We log the max values per hour, the same values we use to calculate your storage consumption.

| Field | Type | Description |
| --- | --- | --- |
| datasource_id | String | Id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes. |
| datasource_name | String | Name of your Data Source. |
| timestamp | DateTime | When storage was tracked. By hour. |
| bytes | UInt64 | Max number of bytes the Data Source has, not including quarantine. |
| rows | UInt64 | Max number of rows the Data Source has, not including quarantine. |
| bytes_quarantine | UInt64 | Max number of bytes the Data Source has in quarantine. |
| rows_quarantine | UInt64 | Max number of rows the Data Source has in quarantine. |
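
For example, a minimal sketch that shows the most recently tracked storage per Data Source:

-- latest tracked storage per Data Source
SELECT
  datasource_name,
  argMax(bytes, timestamp) AS latest_bytes,
  argMax(rows, timestamp) AS latest_rows
FROM tinybird.datasources_storage
GROUP BY datasource_name
ORDER BY latest_bytes DESC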

tinybird.releases_log

Contains operations performed on your Releases. We track the following operations:

| Event | Description |
| --- | --- |
| init | First Release is created on Git sync. |
| override | Release commit is overridden (tb init --override-commit {{commit}}). |
| deploy | Resources from a commit are deployed to a Release. |
| preview | Release status is changed to preview. |
| promote | Release status is changed to live. |
| post | Resources from a commit are deployed to the live Release. |
| rollback | A rollback is done; a previous Release is now live. |
| delete | Release is deleted. |

We log all of them with extra information in this Data Source.

| Field | Type | Description |
| --- | --- | --- |
| timestamp | DateTime64 | Date and time when the operation took place. |
| event_type | String | Operation being logged. |
| semver | String | Semantic version that identifies a Release. |
| commit | String | Git commit sha related to the operation. |
| token | String | API call token identifier used. |
| token_name | String | API call token name used. |
| result | String | ok \| error. |
| error | String | Detailed error message. |
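
For example, a minimal sketch that lists failed Release operations:

-- most recent failed Release operations
SELECT
  timestamp,
  event_type,
  semver,
  commit,
  error
FROM tinybird.releases_log
WHERE result = 'error'
ORDER BY timestamp DESC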

tinybird.sinks_ops_log

Contains all operations performed on your Sink Pipes.

| Field | Type | Description |
| --- | --- | --- |
| timestamp | DateTime64 | Date and time when the operation took place. |
| service | LowCardinality(String) | Type of Sink (GCS, S3, ...). |
| pipe_id | String | The id of the Sink Pipe. |
| pipe_name | String | The name of the Sink Pipe. |
| token_name | String | Token name used. |
| result | LowCardinality(String) | ok \| error. |
| error | Nullable(String) | Detailed error message. |
| elapsed_time | Float64 | The duration of the operation, in seconds. |
| job_id | Nullable(String) | Id of the job that performed the operation, if any. |
| read_rows | UInt64 | Read rows in the Sink operation. |
| written_rows | UInt64 | Written rows in the Sink operation. |
| read_bytes | UInt64 | Read bytes in the operation. |
| written_bytes | UInt64 | Written bytes in the operation. |
| output | Array(String) | The outputs of the operation. In the case of writing to a bucket, the names of the written files. |
| parameters | Map(String, String) | The parameters used. Useful to debug the parameter query values. |
| options | Map(String, String) | Extra information. |
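
For example, a minimal sketch that summarizes Sink runs per pipe and service:

-- failures, rows written and average duration per Sink Pipe
SELECT
  pipe_name,
  service,
  countIf(result = 'error') AS failed_runs,
  sum(written_rows) AS total_written_rows,
  avg(elapsed_time) AS avg_duration_s
FROM tinybird.sinks_ops_log
GROUP BY pipe_name, service
ORDER BY failed_runs DESC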

tinybird.data_transfer

Stats of data transferred per hour by a workspace.

| Field | Type | Description |
| --- | --- | --- |
| timestamp | DateTime | Date and time the data transfer is tracked. By hour. |
| event | LowCardinality(String) | Type of operation that generated the data (e.g. sink). |
| origin_provider | LowCardinality(String) | Provider data was transferred from. |
| origin_region | LowCardinality(String) | Region data was transferred from. |
| destination_provider | LowCardinality(String) | Provider data was transferred to. |
| destination_region | LowCardinality(String) | Region data was transferred to. |
| kind | LowCardinality(String) | intra \| inter, depending on whether the data moves within or outside the region. |

Querying _state columns

Several of the Service Data Sources include columns suffixed with _state. This suffix identifies columns with values that are in an intermediate aggregated state. When reading these columns, you need to merge the intermediate states to get the final value.

To do this, wrap the column in the original aggregation function and apply the -Merge combinator.

Learn more about the -Merge combinator here.

For example, to finalize the value of the avg_duration_state column, you use the avgMerge function:

-- finalize the value for the avg_duration_state column
SELECT
  date,
  avgMerge(avg_duration_state) AS avg_time,
  quantilesTimingMerge(0.9, 0.95, 0.99)(quantile_timing_state) AS quantiles_timing_in_ms_array
FROM tinybird.pipe_stats
WHERE pipe_id = 'PIPE_ID'
GROUP BY date