TOO_MANY_ROWS_OR_BYTES ClickHouse error

This error usually means your query is trying to return more data than the system allows. You need to add LIMIT clauses or optimize your query to reduce the result set size.

The TOO_MANY_ROWS_OR_BYTES error in ClickHouse (and Tinybird) happens when a query tries to return more rows or bytes than the configured limits allow. This is a safety mechanism to prevent queries from consuming excessive resources.
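The exact wording varies by ClickHouse version and configuration, but the message typically looks something like this (the row counts here are illustrative):

```text
Code: 396. DB::Exception: Limit for result exceeded, max rows: 1.00 million,
current rows: 1.50 million. (TOO_MANY_ROWS_OR_BYTES)
```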

What causes this error

You'll typically see it when:

  • Query returns too many rows (exceeds max_result_rows limit)
  • Query returns too much data (exceeds max_result_bytes limit)
  • Missing LIMIT clause on large datasets
  • Aggregations without proper grouping produce too many results
  • Window functions generate excessive intermediate results

Example errors

Fails: no LIMIT on large table
SELECT * FROM events
Fails: too many rows in aggregation
SELECT user_id, event_type FROM events GROUP BY user_id, event_type
Fails: large result set from window function
SELECT *, ROW_NUMBER() OVER (ORDER BY timestamp) FROM events
Fails: cross join producing too many rows
SELECT * FROM users CROSS JOIN events

How to fix it

Add LIMIT clause

The most common fix is to limit the number of rows returned:

Fix: add LIMIT
SELECT * FROM events LIMIT 1000

Use more specific WHERE clauses

Filter data before processing to reduce result size:

Fix: add WHERE clause
SELECT user_id, event_type 
FROM events 
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY user_id, event_type

Optimize aggregations

Use more specific grouping or add LIMIT to aggregations:

Fix: limit aggregation results
SELECT user_id, event_type, COUNT(*) as count
FROM events 
GROUP BY user_id, event_type
LIMIT 10000

Use subqueries for large datasets

Break down complex queries into smaller parts:

Fix: use subquery with LIMIT
SELECT * FROM (
    SELECT user_id, COUNT(*) as count
    FROM events
    GROUP BY user_id
    LIMIT 1000
) t
WHERE count > 10

Common patterns and solutions

Pagination

Use LIMIT with OFFSET for pagination. Keep in mind that ClickHouse still reads and discards the skipped rows, so performance degrades as the offset grows:

Pagination example
SELECT * FROM events 
ORDER BY timestamp DESC 
LIMIT 100 OFFSET 200
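For deep pagination, a keyset (cursor-based) approach avoids the cost of large offsets by filtering on the last value seen on the previous page. A sketch, assuming the same events table and a timestamp ordering column:

```sql
-- Next page: everything older than the last row of the previous page
SELECT * FROM events
WHERE timestamp < '2024-01-15 12:00:00'  -- last timestamp from the previous page
ORDER BY timestamp DESC
LIMIT 100
```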

Top N queries

Use LIMIT for top N results:

Top 10 users by event count
SELECT user_id, COUNT(*) as event_count
FROM events
GROUP BY user_id
ORDER BY event_count DESC
LIMIT 10

Sampling large datasets

Use SAMPLE to read only a fraction of a large table. Note that this requires the table to have been created with a SAMPLE BY clause:

Sample 1% of data
SELECT * FROM events SAMPLE 0.01

Time-based filtering

Filter by time ranges to reduce data:

Last 24 hours only
SELECT * FROM events 
WHERE timestamp >= now() - INTERVAL 1 DAY
LIMIT 1000

Advanced solutions

Use approximate functions

For large datasets, consider approximate aggregate functions such as uniq or uniqHLL12, which trade a small error margin for far lower memory use than exact distinct counts:

Approximate count
SELECT uniqHLL12(user_id) as approximate_user_count
FROM events

Optimize JOIN operations

Use proper JOIN conditions to avoid cartesian products:

Proper JOIN instead of CROSS JOIN
SELECT u.user_id, e.event_type
FROM users u
JOIN events e ON u.user_id = e.user_id
LIMIT 1000

Use Materialized Views

For frequently accessed aggregations, use Materialized Views:

Query from Materialized View
SELECT * FROM events_daily_stats LIMIT 1000
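As a sketch, a Materialized View like the hypothetical events_daily_stats above could be defined as follows (the table, engine, and column names are assumptions, not part of any standard schema):

```sql
CREATE MATERIALIZED VIEW events_daily_stats
ENGINE = SummingMergeTree
ORDER BY (day, event_type)
AS
SELECT
    toDate(timestamp) AS day,
    event_type,
    COUNT(*) AS event_count
FROM events
GROUP BY day, event_type
```

Queries then read the small pre-aggregated table instead of scanning raw events.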

Tinybird-specific notes

In Tinybird, this error often occurs when:

  • Querying large Data Sources without proper filtering
  • Building Pipes that generate too many intermediate results
  • Using Materialized Views that exceed size limits
  • Running queries on high-cardinality columns
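In a Pipe, the same principles apply inside each node: filter early and cap the result. A minimal node sketch using Tinybird's template syntax (the parameter and column names are illustrative):

```sql
%
SELECT user_id, COUNT(*) AS event_count
FROM events
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY user_id
LIMIT {{Int32(max_results, 1000)}}
```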

To debug in Tinybird:

  1. Check your Data Source size and add appropriate filters
  2. Use the Query Builder to test with LIMIT clauses first
  3. Consider breaking large Pipes into smaller nodes
  4. Use Materialized Views for pre-aggregated data

Configuration limits

You can check and adjust limits in ClickHouse:

Check current limits
SELECT name, value 
FROM system.settings 
WHERE name IN ('max_result_rows', 'max_result_bytes', 'result_overflow_mode')

Common settings to adjust:

  • max_result_rows - maximum number of rows in the result
  • max_result_bytes - maximum uncompressed size of the result in bytes
  • result_overflow_mode - what happens when a limit is hit: throw (raise this error, the default) or break (return a truncated result)
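Limits can also be adjusted per query with a SETTINGS clause, either raising the threshold or switching to truncation instead of an error:

```sql
SELECT * FROM events
SETTINGS max_result_rows = 1000000, result_overflow_mode = 'break'
```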

See also