TOO_MANY_SIMULTANEOUS_QUERIES ClickHouse error¶
This error occurs when the server has reached its limit for concurrently running queries. It's common in high-traffic scenarios or when connections and queries are not properly managed.
The TOO_MANY_SIMULTANEOUS_QUERIES error in ClickHouse (and Tinybird) is raised when a new query arrives while the number of queries already running has hit the configured maximum, either for the whole server or for a single user. This typically happens under heavy load, during traffic spikes, or when applications open many connections and fire queries without reusing them.
What causes this error¶
You'll typically see it when:
- Too many concurrent connections from applications
- Queries are not properly closed or timed out
- System is under heavy load with many users
- Connection pooling is not properly configured
- Long-running queries block new query execution
- Applications create new connections without reusing existing ones
- System limits are set too low for your workload
This error often indicates a need for better connection management rather than just increasing limits.
Example errors¶
Fails: too many concurrent connections
SELECT * FROM events WHERE timestamp > '2024-01-01'
-- Error: Too many simultaneous queries
Fails: connection limit reached
-- When trying to execute a new query while many are already running
INSERT INTO events (user_id, event_type, timestamp)
VALUES (123, 'click', '2024-01-01 10:00:00')
-- Error: Too many simultaneous queries
Fails: high-traffic scenario
-- Multiple applications querying at the same time
SELECT COUNT(*) FROM events WHERE user_id = 123
-- Error: Too many simultaneous queries
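The server reports this error with code 202; the full message typically looks something like this:
Example error message
Code: 202. DB::Exception: Too many simultaneous queries. Maximum: 100. (TOO_MANY_SIMULTANEOUS_QUERIES)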
How to fix it¶
Check current query count¶
Monitor the number of active queries:
Check active queries
SELECT
    query_id,
    user,
    elapsed,
    memory_usage,
    query
FROM system.processes
ORDER BY elapsed DESC
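If a single query is stuck and holding capacity, it can be cancelled by its query_id (the id below is a placeholder taken from system.processes). In Tinybird this is managed for you; in self-managed ClickHouse:
Kill a stuck query
KILL QUERY WHERE query_id = 'query-id-from-system-processes'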
Check system limits¶
Query-level settings are visible in system.settings. Server-wide limits such as max_concurrent_queries and max_connections are defined in the server configuration, so they won't appear here:
Check query limits
SELECT
    name,
    value,
    description
FROM system.settings
WHERE name LIKE '%max_concurrent%'
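On recent ClickHouse versions, server-wide limits are also exposed through the system.server_settings table (availability depends on your version, and in Tinybird these limits are managed for you):
Check server-level limits
SELECT
    name,
    value,
    description
FROM system.server_settings
WHERE name IN ('max_concurrent_queries', 'max_connections')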
Monitor long-running queries¶
Look for stuck or long-running queries (the 60-second threshold is only an example):
Check long-running queries
SELECT
    query_id,
    user,
    elapsed,
    query
FROM system.processes
WHERE elapsed > 60
ORDER BY elapsed DESC
Check connection usage¶
See how active queries are distributed across users (each row in system.processes is a currently running query):
Check connections
SELECT
user,
COUNT(*) as active_queries
FROM system.processes
GROUP BY user
ORDER BY active_queries DESC
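To see open connections rather than running queries, the server's current metrics include connection counters:
Check open connections
SELECT
    metric,
    value
FROM system.metrics
WHERE metric LIKE '%Connection%'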
Common patterns and solutions¶
Connection pooling¶
Reuse a single client (or a pool) instead of opening a new connection for every query. The clickhouse-driver Client below represents one connection, so create it once at application startup and share it:
Connection reuse with clickhouse-driver
# Create the client once and reuse it for all queries
# instead of opening a new connection per request.
from clickhouse_driver import Client

client = Client(
    host='your-host',
    port=9000,
    database='your_database',
    settings={
        'max_execution_time': 300,       # seconds
        'max_memory_usage': 10000000000  # ~10 GB
    }
)

# Reuse the same client for every query
rows = client.execute("SELECT count() FROM events")
Query timeout management¶
Set appropriate timeouts for queries:
Set query timeout
SET max_execution_time = 300;    -- maximum query runtime, in seconds (5 minutes)
SET max_query_size = 1000000000; -- maximum size of the query text, in bytes
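Timeouts can also be applied per query with a SETTINGS clause, which avoids changing session defaults:
Per-query timeout
SELECT count()
FROM events
WHERE timestamp > '2024-01-01'
SETTINGS max_execution_time = 30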
Limit concurrent queries per user¶
Set per-user limits:
User-specific limits
-- Applies to the current session; use a settings profile to make it permanent
SET max_concurrent_queries_for_user = 10;
SET max_memory_usage_for_user = 5000000000;  -- ~5 GB
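To confirm the value currently in effect for your session:
Check the per-user limit
SELECT
    name,
    value,
    changed
FROM system.settings
WHERE name = 'max_concurrent_queries_for_user'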
Query prioritization¶
The priority setting lets heavy, non-urgent queries yield to critical ones. In ClickHouse, 0 means no priority, 1 is the highest priority, and larger values mean lower priority, so deprioritize background work rather than trying to boost important queries:
Query priority
-- Run a heavy background query at low priority
SELECT * FROM events WHERE timestamp > '2024-01-01' SETTINGS priority = 10
Tinybird-specific notes¶
In Tinybird, TOO_MANY_SIMULTANEOUS_QUERIES errors often occur when:
- Multiple applications query the same workspace
- API endpoints receive high traffic
- Materialized Views are processing many queries
- Pipes are running multiple transformations simultaneously
To debug in Tinybird:
- Check your workspace query limits
- Monitor API endpoint usage (see the query after this list)
- Review Materialized View processing
- Implement proper rate limiting in applications
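For endpoint usage, Tinybird's service Data Sources are a good starting point. A sketch, assuming the tinybird.pipe_stats_rt Data Source with pipe_name and start_datetime columns:
Requests per endpoint in the last hour
SELECT
    pipe_name,
    count() AS requests
FROM tinybird.pipe_stats_rt
WHERE start_datetime > now() - INTERVAL 1 HOUR
GROUP BY pipe_name
ORDER BY requests DESC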
In Tinybird, consider using Materialized Views to reduce the need for repeated complex queries.
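In Tinybird, Materialized Views are created from Pipes. In self-managed ClickHouse the equivalent is a materialized view; a minimal sketch (the events table and its columns are illustrative):
Materialized view sketch
CREATE MATERIALIZED VIEW events_per_user_mv
ENGINE = SummingMergeTree
ORDER BY user_id
AS SELECT
    user_id,
    count() AS event_count
FROM events
GROUP BY user_id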
Best practices¶
Connection management¶
- Implement proper connection pooling
- Reuse connections when possible
- Set appropriate connection timeouts
- Monitor connection usage patterns
Query optimization¶
- Use Materialized Views for complex queries
- Implement proper indexing strategies
- Use query caching where appropriate
- Break down complex queries into smaller parts
Load management¶
- Implement rate limiting in applications
- Use query queues for high-traffic scenarios
- Monitor system performance metrics
- Set appropriate user limits
Configuration options¶
System-wide limits¶
Only the query-level limits appear in system.settings; server-wide limits such as max_concurrent_queries and max_connections live in the server configuration (see Check system limits above):
Check system limits
SELECT
    name,
    value,
    description
FROM system.settings
WHERE name IN (
    'max_concurrent_queries_for_user',
    'max_concurrent_queries_for_all_users'
)
Adjusting limits¶
max_concurrent_queries and max_connections are changed in the server configuration file, not with SET (in Tinybird they are managed for you). Per-user concurrency can be adjusted as a regular setting:
Adjust per-user query limit
SET max_concurrent_queries_for_user = 20;
User-specific settings¶
User limits
-- Applies to the current session
SET max_concurrent_queries_for_user = 15;
SET max_memory_usage_for_user = 8000000000;  -- ~8 GB
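To make per-user limits persistent instead of per-session, a settings profile can be attached to a user (requires SQL-driven access management; the profile and user names below are hypothetical):
Settings profile sketch
CREATE SETTINGS PROFILE IF NOT EXISTS limited_queries
    SETTINGS max_concurrent_queries_for_user = 15
    TO app_user;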
Alternative solutions¶
Use query queues¶
Instead of failing immediately, queue or retry queries in your application when the server is saturated. A retry with exponential backoff using clickhouse-driver, catching the server error code for TOO_MANY_SIMULTANEOUS_QUERIES:
Query with retry logic
import time

from clickhouse_driver import Client, errors

TOO_MANY_SIMULTANEOUS_QUERIES = 202  # ClickHouse error code

client = Client(host='your-host')

def execute_with_retry(query, max_retries=3, retry_delay=1.0):
    for attempt in range(max_retries):
        try:
            return client.execute(query)
        except errors.ServerException as exc:
            # Give up on other errors, or when retries are exhausted
            if exc.code != TOO_MANY_SIMULTANEOUS_QUERIES or attempt == max_retries - 1:
                raise
            time.sleep(retry_delay)
            retry_delay *= 2  # exponential backoff
Implement rate limiting¶
Add rate limiting to your application:
Rate-limited queries
import time

from clickhouse_driver import Client

client = Client(host='your-host')

RATE_LIMIT = 10  # queries per second
_last_query_time = 0.0

def execute_rate_limited(query):
    """Wait long enough between queries to stay under RATE_LIMIT."""
    global _last_query_time
    min_interval = 1.0 / RATE_LIMIT
    wait = min_interval - (time.time() - _last_query_time)
    if wait > 0:
        time.sleep(wait)
    result = client.execute(query)
    _last_query_time = time.time()
    return result
Use asynchronous processing¶
Process queries asynchronously:
Async query processing
import asyncio
from clickhouse_driver import Client

def run_query(query):
    # Each worker uses its own connection: clients are not thread-safe
    client = Client(host='your-host')
    try:
        return client.execute(query)
    finally:
        client.disconnect()

async def process_queries(queries, max_concurrency=5):
    # Limit concurrency so the server is not flooded
    semaphore = asyncio.Semaphore(max_concurrency)

    async def run(query):
        async with semaphore:
            return await asyncio.to_thread(run_query, query)

    return await asyncio.gather(*(run(q) for q in queries), return_exceptions=True)
Monitoring and prevention¶
Query monitoring¶
Monitor query patterns
-- Check for long-running queries
SELECT
    query_id,
    user,
    elapsed,
    memory_usage,
    query
FROM system.processes
ORDER BY elapsed DESC
LIMIT 10
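To see how often queries are being rejected, the query log keeps the exception text (assuming query_log is enabled, which is the default):
Check rejected queries
SELECT
    event_time,
    user,
    exception
FROM system.query_log
WHERE event_time >= now() - INTERVAL 1 HOUR
  AND exception LIKE '%TOO_MANY_SIMULTANEOUS_QUERIES%'
ORDER BY event_time DESC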
Performance metrics¶
Performance monitoring
-- Monitor query performance over the last hour using the query log
SELECT
    user,
    count() AS total_queries,
    avg(query_duration_ms) AS avg_duration_ms,
    max(query_duration_ms) AS max_duration_ms
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_time >= now() - INTERVAL 1 HOUR
GROUP BY user
Connection tracking¶
Connection monitoring
-- Snapshot of running queries and memory usage per user
SELECT
    user,
    count() AS active_queries,
    sum(memory_usage) AS total_memory_usage
FROM system.processes
GROUP BY user
ORDER BY active_queries DESC
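To see how concurrency evolves over time rather than as a single snapshot, the metric log records the number of executing queries (assuming system.metric_log is enabled):
Concurrent queries over time
SELECT
    event_time,
    CurrentMetric_Query AS running_queries
FROM system.metric_log
WHERE event_time >= now() - INTERVAL 1 HOUR
ORDER BY event_time DESC
LIMIT 100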