Date truncation in ClickHouse means rounding datetime values down to the start of a specified time period (turning 2025-09-30 14:30:45 into 2025-09-30 00:00:00 when truncating to the day). The date_trunc() function handles this operation cleanly, making it the go-to choice for grouping events into time buckets for analytics and reporting.
Most developers reach for date_trunc() when building dashboards or APIs that need to aggregate data by hour, day, week, or month. This guide covers the function's syntax, supported time units, timezone handling, performance considerations, and real-world examples for building time-series analytics with ClickHouse.
What date_trunc does in ClickHouse
The date_trunc() function in ClickHouse takes a datetime value and rounds it down to the start of whatever time period you specify. Think of it like cutting off the precision below a certain level (if you truncate 2025-09-30 14:30:45 to the hour, you get 2025-09-30 14:00:00).
This function accepts three parameters: the time unit as a string, the datetime you want to truncate, and an optional timezone. The result is always the beginning of that time period, which makes it perfect for grouping data into time buckets.
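A quick illustration (wrapping the literal in toDateTime() so the function receives an actual datetime value):
SELECT
    date_trunc('hour', toDateTime('2025-09-30 14:30:45')) AS hour_start,  -- 2025-09-30 14:00:00
    date_trunc('day', toDateTime('2025-09-30 14:30:45')) AS day_start;    -- 2025-09-30 00:00:00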
When to use date_trunc vs toStartOfInterval
ClickHouse gives you two main ways to round datetime values, and picking the right one depends on what you're trying to accomplish. The date_trunc() function works great for standard calendar units, while toStartOfInterval() handles custom intervals better.
| Function | Best for | Example |
| --- | --- | --- |
| date_trunc() | Standard units (day, week, month, year) | date_trunc('month', datetime) |
| toStartOfInterval() | Custom intervals (every 15 minutes, 3 days) | toStartOfInterval(datetime, INTERVAL 15 minute) |
Most of the time, you'll reach for date_trunc() when building dashboards or reports that follow normal calendar boundaries. The other function becomes useful when you need something like 6-hour windows or 10-minute buckets.
Function syntax and parameters
The basic syntax looks like this:
date_trunc('unit', datetime_expression [, 'timezone'])
The first parameter takes a string literal representing your time unit (you can't use variables or column values here). The datetime expression can be a Date, DateTime, or DateTime64 column, a literal value, or the result of another date function. The optional timezone parameter accepts strings like 'UTC' or 'America/New_York' and changes how the truncation boundaries get calculated.
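In practice, the two- and three-argument forms look like this (the events table and event_time column are placeholders):
-- Unit and value only (boundaries follow the server's default timezone)
SELECT date_trunc('day', event_time) FROM events;

-- With an explicit timezone for the truncation boundaries
SELECT date_trunc('day', event_time, 'America/New_York') FROM events;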
Supported units and aliases
ClickHouse accepts several time units for date_trunc(), organized by how precise they are. Each unit truncates to the start of that time period, zeroing out everything smaller.
Second, minute, hour
Time-based units give you sub-daily precision:
- 'second' - Rounds down to the current second (zeros out milliseconds)
- 'minute' - Rounds down to the current minute (zeros out seconds)
- 'hour' - Rounds down to the current hour (zeros out minutes and seconds)
Day, week, month
Date-based units handle daily periods and longer:
- 'day' - Rounds down to midnight of the same day
- 'week' - Rounds down to Monday at midnight (ClickHouse starts weeks on Monday)
- 'month' - Rounds down to the first day of the month at midnight
Quarter, year
Longer periods work well for business reporting:
- 'quarter' - Rounds down to the first day of the quarter (Jan 1, Apr 1, Jul 1, Oct 1)
- 'year' - Rounds down to January 1st at midnight
Quick examples for common units
Here are working SQL examples for the truncation operations you'll use most often in analytics queries.
Truncate to day
SELECT date_trunc('day', toDateTime('2025-09-30 14:30:45'));
-- Result: 2025-09-30 00:00:00
This operation zeroes out the time components while keeping the date intact. Good for daily aggregations where you want to group all events from the same day together.
Truncate to month
SELECT date_trunc('month', toDateTime('2025-09-30 14:30:45'));
-- Result: 2025-09-01 00:00:00
Month truncation resets both the day and time components, giving you the first moment of the month. This works well for monthly reports where you want clean month boundaries.
Truncate to week start
SELECT date_trunc('week', toDateTime('2025-09-30 14:30:45'));
-- Result: 2025-09-29 00:00:00 (Monday of that week)
Week truncation in ClickHouse always returns Monday at midnight, regardless of which day you start with. This differs from databases that use Sunday as the week start, so watch out for that if you're migrating from another system.
Handling DateTime64 and time zones
ClickHouse's date_trunc() function handles high-precision timestamps and timezone-aware operations, though there are some things to watch out for. DateTime64 columns keep their precision level after truncation, but values below the specified unit become zero.
Working with time zone arguments
The optional timezone parameter changes how truncation boundaries get calculated:
-- Same UTC timestamp, different timezone truncation
SELECT
    date_trunc('day', toDateTime('2025-06-15 02:30:00', 'UTC'), 'UTC') as utc_day,
    date_trunc('day', toDateTime('2025-06-15 02:30:00', 'UTC'), 'America/New_York') as ny_day;
When you specify a timezone, ClickHouse calculates the truncation boundaries in that timezone and returns a value carrying that timezone, so the same UTC instant can land in different day buckets. Results can also shift around daylight saving time transitions.
Precision issues with sub-second values
DateTime64 columns with microsecond or nanosecond precision behave a bit differently:
-- DateTime64(3) maintains millisecond precision
SELECT date_trunc('second', toDateTime64('2025-06-15 14:30:45.123', 3));
-- Result: 2025-06-15 14:30:45.000
The fractional seconds become zero, but the column keeps its original precision level. This means you still get three decimal places, just filled with zeros.
Creating custom interval buckets
While date_trunc() handles standard time units well, you can combine it with arithmetic operations to create custom time windows for specialized analytics.
Five-minute buckets
SELECT
toStartOfInterval(
date_trunc('minute', event_time),
INTERVAL 5 minute
) as five_minute_bucket,
COUNT(*) as event_count
FROM events
GROUP BY five_minute_bucket;
This approach first truncates to the minute, then uses toStartOfInterval() to create 5-minute groupings. You could also use pure arithmetic with modulo operations, but this method reads more clearly.
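For reference, a sketch of the arithmetic alternative, which rounds the Unix timestamp down to a multiple of 300 seconds (same hypothetical events table as above):
-- Divide the Unix timestamp into 300-second buckets, then convert back
SELECT
    toDateTime(intDiv(toUnixTimestamp(event_time), 300) * 300) AS five_minute_bucket,
    COUNT(*) AS event_count
FROM events
GROUP BY five_minute_bucket;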
Arbitrary N-hour windows
-- Create 6-hour windows starting from midnight
SELECT
toStartOfInterval(
date_trunc('hour', event_time),
INTERVAL 6 hour
) as six_hour_window,
AVG(response_time) as avg_response
FROM api_logs
GROUP BY six_hour_window;
Custom hour windows work well for operational monitoring where you want alignment with business hours or shift patterns rather than calendar days.
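One way to align windows with a shift schedule is to subtract the shift offset before bucketing and add it back afterward. A sketch assuming 8-hour shifts that start at 07:00 (the api_logs table and response_time column are the same hypothetical ones as above):
-- 8-hour shift windows anchored at 07:00, 15:00, and 23:00
SELECT
    toStartOfInterval(event_time - INTERVAL 7 HOUR, INTERVAL 8 HOUR) + INTERVAL 7 HOUR AS shift_window,
    AVG(response_time) AS avg_response
FROM api_logs
GROUP BY shift_window;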
Grouping data with date_trunc in aggregations
The date_trunc() function works well in GROUP BY clauses for time-series analytics. It lets you aggregate events into meaningful time buckets for dashboards and reports.
Daily active users
SELECT
date_trunc('day', login_time) as login_date,
COUNT(DISTINCT user_id) as daily_active_users
FROM user_sessions
WHERE login_time >= today() - INTERVAL 30 DAY
GROUP BY login_date
ORDER BY login_date;
This query creates a 30-day DAU report by grouping unique users per day. The date_trunc('day') call ensures all login times within the same day get grouped together, regardless of what hour they logged in.
Monthly revenue roll-ups
SELECT
date_trunc('month', order_date) as month,
SUM(order_total) as monthly_revenue,
COUNT(*) as order_count
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY month
ORDER BY month;
Monthly aggregations using date_trunc('month') automatically handle varying month lengths and provide clean boundaries for financial reporting. February gets the same treatment as July, with no special handling required.
Performance tips and common pitfalls
Using date_trunc() efficiently requires understanding how ClickHouse processes the function and where performance bottlenecks typically show up.
Preventing function calls in WHERE
Applying date_trunc() directly in WHERE clauses can prevent ClickHouse from using date-based indexes effectively:
-- Slower: function call prevents index usage
SELECT * FROM events
WHERE date_trunc('day', event_time) = '2025-06-15';
-- Faster: range comparison uses indexes
SELECT * FROM events
WHERE event_time >= '2025-06-15'
AND event_time < '2025-06-16';
Range comparisons allow ClickHouse to leverage partition pruning and skip unnecessary data blocks during query execution. The performance difference becomes dramatic with large tables.
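You can still return truncated buckets in the output while keeping the index-friendly range filter in WHERE, for example:
-- Truncate in SELECT, filter with a plain range in WHERE
SELECT
    date_trunc('day', event_time) AS day,
    COUNT(*) AS daily_events
FROM events
WHERE event_time >= '2025-06-01'
  AND event_time < '2025-07-01'
GROUP BY day
ORDER BY day;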
Using materialized columns
Pre-computing truncated values in materialized columns dramatically improves query performance for frequently accessed time buckets:
-- Add materialized column for daily truncation
ALTER TABLE events
ADD COLUMN event_day Date MATERIALIZED toDate(date_trunc('day', event_time));
-- Queries using event_day are much faster
SELECT event_day, COUNT(*)
FROM events
WHERE event_day = '2025-06-15'
GROUP BY event_day;
Materialized columns store the computed values alongside your data, eliminating runtime function calls entirely. The trade-off is slightly more storage space for significantly faster queries.
Troubleshooting wrong results
Several common issues can cause date_trunc() to produce unexpected outputs, usually related to timezone handling or week boundaries.
Off-by-one week errors
ClickHouse uses Monday as the first day of the week, which differs from databases that start weeks on Sunday:
-- Tuesday June 17, 2025
SELECT date_trunc('week', toDate('2025-06-17'));
-- Returns: 2025-06-16 (Monday), not 2025-06-15 (Sunday)
If your business logic expects Sunday-based weeks, you'll need to adjust the results with date arithmetic or use toStartOfWeek() with an explicit week mode. This catches a lot of people when migrating from other databases.
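For example, toStartOfWeek() with mode 0 treats Sunday as the first day of the week; a sketch worth verifying against the ClickHouse version you run:
-- Sunday-based week start for the same Tuesday
SELECT toStartOfWeek(toDate('2025-06-17'), 0);
-- Expected: 2025-06-15 (Sunday)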
Unexpected null outputs
The date_trunc() function propagates NULL inputs, while other invalid arguments raise errors rather than silently returning NULL:
- NULL datetime inputs: date_trunc('day', NULL) returns NULL
- Invalid unit strings: date_trunc('invalid_unit', now()) throws an exception instead of returning NULL
- Malformed datetime strings: unparseable date strings fail when you convert them, so validate them or use a conversion like toDateTimeOrNull() before truncating
Always validate your unit parameters and handle NULL datetime inputs appropriately. Consider using COALESCE() or conditional logic when working with potentially NULL columns.
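A small sketch of that pattern, substituting a sentinel date for NULL login times (the user_sessions table is the same hypothetical one as above, and the sentinel value is an assumption):
-- Bucket NULL login times under a sentinel day instead of dropping them
SELECT
    date_trunc('day', coalesce(login_time, toDateTime('1970-01-01 00:00:00'))) AS login_date,
    COUNT(*) AS sessions
FROM user_sessions
GROUP BY login_date;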
Building ClickHouse-based time-series analytics APIs with Tinybird
Let's create a complete example that demonstrates how to build a real-time analytics API built around date truncation, using Tinybird's managed ClickHouse platform.
This walkthrough shows how to ingest time-series data, process timestamps with ClickHouse functions, and expose date-truncated analytics through a real-time API endpoint.
1. Create the data source
First, create a data source file to store time-series events (for example, time_series_events.datasource, so the name matches the Events API call below):
SCHEMA >
`timestamp` DateTime64(3) `json:$.timestamp`,
`user_id` String `json:$.user_id`,
`event_type` String `json:$.event_type`,
`value` Float64 `json:$.value`,
`session_id` String `json:$.session_id`
ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
Now build the project locally:
tb dev
This builds the project on Tinybird Local and creates your data source. Then ingest some sample data using the Events API:
curl -X POST "http://localhost:7181/v0/events?name=time_series_events" \
-H "Content-Type: application/json" \
-d '{"timestamp": "2025-10-03 14:30:15.123", "user_id": "user123", "event_type": "page_view", "value": 1.0, "session_id": "sess_abc"}
{"timestamp": "2025-10-03 14:30:16.456", "user_id": "user456", "event_type": "click", "value": 2.5, "session_id": "sess_def"}
{"timestamp": "2025-10-03 14:30:17.789", "user_id": "user789", "event_type": "conversion", "value": 10.0, "session_id": "sess_ghi"}'
2. Create the pipe files
Create a pipe (for example, hourly_analytics.pipe, so the endpoint name matches the URLs below) that processes time-series data and provides analytics:
NODE events_with_buckets
SQL >
    SELECT
        timestamp,
        user_id,
        event_type,
        value,
        session_id,
        date_trunc('hour', timestamp) AS hour_bucket,
        date_trunc('day', timestamp) AS day_bucket
    FROM time_series_events

NODE hourly_analytics
SQL >
    %
    SELECT
        hour_bucket,
        event_type,
        COUNT(*) AS event_count,
        COUNT(DISTINCT user_id) AS unique_users,
        SUM(value) AS total_value,
        AVG(value) AS avg_value
    FROM events_with_buckets
    WHERE 1 = 1
        {% if defined(event_type) %}
        AND event_type = {{ String(event_type) }}
        {% end %}
        {% if defined(hours_back) %}
        AND timestamp >= now64(3) - INTERVAL {{ Int32(hours_back) }} HOUR
        {% else %}
        AND timestamp >= now64(3) - INTERVAL 1 HOUR
        {% end %}
    GROUP BY hour_bucket, event_type
    ORDER BY hour_bucket DESC
    LIMIT {{ Int32(limit, 100) }}

TYPE endpoint
3. Deploy and test the API
Deploy your project to Tinybird Cloud:
tb --cloud deploy
Then test your API endpoint:
# Get all hourly analytics from the last hour
curl -H "Authorization: Bearer <your_token>" \
"https://api.tinybird.co/v0/pipes/hourly_analytics.json"
# Filter by event type
curl -H "Authorization: Bearer <your_token>" \
"https://api.tinybird.co/v0/pipes/hourly_analytics.json?event_type=page_view"
# Filter by time range
curl -H "Authorization: Bearer <your_token>" \
"https://api.tinybird.co/v0/pipes/hourly_analytics.json?hours_back=2"
4. Sample API response
The API returns structured data with time-series analytics:
{
"meta": [
{
"name": "hour_bucket",
"type": "DateTime64(3)"
},
{
"name": "event_type",
"type": "String"
},
{
"name": "event_count",
"type": "UInt64"
},
{
"name": "unique_users",
"type": "UInt64"
},
{
"name": "total_value",
"type": "Float64"
},
{
"name": "avg_value",
"type": "Float64"
}
],
"data": [
{
"hour_bucket": "2025-10-03 14:00:00.000",
"event_type": "page_view",
"event_count": 1247,
"unique_users": 892,
"total_value": 1247.0,
"avg_value": 1.0
},
{
"hour_bucket": "2025-10-03 14:00:00.000",
"event_type": "click",
"event_count": 456,
"unique_users": 234,
"total_value": 1140.0,
"avg_value": 2.5
}
],
"rows": 2,
"statistics": {
"elapsed": 0.045,
"rows_read": 2000,
"bytes_read": 89000
}
}
Frequently asked questions about date_trunc in ClickHouse
Why does date_trunc return the same value as the input?
This happens when your input is already at the start of the specified unit, like truncating '2024-01-01 00:00:00' to day. The function correctly identifies that no truncation is needed since the datetime already represents the beginning of the day.
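For instance:
SELECT date_trunc('day', toDateTime('2024-01-01 00:00:00'));
-- Result: 2024-01-01 00:00:00 (already the start of the day, so unchanged)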
Can I use date_trunc on a Date column?
Yes, date_trunc() works with Date columns, but only units of 'day' or larger make sense since Date has no time component. Using 'hour' or 'minute' units on Date columns will simply return the date at midnight.
Is date_trunc deterministic across time zones?
Yes, date_trunc() produces consistent results when you specify the same timezone parameter, but results vary without explicit timezone handling. The function uses the server's timezone by default, which can lead to different truncation boundaries for the same UTC timestamp.
How do I truncate to fiscal quarters?
ClickHouse quarters start in January, so use date arithmetic to shift months before applying date_trunc('quarter') for fiscal years. For example, if your fiscal year starts in April, subtract 3 months before truncating, then add them back.
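A sketch of that shift-and-truncate pattern for a fiscal year starting April 1 (the orders table is the same hypothetical one as earlier):
-- Map each order date to the start of its fiscal quarter (FY starts in April)
SELECT
    addMonths(date_trunc('quarter', subtractMonths(order_date, 3)), 3) AS fiscal_quarter_start,
    SUM(order_total) AS quarterly_revenue
FROM orders
GROUP BY fiscal_quarter_start
ORDER BY fiscal_quarter_start;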
Using Tinybird as a managed ClickHouse service
Managing ClickHouse infrastructure for date truncation involves real complexity, from optimizing storage and query performance to handling timezone edge cases and maintaining consistent data quality across distributed systems.
Tinybird's managed ClickHouse service removes that infrastructure overhead by abstracting optimization, scaling, and maintenance. Tinybird exposes ClickHouse features such as date_trunc() and other time-series functions without requiring you to manage database clusters, performance monitoring, or version upgrades.
This lets teams focus on application development rather than database operations or configuring ClickHouse for production workloads, with managed scaling and workflow integrations supporting developer efficiency.
Sign up for a free Tinybird account to build and test your first ClickHouse-based API in just a few minutes.