Converting dates to Unix timestamps in ClickHouse is handled by the built-in toUnixTimestamp() function, which transforms your Date, DateTime, or String values into integers representing seconds since January 1, 1970, at midnight UTC.
We'll walk through the function's syntax, timezone handling, performance considerations, and practical examples that you can implement immediately in your ClickHouse queries.
What toUnixTimestamp does and when to use it
The toUnixTimestamp() function takes your Date, DateTime, or String values and converts them into Unix timestamps: the count of seconds that have passed since January 1, 1970, at midnight UTC. Think of it as ClickHouse's way of turning human-readable dates into numbers that computers love to work with.
You'll find yourself reaching for this function when you're building APIs that expect epoch seconds, exporting data to systems that store timestamps as integers, or when you want to do math with time values. Unix timestamps are timezone-agnostic, take up less storage space than datetime strings, and make comparisons lightning-fast.
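To make the timezone-agnostic point concrete, here's a quick sanity check in Python rather than ClickHouse: the same instant renders as different wall-clock times in different zones, but all of them share one Unix timestamp.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One instant: 2024-01-15 14:30:00 UTC
instant = datetime(2024, 1, 15, 14, 30, tzinfo=timezone.utc)

# Rendered in two other zones, the wall-clock time differs...
ny = instant.astimezone(ZoneInfo("America/New_York"))   # 09:30 local (EST)
tokyo = instant.astimezone(ZoneInfo("Asia/Tokyo"))      # 23:30 local

# ...but the epoch value is identical for all three
print(int(instant.timestamp()), int(ny.timestamp()), int(tokyo.timestamp()))
```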
Supported input types and return value
The function accepts several input types and spits out a UInt32 value representing seconds since the Unix epoch. However, there are some important details depending on what you're converting.
Date and Date32
Converting Date values is straightforward since ClickHouse treats them as midnight UTC on whatever day you specify:
SELECT toUnixTimestamp(toDate('2024-01-15')) AS epoch_seconds;
-- Returns: 1705276800 (midnight UTC on 2024-01-15)
Date32 works the same way but supports a wider range (1900 to 2299), while regular Date only covers 1970 to 2149.
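The midnight-UTC convention makes the Date conversion pure arithmetic: days since 1970-01-01 times 86,400. A quick Python check of the value above (the helper name here is ours, not a ClickHouse function):

```python
from datetime import date

EPOCH = date(1970, 1, 1)

def date_to_epoch_seconds(d: date) -> int:
    # A Date is just a day count; midnight UTC = days * 86400 seconds
    return (d - EPOCH).days * 86400

print(date_to_epoch_seconds(date(2024, 1, 15)))  # 1705276800
```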
DateTime and DateTime64
DateTime conversions keep the full timestamp precision down to seconds:
SELECT toUnixTimestamp(toDateTime('2024-01-15 14:30:00')) AS epoch_seconds;
-- Returns: 1705329000 (assuming a UTC server timezone)
DateTime64 values get truncated to whole seconds unless you use the precision-aware variants such as toUnixTimestamp64Milli().
String with explicit format
String inputs work when they match ClickHouse's expected datetime formats, but this approach runs slower than working with proper datetime types:
SELECT toUnixTimestamp('2024-01-15 14:30:00') AS epoch_seconds;
-- Returns: 1705329000 (assuming a UTC server timezone)
UInt32 vs UInt64 output
Standard toUnixTimestamp() returns UInt32, which can only represent dates up to February 7, 2106. Because the type is unsigned, ClickHouse sidesteps the classic signed 32-bit "Year 2038 problem", but for dates beyond 2106 you'll want the 64-bit variants (toUnixTimestamp64Milli() and friends) or a cast to UInt64.
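The UInt32 ceiling is easy to verify outside ClickHouse; in Python, the largest representable epoch second lands in early 2106:

```python
from datetime import datetime, timezone

UINT32_MAX = 2**32 - 1  # 4294967295

# The last instant a UInt32 epoch counter can represent
ceiling = datetime.fromtimestamp(UINT32_MAX, tz=timezone.utc)
print(ceiling)  # early February 2106
```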
Basic syntax with optional time zone
The function signature is toUnixTimestamp(date[, timezone]), where the timezone parameter is optional but often crucial for getting consistent results.
Without explicit time zone
When you skip the timezone parameter, ClickHouse uses the timezone attached to the DateTime column if there is one, and otherwise the session's (ultimately the server's) default timezone:
SELECT toUnixTimestamp(now()) AS current_epoch;
This behavior can bite you if your session timezone differs from your data's intended timezone.
With explicit timezone parameter
Specifying a timezone ensures consistent conversion regardless of session settings:
SELECT toUnixTimestamp('2024-01-15 14:30:00', 'America/New_York') AS ny_epoch;
-- Converts 2:30 PM EST/EDT to UTC epoch seconds
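As a cross-check of what that conversion means (using Python's zoneinfo here; the ClickHouse call should agree): 2:30 PM in New York on January 15 is EST (UTC-5), i.e. 19:30 UTC.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 2024-01-15 14:30 local time in New York (EST, UTC-5 in January)
local = datetime(2024, 1, 15, 14, 30, tzinfo=ZoneInfo("America/New_York"))
print(int(local.timestamp()))  # epoch of 2024-01-15 19:30 UTC
```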
Always use explicit timezones in production code to avoid confusion later.
Using default_timezone setting
You can set a session-level timezone that affects all conversions:
SET timezone = 'Europe/London';
SELECT toUnixTimestamp('2024-01-15 14:30:00') AS london_epoch;
Quick examples you can copy-paste
Here are ready-to-use snippets for common conversion scenarios.
Single value conversion
-- Current timestamp
SELECT toUnixTimestamp(now()) AS current_unix;
-- Specific datetime
SELECT toUnixTimestamp('2024-01-15 10:30:00') AS specific_unix;
-- With timezone
SELECT toUnixTimestamp('2024-01-15 10:30:00', 'UTC') AS utc_unix;
Column update in place
-- Add epoch column to existing table
ALTER TABLE events ADD COLUMN epoch_time UInt32;
-- Populate from datetime column (note: ALTER ... UPDATE runs as an asynchronous mutation)
ALTER TABLE events UPDATE epoch_time = toUnixTimestamp(created_at) WHERE epoch_time = 0;
-- Or in a SELECT query
SELECT event_id, toUnixTimestamp(created_at) AS epoch_time FROM events;
Reverse conversion with toDateTime
Verify your conversions by converting back to datetime:
SELECT
original_time,
toUnixTimestamp(original_time) AS epoch_seconds,
toDateTime(toUnixTimestamp(original_time)) AS converted_back
FROM events;
Handling time zones correctly
Timezone handling is where most timestamp conversion bugs come from. Getting this right from the start prevents data headaches later.
Working in UTC
Store all timestamps in UTC and convert to local timezones only when showing data to users:
-- Store UTC timestamps
INSERT INTO events (event_time_utc)
VALUES (toUnixTimestamp(now(), 'UTC'));
-- Convert to user timezone on read
SELECT
event_id,
toDateTime(epoch_time, 'America/Los_Angeles') AS local_time
FROM events;
This approach eliminates daylight saving time complications and makes your data portable across regions.
Dealing with daylight saving gaps
During DST transitions, some local times don't exist (spring forward) or happen twice (fall back). ClickHouse resolves these deterministically through its timezone database rather than raising an error, mapping ambiguous or non-existent local times to a single valid instant (verify the exact choice on your version if it matters):
-- This time doesn't exist during spring DST transition
SELECT toUnixTimestamp('2024-03-10 02:30:00', 'America/New_York');
-- ClickHouse automatically adjusts to a valid time
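You can see the gap itself with Python's zoneinfo (this shows how the IANA timezone data resolves it in Python; ClickHouse's resolution is analogous but worth verifying on your version): 02:30 on 2024-03-10 never occurs in New York, and round-tripping through the epoch lands on a real wall-clock time.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# 02:30 local does not exist on this date (clocks jump from 02:00 to 03:00)
gap = datetime(2024, 3, 10, 2, 30, tzinfo=tz)

# Converting to an epoch and back yields a time that actually exists
resolved = datetime.fromtimestamp(gap.timestamp(), tz)
print(resolved)  # lands at 03:30 EDT, on the far side of the gap
```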
Converting multiple time zones in one query
When processing data from multiple timezones, use CASE statements or timezone lookup tables:
SELECT
event_id,
toUnixTimestamp(
event_time,
CASE user_region
WHEN 'US_EAST' THEN 'America/New_York'
WHEN 'US_WEST' THEN 'America/Los_Angeles'
WHEN 'EU' THEN 'Europe/London'
ELSE 'UTC'
END
) AS epoch_time
FROM user_events;
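The same per-region mapping can also live in application code. A Python sketch with a hypothetical region lookup (the region names mirror the CASE branches above):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical region -> IANA timezone lookup, mirroring the SQL CASE
REGION_TZ = {
    "US_EAST": "America/New_York",
    "US_WEST": "America/Los_Angeles",
    "EU": "Europe/London",
}

def to_epoch(naive_local: datetime, region: str) -> int:
    # Fall back to UTC for unknown regions, like the ELSE branch
    tz = ZoneInfo(REGION_TZ.get(region, "UTC"))
    return int(naive_local.replace(tzinfo=tz).timestamp())

t = datetime(2024, 1, 15, 14, 30)
print(to_epoch(t, "US_EAST"), to_epoch(t, "US_WEST"), to_epoch(t, "EU"))
```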
Converting in bulk and storing epoch seconds efficiently
Production systems often benefit from pre-computing Unix timestamps rather than converting on every query.
Materialized view strategy
Create materialized views that automatically convert timestamps during data ingestion:
CREATE MATERIALIZED VIEW events_with_epoch
ENGINE = MergeTree()
ORDER BY epoch_time
AS SELECT
event_id,
event_data,
toUnixTimestamp(event_time) AS epoch_time
FROM raw_events;
This approach trades storage space for query performance.
Insert time conversion with default expressions
Use default expressions to automatically generate epoch timestamps:
CREATE TABLE events (
event_id UInt64,
event_data String,
event_time DateTime DEFAULT now(),
epoch_time UInt32 DEFAULT toUnixTimestamp(event_time)
) ENGINE = MergeTree()
ORDER BY epoch_time;
On read conversion versus precompute
The choice depends on your query patterns:
- Precompute: When you frequently filter or sort by timestamp
- Convert on read: When storage space is limited and conversions happen infrequently
- Hybrid approach: For tables with both requirements
Common pitfalls and how to avoid them
Understanding these gotchas can save you from production issues down the line.
Overflow after 2106
UInt32 timestamps overflow on February 7, 2106. While this seems distant, it affects systems that generate future dates:
-- Out of range: both UInt32 and DateTime top out in early 2106
SELECT toUnixTimestamp('2107-01-01 00:00:00'); -- saturates or errors depending on version
-- Use 64-bit alternative instead
SELECT toUnixTimestamp64Milli(toDateTime64('2107-01-01 00:00:00', 3)) / 1000;
Wrong data type promotions
ClickHouse sometimes promotes data types in unexpected ways. Always verify your output types:
-- These might not return what you expect
SELECT toUnixTimestamp(toString(now())); -- correct, but string parsing is slow
SELECT toUnixTimestamp(toUInt32(1705276800)); -- a raw integer is not a DateTime; behavior varies by version
String parsing performance traps
Converting strings runs significantly slower than working with native datetime types:
-- Slow: string parsing on every row
SELECT toUnixTimestamp(event_time_string) FROM large_table;
-- Fast: convert once during ingestion
ALTER TABLE large_table ADD COLUMN event_time DateTime;
ALTER TABLE large_table UPDATE event_time = toDateTime(event_time_string);
Performance tips and best practices
These optimization strategies can dramatically improve query performance at scale.
Indexing on epoch seconds
Ordering the table by an epoch column keeps the primary index compact and makes range scans fast (DateTime is itself stored as a 32-bit integer internally, so the gap versus a DateTime sort key is modest):
CREATE TABLE events (
event_id UInt64,
epoch_time UInt32,
event_data String
) ENGINE = MergeTree()
ORDER BY epoch_time;
-- Fast range query
SELECT * FROM events
WHERE epoch_time BETWEEN 1705276800 AND 1705363200;
Compression and disk usage
Integer timestamps compress better than datetime strings and use less disk space:
- UInt32 epoch: 4 bytes per value
- DateTime string: 19+ bytes per value
- Compression ratio: fixed-width integers typically compress several times better than repeated datetime text
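The raw-size half of this comparison is easy to demonstrate (a Python sketch; actual on-disk ratios depend on ClickHouse's codecs and on your data's distribution):

```python
import struct
import zlib
from datetime import datetime, timezone

# 1,000 consecutive per-second timestamps
epochs = [1705276800 + i for i in range(1000)]

# Fixed-width binary: 4 bytes per value
as_ints = struct.pack("<1000I", *epochs)

# Text form "YYYY-MM-DD HH:MM:SS": 19 bytes per value
as_strings = "".join(
    datetime.fromtimestamp(e, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    for e in epochs
).encode()

print(len(as_ints), len(as_strings))  # raw: 4000 vs 19000 bytes
print(len(zlib.compress(as_ints)), len(zlib.compress(as_strings)))
```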
SIMD advantages for batch conversion
ClickHouse's SIMD optimizations work best with batch operations. Convert entire columns rather than individual values when possible:
-- Efficient: batch conversion
SELECT event_id, toUnixTimestamp(event_time) FROM events;
-- Less efficient: row-by-row in complex expressions
SELECT event_id,
CASE WHEN condition THEN toUnixTimestamp(event_time) ELSE 0 END
FROM events;
Building a real-time analytics API with Tinybird
Let's walk through creating a practical API that converts timestamps on demand using Tinybird's managed ClickHouse platform.
Creating the data source
First, create a data source with datetime columns that we'll convert:
SCHEMA >
event_id UInt64,
event_name String,
created_at DateTime,
user_timezone String
ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(created_at)"
ENGINE_SORTING_KEY "created_at, event_id"
Writing a pipe that converts on read
Create a Tinybird pipe that converts timestamps based on user requirements:
NODE convert_timestamps
SQL >
%
SELECT
event_id,
event_name,
created_at,
toUnixTimestamp(created_at, user_timezone) AS epoch_seconds,
toUnixTimestamp(created_at, 'UTC') AS utc_epoch
FROM events_datasource
WHERE created_at >= {{DateTime(start_date, '2024-01-01 00:00:00')}}
AND created_at < {{DateTime(end_date, '2024-12-31 23:59:59')}}
Deploying as a parameterized endpoint
Publish this pipe as a REST API endpoint that accepts date range parameters:
curl "https://api.tinybird.co/v0/pipes/timestamp_converter.json?start_date=2024-01-01&end_date=2024-01-31&token=READ_TOKEN"
The API returns JSON with both original datetimes and converted Unix timestamps, letting client applications work with whichever format they prefer.
Next steps with Tinybird
Tinybird provides a managed analytics backend built on ClickHouse, with integrated API and pipeline tooling. This allows developers to focus on application logic instead of managing database infrastructure, scaling, or deployment pipelines.
Whether you're building user-facing analytics, internal dashboards, or AI-powered applications that need fast access to time-series data, Tinybird manages infrastructure and scaling, allowing developers to focus on building application features.
Build your first API for free with 1,000 requests per day included.
Additional resources
- ClickHouse toUnixTimestamp() docs
- ClickHouse toUnixTimestamp64Milli() docs
- Best practices for timestamps and time zones in databases
- Tinybird vs. ClickHouse®: What's the difference?
Frequently asked questions about toUnixTimestamp
Is toUnixTimestamp deterministic across ClickHouse versions?
Yes, for a given input, timezone, and timezone database the result is stable across versions: the underlying math is standard Unix epoch arithmetic. The one caveat is the bundled IANA timezone database; if a timezone's rules change between tzdata releases, conversions for affected local times can shift.
Which integer type should I choose for future dates beyond 2038?
Use the 64-bit variants (toUnixTimestamp64Milli(), toUnixTimestamp64Micro(), or toUnixTimestamp64Nano() on DateTime64 values) or cast results to UInt64. Standard toUnixTimestamp() returns UInt32, which cannot represent dates beyond February 7, 2106. The 64-bit variants handle dates well beyond the current millennium.
Can toUnixTimestamp convert millisecond precision timestamps?
Not directly: the standard function truncates to whole seconds, losing sub-second information that might be critical for high-frequency event tracking or precise timing measurements. Use toUnixTimestamp64Milli() or toUnixTimestamp64Micro() on a DateTime64 column to preserve millisecond or microsecond precision.