ClickHouse's now64() function returns the current timestamp with sub-second precision, capturing timing details down to nanoseconds rather than being limited to the whole seconds of the standard now() function. This precision is useful when you're tracking high-frequency events, measuring API performance, or building real-time analytics that need to distinguish between events happening within milliseconds of each other.
We'll walk through the syntax options, precision levels, and practical implementation strategies for using now64() in production ClickHouse deployments, plus show you how to convert external timestamp data and optimize performance for high-throughput scenarios.
What now64 does and why you need it
The now64() function returns the current timestamp as a DateTime64 value, which supports fractional seconds with configurable precision. By default it provides millisecond precision (3 digits after the decimal point), but you can specify any precision from 0 (whole seconds) to 9 (nanoseconds).
SELECT now64();
-- Returns: 2025-01-15 14:23:45.123
SELECT now64(6);
-- Returns: 2025-01-15 14:23:45.123456
This level of precision is useful when you're tracking high-frequency events, measuring performance with microsecond accuracy, or processing financial transactions where timing matters. The standard now() function only provides second-level precision, which isn't sufficient for real-time applications that need to distinguish between events happening within the same second.
now64 syntax and precision options
The basic syntax follows two patterns: SELECT now64() for default millisecond precision, and SELECT now64(precision) for custom precision levels.
-- Default precision (3 digits)
SELECT now64();
-- Custom precision (6 digits for microseconds)
SELECT now64(6);
-- With timezone parameter
SELECT now64(3, 'America/New_York');
The precision parameter accepts integers from 0 to 9, representing the number of decimal places after the seconds. Higher precision captures more granular timing information but comes with extra processing cost and poorer compression on disk.
You can also specify a timezone as the second parameter, which applies the timezone offset to the returned timestamp. This is useful when your application serves users across multiple time zones or when you want to standardize timestamps to a specific region.
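For example, you can render the same instant in two zones side by side (an illustrative query; the exact output depends on your server clock):
-- The same instant displayed in two time zones
SELECT
    now64(3, 'UTC') AS utc_time,
    now64(3, 'Asia/Tokyo') AS tokyo_time;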
Choosing the right DateTime64 precision
Selecting the appropriate precision level depends on your specific use case and the trade-offs between accuracy and performance; a side-by-side comparison sketch follows the three options below.
1. Nanoseconds
Nanosecond precision (now64(9)) captures the highest level of detail available, making it suitable for high-frequency trading systems, scientific measurements, or distributed systems that need to order events with high precision. However, this level of precision comes with poorer compression and potential performance overhead.
2. Microseconds
Microsecond precision (now64(6)) strikes a balance between detailed timing information and system efficiency. Most real-time analytics applications find this level sufficient for tracking user interactions, API response times, and event ordering across distributed systems.
3. Milliseconds
Millisecond precision (now64(3)) works well for web applications, user event tracking, and general-purpose logging. This precision level provides enough granularity to distinguish between rapid user actions while maintaining good query performance and storage efficiency.
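To make the trade-off concrete, here's an illustrative comparison of the common precision levels in one query (the outputs shown in the comments are examples, not fixed values):
-- One query, four precision levels
SELECT
    now64(0) AS seconds,       -- 2025-01-15 14:23:45
    now64(3) AS milliseconds,  -- 2025-01-15 14:23:45.123
    now64(6) AS microseconds,  -- 2025-01-15 14:23:45.123456
    now64(9) AS nanoseconds;   -- 2025-01-15 14:23:45.123456789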
Storing and querying now64 values
Working with now64() in production involves creating appropriate table schemas and understanding how to query high-precision timestamps effectively.
1. Inserting into DateTime64 columns
When creating tables, define DateTime64 columns with the same precision you plan to use with now64():
CREATE TABLE events (
    id UInt64,
    event_time DateTime64(6),
    user_id String,
    action String
) ENGINE = MergeTree()
ORDER BY (user_id, event_time);

-- Using now64() in INSERT statements
INSERT INTO events VALUES
    (1, now64(6), 'user123', 'click');

-- Using now64() as a DEFAULT value
CREATE TABLE metrics (
    metric_name String,
    value Float64,
    recorded_at DateTime64(3) DEFAULT now64(3)
) ENGINE = MergeTree()
ORDER BY recorded_at;
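As a quick sanity check, rows inserted without recorded_at pick up the DEFAULT expression automatically:
-- recorded_at is filled in by DEFAULT now64(3) at insert time
INSERT INTO metrics (metric_name, value) VALUES ('cpu_load', 0.42);

SELECT * FROM metrics ORDER BY recorded_at DESC LIMIT 1;
-- Returns: cpu_load | 0.42 | 2025-01-15 14:23:45.123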
2. Selecting with sub-second filters
High-precision timestamps enable you to filter data within very narrow time windows:
-- Filter events within a specific microsecond range
SELECT * FROM events
WHERE event_time BETWEEN '2025-01-15 14:23:45.123456'
AND '2025-01-15 14:23:45.123789';
-- Find events within the last 500 milliseconds
SELECT * FROM events
WHERE event_time >= now64(3) - INTERVAL 500 MILLISECOND;
3. Aggregating at high resolution
You can group data by sub-second intervals to analyze patterns that would be invisible with standard timestamp precision:
-- Group by 100-millisecond intervals
SELECT
    toStartOfInterval(event_time, INTERVAL 100 MILLISECOND) AS time_bucket,
    count() AS event_count
FROM events
WHERE event_time >= now64(3) - INTERVAL 1 HOUR
GROUP BY time_bucket
ORDER BY time_bucket;
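High precision also lets you measure the time between consecutive events. Here's a minimal sketch, assuming the events table defined earlier, that computes per-user gaps in microseconds with a window function:
-- Microsecond gap between each event and the previous one per user
SELECT
    user_id,
    event_time,
    toUnixTimestamp64Micro(event_time)
        - lagInFrame(toUnixTimestamp64Micro(event_time), 1)
          OVER (PARTITION BY user_id ORDER BY event_time
                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS gap_micro
FROM events
ORDER BY user_id, event_time;
-- Note: the first event per user has no predecessor, so lagInFrame
-- returns 0 and gap_micro equals the raw timestamp for that row.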
Converting strings to DateTime64 in ClickHouse
Real-world applications often receive timestamp data as strings from external systems, APIs, or log files that require conversion to DateTime64 format.
1. Using parseDateTime64BestEffort
The parseDateTime64BestEffort function automatically handles various timestamp formats and extracts sub-second components:
-- Parse ISO 8601 with milliseconds
SELECT parseDateTime64BestEffort('2025-01-15T14:23:45.123Z', 3);
-- Parse with timezone offset
SELECT parseDateTime64BestEffort('2025-01-15 14:23:45.123+02:00', 3);
-- Handle various formats automatically
SELECT parseDateTime64BestEffort('2025-01-15 14:23:45.123456', 6);
2. Using toDateTime64
The toDateTime64 function provides more control when converting Unix timestamps or when you know the exact format:
-- Convert Unix timestamp with fractional seconds
SELECT toDateTime64(1735787045.123456, 6);
-- Convert string with explicit precision
SELECT toDateTime64('2025-01-15 14:23:45.123', 3);
-- Convert with timezone
SELECT toDateTime64('1735787045.123', 3, 'UTC');
Common string formats that work reliably include ISO 8601 with sub-seconds (2025-01-02T03:04:05.678Z), RFC 3339 variants (2025-01-02 03:04:05.678+00:00), and Unix epoch values with a fractional part (1735787045.678).
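When input quality is uncertain, the OrNull variant returns NULL instead of throwing an exception on malformed strings, which keeps batch queries from aborting; a minimal sketch:
-- Malformed input yields NULL rather than an error
SELECT
    parseDateTime64BestEffortOrNull('2025-01-02T03:04:05.678Z', 3) AS parsed_ok,
    parseDateTime64BestEffortOrNull('not a timestamp', 3) AS parsed_bad;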
now vs now64 vs the toUnixTimestamp64 functions
Understanding the differences between ClickHouse's timestamp functions helps you choose the right tool for each situation.
1. Precision comparison
Each function serves different precision requirements and returns different data types:
| Function | Precision | Return type | Use case |
| --- | --- | --- | --- |
| now() | Seconds | DateTime | Coarse-grained timing, compatibility |
| now64(p) | Sub-second | DateTime64(p) | High-resolution event timing |
| toUnixTimestamp64Milli / Micro / Nano | Sub-second | Int64 | Numeric timestamps, interop |
2. Storage impact
DateTime64 stores every value as a 64-bit integer, so DateTime64(3) and DateTime64(9) columns both occupy 8 bytes per value uncompressed. Higher precision still costs you at the compression layer: the extra low-order digits add entropy that compresses less effectively. The difference becomes significant when you're storing billions of timestamps.
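One way to see the effect on your own data is to compare compressed and uncompressed sizes in the system.columns table; a quick diagnostic sketch:
-- On-disk footprint of DateTime64 columns in the current database
SELECT
    table,
    name,
    type,
    formatReadableSize(data_compressed_bytes) AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed
FROM system.columns
WHERE database = currentDatabase()
  AND type LIKE 'DateTime64%';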
3. Typical use cases
Choose now() for general logging where second-level precision suffices. Use now64() when you want to track rapid events or measure performance with sub-second accuracy. Opt for the toUnixTimestamp64Milli/Micro/Nano conversions when interfacing with systems that expect numeric Unix timestamps with fractional components.
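A minimal sketch of the numeric conversions; note that the function suffix, not a parameter, selects the resolution:
-- DateTime64 to Int64 Unix representations at different resolutions
SELECT
    now64(3) AS ts,
    toUnixTimestamp64Milli(now64(3)) AS unix_ms,  -- e.g. 1736951025123
    toUnixTimestamp64Micro(now64(6)) AS unix_us;  -- e.g. 1736951025123456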
Handling time zones and edge cases
Production deployments often involve complex timezone handling and edge cases that can affect timestamp accuracy.
1. Setting session time zone
You can specify time zones either in the now64() function call or at the session level:
-- Function-level timezone
SELECT now64(3, 'Europe/London');
-- Session-level timezone (affects all timestamp operations)
SET timezone = 'America/New_York';
SELECT now64(3);
Session-level timezone settings affect all timestamp operations in your query, which can be more efficient than specifying timezones in individual function calls.
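A common pattern is to store everything in UTC and convert only at read time. A minimal sketch, assuming the events table holds UTC timestamps:
-- Convert stored UTC values to a display time zone at query time
SELECT
    event_time,
    toTimeZone(event_time, 'Europe/Madrid') AS madrid_time
FROM events
LIMIT 5;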
2. Dealing with leap seconds
ClickHouse handles leap seconds by following the Unix timestamp convention, which doesn't account for leap seconds. This means that during leap second events, there might be slight discrepancies between ClickHouse timestamps and UTC time, though this rarely affects practical applications.
Performance tips for high-throughput timestamping
Optimizing timestamp operations becomes crucial when you're processing millions of events per second.
1. Use low-cardinality time zones
When working with multiple time zones, consider using the LowCardinality data type for timezone strings to reduce memory usage and improve query performance:
CREATE TABLE global_events (
    event_time DateTime64(3),
    timezone LowCardinality(String),
    user_id String
) ENGINE = MergeTree()
ORDER BY event_time;
2. Avoid repeated now64 calls in large inserts
Within a single query, now64() is evaluated once and treated as a constant, so embedding it in an INSERT ... SELECT adds little cost. The real overhead comes from generating timestamps row by row in client code. For bulk inserts, compute the timestamp once per batch or let the server assign it through a DEFAULT column expression:
-- Embeds the timestamp in the SELECT (now64() is evaluated once per query)
INSERT INTO events
SELECT id, now64(3), user_id, action
FROM large_dataset;

-- Alternative: omit the column and rely on a DEFAULT expression
-- (requires event_time DateTime64(6) DEFAULT now64(6) in the events definition)
INSERT INTO events (id, user_id, action)
SELECT id, user_id, action
FROM large_dataset;
- Materialized columns: Use materialized columns or DEFAULT expressions to avoid repeated function calls during inserts (see the sketch after this list)
- Batch processing: Precompute timestamps per batch in client code when inserting large datasets
- Appropriate precision: Choose the lowest precision that meets your requirements to minimize CPU and storage overhead
- Partitioning strategy: Leverage partitioning and primary keys on time columns for faster range queries
- Session-level settings: Set timezone at session level to avoid per-row conversions when possible
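Here's a minimal sketch of the materialized-column approach from the first tip, using a hypothetical api_logs table:
-- ingested_at is computed server-side at insert time; clients never send it
CREATE TABLE api_logs (
    request_id String,
    endpoint String,
    ingested_at DateTime64(3) MATERIALIZED now64(3)
) ENGINE = MergeTree()
ORDER BY (endpoint, ingested_at);

INSERT INTO api_logs (request_id, endpoint) VALUES ('req-1', '/v1/users');
-- SELECT * omits MATERIALIZED columns, so query ingested_at explicitly:
SELECT request_id, endpoint, ingested_at FROM api_logs;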
Building a ClickHouse-based time series analytics API with Tinybird
Let's create a complete example that demonstrates how to build a real-time API for high-precision timestamps using Tinybird's managed ClickHouse platform.
This walkthrough shows how to ingest timestamp data, process it with ClickHouse functions, and expose it through a real-time API endpoint built with Tinybird.
1. Create the data source
First, create a data source file (timestamp_events.datasource) to store timestamp events:
SCHEMA >
    `timestamp` DateTime64(6) `json:$.timestamp`,
    `event_type` String `json:$.event_type`,
    `user_id` String `json:$.user_id`,
    `precision_level` UInt8 `json:$.precision_level`,
    `timezone` String `json:$.timezone`

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
Now build the project locally:
tb dev
This builds the project on Tinybird Local and creates your data source. Then ingest some sample data using the Events API:
curl -X POST "http://localhost:7181/v0/events?name=timestamp_events" \
-H "Content-Type: application/json" \
-d '{"timestamp": "2025-01-15 14:30:15.123456", "event_type": "user_action", "user_id": "user123", "precision_level": 6, "timezone": "UTC"}
{"timestamp": "2025-01-15 14:30:15.234567", "event_type": "api_call", "user_id": "user456", "precision_level": 6, "timezone": "America/New_York"}
{"timestamp": "2025-01-15 14:30:15.345678", "event_type": "database_query", "user_id": "user789", "precision_level": 6, "timezone": "Europe/London"}'
2. Create the pipe files
Create a pipe file (timestamp_analytics.pipe) that surfaces recent events and exposes filtered analytics as an endpoint. Note that now64() requires constant arguments in ClickHouse, so precision and timezone can't be taken from per-row column values:
NODE recent_events
SQL >
    SELECT
        timestamp,
        event_type,
        user_id,
        precision_level,
        timezone,
        now64(6) AS server_time,
        toUnixTimestamp64Micro(timestamp) AS unix_micro
    FROM timestamp_events
    WHERE timestamp >= now64(6) - INTERVAL 1 HOUR

NODE analytics_by_type
SQL >
    %
    SELECT
        event_type,
        timezone,
        precision_level,
        COUNT(*) AS event_count,
        COUNT(DISTINCT user_id) AS unique_users,
        MIN(timestamp) AS first_event,
        MAX(timestamp) AS last_event,
        AVG(toUnixTimestamp64Micro(timestamp)) AS avg_unix_timestamp
    FROM timestamp_events
    WHERE 1 = 1
        {% if defined(event_type) %}
        AND event_type = {{ String(event_type) }}
        {% end %}
        {% if defined(timezone) %}
        AND timezone = {{ String(timezone) }}
        {% end %}
        {% if defined(hours_back) %}
        AND timestamp >= now64(6) - INTERVAL {{ Int32(hours_back) }} HOUR
        {% end %}
    GROUP BY event_type, timezone, precision_level
    ORDER BY event_count DESC
    LIMIT {{ Int32(limit, 100) }}

TYPE endpoint
3. Deploy and test the API
Deploy your project to Tinybird Cloud:
tb --cloud deploy
Then test your API endpoint:
# Get all events from the last hour
curl -H "Authorization: Bearer <your_token>" \
"https://api.tinybird.co/v0/pipes/timestamp_analytics.json"
# Filter by event type
curl -H "Authorization: Bearer <your_token>" \
"https://api.tinybird.co/v0/pipes/timestamp_analytics.json?event_type=user_action"
# Filter by timezone and time range
curl -H "Authorization: Bearer <your_token>" \
"https://api.tinybird.co/v0/pipes/timestamp_analytics.json?timezone=UTC&hours_back=2"
4. Sample API response
The API returns structured data with high-precision timestamp analytics:
{
  "meta": [
    { "name": "event_type", "type": "String" },
    { "name": "timezone", "type": "String" },
    { "name": "precision_level", "type": "UInt8" },
    { "name": "event_count", "type": "UInt64" },
    { "name": "unique_users", "type": "UInt64" },
    { "name": "first_event", "type": "DateTime64(6)" },
    { "name": "last_event", "type": "DateTime64(6)" },
    { "name": "avg_unix_timestamp", "type": "Float64" }
  ],
  "data": [
    {
      "event_type": "user_action",
      "timezone": "UTC",
      "precision_level": 6,
      "event_count": 1247,
      "unique_users": 892,
      "first_event": "2025-01-15 14:30:15.123456",
      "last_event": "2025-01-15 15:45:22.789012",
      "avg_unix_timestamp": 1737034215123456.0
    },
    {
      "event_type": "api_call",
      "timezone": "America/New_York",
      "precision_level": 6,
      "event_count": 456,
      "unique_users": 234,
      "first_event": "2025-01-15 14:30:15.234567",
      "last_event": "2025-01-15 15:30:18.345678",
      "avg_unix_timestamp": 1737034215234567.0
    }
  ],
  "rows": 2,
  "statistics": {
    "elapsed": 0.045,
    "rows_read": 2000,
    "bytes_read": 89000
  }
}
Build real-time analytics APIs with ClickHouse using Tinybird
Managing ClickHouse infrastructure for high-precision timestamp operations is complex, from optimizing storage and query performance to handling timezone edge cases and maintaining consistent precision across distributed systems.
Tinybird's managed ClickHouse service removes that overhead by handling optimization, scaling, and maintenance for you. It exposes ClickHouse features such as now64() and DateTime64 precision without requiring you to manage database clusters, performance monitoring, or version upgrades.
This lets teams focus on application development rather than database operations and production ClickHouse tuning.
Sign up for a free Tinybird account to build and test your first ClickHouse-based API in just a few minutes.
Additional resources
- now64 function
- DateTime64 data type
- toDateTime64 function
- parseDateTime64BestEffort function
- toUnixTimestamp64Micro function
- How to extract URL query strings without ? or # in ClickHouse
FAQs about high-precision timestamps
Can I set a global default precision for now64?
No. ClickHouse requires specifying the precision in each now64() call or through DEFAULT column expressions. You can standardize precision across your application by using consistent DEFAULT values in table schemas.
Is now64 monotonic across distributed servers?
The now64() function reflects the system clock on each server, so timestamps may not be perfectly monotonic across nodes due to clock synchronization differences. If strict ordering is required, use monotonically increasing sequence numbers, or pair each timestamp with a unique identifier such as generateUUIDv4() to break ties; UUIDs guarantee uniqueness, not ordering.