Working with time data is common in analytics databases. Sometimes date and time values require more detail than just seconds. ClickHouse provides data types that can store dates and times with different levels of precision.
This article explains how to convert values to the DateTime64(6) type using the timestamp() function in ClickHouse. The focus is on what DateTime64(6) stores, when it is used, and how its precision and storage compare to other similar types.
What DateTime64(6) stores and when to use it
DateTime64(6) is a data type in ClickHouse that stores both the date and the time, including microseconds. The "(6)" means it contains six digits after the decimal point to represent fractions of a second, so it can record times down to the microsecond.
This level of precision is often used when events happen very quickly, such as financial transactions, sensor readings, or logs from high-frequency systems. It helps track the exact order and timing of events that occur close together.
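For instance, two readings one microsecond apart stay distinct and compare in the right order:
-- The 1-microsecond difference survives the conversion
SELECT timestamp('2025-09-18 12:00:00.000002')
     > timestamp('2025-09-18 12:00:00.000001') AS later;
-- Returns: 1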
The storage footprint of DateTime64(6) is larger than the regular DateTime type because it keeps more information per value. DateTime64(6) values are stored as 64-bit integers representing microseconds since the Unix epoch, while DateTime uses 32-bit integers for whole seconds only. However, ClickHouse compresses columns of time data efficiently, especially when consecutive values are close together.
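The size difference is easy to verify directly; byteSize reports the uncompressed in-memory size of each value:
-- DateTime is 4 bytes per value; DateTime64(6) is 8 bytes
SELECT
    byteSize(now()) AS dt_bytes,
    byteSize(now64(6)) AS dt64_bytes;
-- Returns: 4, 8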
How the timestamp() function works in ClickHouse
The timestamp() function in ClickHouse converts a value into a DateTime64(6) value. It is commonly used to convert date, time, or timestamp strings to the microsecond-precision format, and it can add an additional time value during the conversion.
The syntax for the function is:
timestamp(expr[, expr_time])
The first argument, expr, is the value to convert. The second argument, expr_time, is optional and adds a time-of-day value to the result. Whatever the input, the return value is always a DateTime64(6).
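A quick way to confirm the return type:
SELECT toTypeName(timestamp('2025-01-01 00:00:00'));
-- Returns: DateTime64(6)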
For example, to convert a string to DateTime64(6):
SELECT timestamp('2025-09-18 16:30:12.123456');
-- Returns: 2025-09-18 16:30:12.123456
If a second argument is included, it is parsed as a time of day and added to the converted value:
SELECT timestamp('2025-09-18 16:30:12.123456', '00:00:02.5');
-- Returns: 2025-09-18 16:30:14.623456 (adds 2.5 seconds)
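The same adjustment can also be written with explicit date arithmetic on the converted value; a sketch using addMicroseconds (2,500,000 microseconds is 2.5 seconds):
SELECT addMicroseconds(timestamp('2025-09-18 16:30:12.123456'), 2500000);
-- Returns: 2025-09-18 16:30:14.623456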
Supported input types and conversion examples
The timestamp() function always returns a value in the DateTime64(6) format; per the ClickHouse documentation, its input is a string. The three source categories covered below are string values, numeric epoch timestamps, and existing date/time columns, with the latter two taking a small companion step.
String inputs are commonly used to represent dates and times in text form. The function can parse several string formats:
-- ISO 8601 format
SELECT timestamp('2025-09-18T13:45:22.123456');
-- Basic date-time format
SELECT timestamp('2025-09-18 13:45:22');
-- String with microseconds
SELECT timestamp('2025-09-18 13:45:22.654321');
Unix epoch values count seconds since January 1, 1970 (UTC). Because timestamp() validates its input as a string, numeric epoch values are converted with toDateTime64, which produces the same DateTime64(6) type at precision 6 and handles both integers and floats:
-- Unix timestamp (integer)
SELECT toDateTime64(1758206722, 6);
-- Unix timestamp with fractional seconds (float)
SELECT toDateTime64(1758206722.789012, 6);
Existing DateTime columns can also be widened to DateTime64(6), either by rendering them as strings for timestamp() or by casting with toDateTime64. If the source has lower precision, the result carries trailing zeros in the microsecond part:
-- Convert DateTime to DateTime64(6)
SELECT timestamp(toString(toDateTime('2025-09-18 13:45:22')));
-- Returns: 2025-09-18 13:45:22.000000
-- Convert DateTime64(3) to DateTime64(6)
SELECT toDateTime64(toDateTime64('2025-09-18 13:45:22.123', 3), 6);
-- Returns: 2025-09-18 13:45:22.123000
Adding time during conversion
The timestamp() function accepts an optional second argument: a time-of-day value added to the value being converted. This allows precise time adjustments in a single function call.
Adding whole seconds or minutes works by passing a time string as the second argument:
SELECT timestamp('2025-01-01 12:00:00', '00:01:00');
-- Returns: 2025-01-01 12:01:00.000000
Adding fractional seconds uses a fractional time string for precise adjustments:
SELECT timestamp('2025-01-01 12:00:00', '00:00:30.5');
-- Returns: 2025-01-01 12:00:30.500000
For larger or calendar-aware additions, interval arithmetic on the converted result works:
SELECT timestamp('2025-01-01 12:00:00') + INTERVAL 2 HOUR;
-- Returns: 2025-01-01 14:00:00.000000
Converting strings to DateTime64 without precision loss
ClickHouse can convert strings into DateTime64(6) while preserving microsecond detail if the string format matches supported patterns. String inputs like '2025-09-18 14:23:45.123456' contain microsecond information that can be preserved during conversion.
ClickHouse uses specific rules to read and interpret string inputs. The date_time_input_format setting controls how flexible the conversion process is:
- 'basic': Only allows strict, fixed formats for string parsing
- 'best_effort': Allows more variations in input format and tries to extract date and time from different structures
The 'best_effort' mode is useful when date strings come from different sources. It can recognize formats like '2025/09/18 14:23:45.123456', '2025-09-18T14:23:45.123456', or even '18 Sep 2025 14:23:45.123456'.
To use best effort parsing in a query:
SELECT timestamp('2025/09/18 14:23:45.123456')
SETTINGS date_time_input_format = 'best_effort';
This query parses the input with slashes and preserves the microsecond precision, while the same input would produce an error with the 'basic' setting.
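To apply the mode for a whole session instead of a single query:
SET date_time_input_format = 'best_effort';
SELECT timestamp('18 Sep 2025 14:23:45.123456');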
Troubleshooting common conversion errors
When using the timestamp() function, two types of issues are common: parsing errors and precision truncation.
Cannot parse input errors occur when string inputs don't match a recognized format. For example:
SELECT timestamp('2025/31/12 13:45:22');
-- Error: Cannot parse input
This fails because the default parser expects formats like YYYY-MM-DD HH:MM:SS, and '31' is not a valid month. With the month and day in a valid order, setting date_time_input_format = 'best_effort' handles the slash-separated format:
SELECT timestamp('2025/12/31 13:45:22')
SETTINGS date_time_input_format = 'best_effort';
-- Works: 2025-12-31 13:45:22.000000
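When input quality is uncertain, a null-returning parser avoids failing the whole query; a sketch using parseDateTime64BestEffortOrNull at scale 6, which yields the same microsecond precision:
-- NULL instead of an exception for malformed strings
SELECT parseDateTime64BestEffortOrNull('not a date', 6);
-- Returns: NULL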
Precision truncation happens when input has more than six digits after the decimal point. ClickHouse silently removes extra digits without rounding:
SELECT timestamp('2025-09-18 13:45:22.123456789');
-- Returns: 2025-09-18 13:45:22.123456 (last 3 digits removed)
To avoid data loss, ensure the input precision matches the desired output or accept that extra precision will be truncated.
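If silent truncation is a concern, one way to detect it is to parse at DateTime64's maximum scale of 9 and compare; a sketch assuming a DateTime64(9) intermediate is acceptable:
SELECT
    toDateTime64('2025-09-18 13:45:22.123456789', 9)
        != toDateTime64(timestamp('2025-09-18 13:45:22.123456789'), 9) AS lost_precision;
-- Returns: 1 (sub-microsecond digits were dropped)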
timestamp() vs toDateTime64() comparison
ClickHouse provides multiple functions for creating DateTime64 values. Here's how timestamp() compares to toDateTime64():
| Function | Precision Control | Timezone Control | Best Use Case |
|---|---|---|---|
| timestamp() | Always 6 (microseconds) | Uses server/column timezone | Quick conversion to DateTime64(6) |
| toDateTime64() | Any (0-9) | Explicit timezone parameter | When you need specific precision or timezone |
The timestamp() function is designed for converting values directly to DateTime64(6) without specifying precision. The toDateTime64() function allows setting any precision level and optionally specifying a timezone:
-- timestamp() always returns DateTime64(6)
SELECT timestamp('2025-09-18 13:45:22.123');
-- toDateTime64() allows custom precision and timezone
SELECT toDateTime64('2025-09-18 13:45:22.123', 3, 'UTC');
For bulk conversions or high-volume data processing, both functions perform similarly and are optimized for vectorized execution on large datasets.
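As an illustration of a bulk conversion, here is a hedged sketch that materializes converted values into a new table (raw_events and raw_ts are illustrative names):
-- Backfill a DateTime64(6) column from raw string timestamps
CREATE TABLE events_dt64
ENGINE = MergeTree
ORDER BY event_time AS
SELECT timestamp(raw_ts) AS event_time, *
FROM raw_events
SETTINGS date_time_input_format = 'best_effort';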
Building a real-time APM analytics API with high-precision timestamps
This example demonstrates how to build a production-grade Application Performance Monitoring (APM) API using Tinybird's managed ClickHouse platform. The API processes millions of high-precision timestamp events and delivers sub-100ms responses for complex time series aggregations.
Streaming high-frequency APM data
Start with real-time application performance data containing microsecond-precision timestamps. Create an NDJSON data source for streaming ingestion:
{"request_id": "req_001", "timestamp_str": "2025-09-18 14:32:45.123456", "service": "auth-api", "endpoint": "/login", "status_code": 200, "response_time_ms": 45.67, "user_agent": "mobile-app"}
{"request_id": "req_002", "timestamp_str": "2025-09-18 14:32:45.156789", "service": "auth-api", "endpoint": "/login", "status_code": 500, "response_time_ms": 1203.45, "user_agent": "web-app"}
{"request_id": "req_003", "timestamp_str": "2025-09-18 14:32:45.187234", "service": "payment-api", "endpoint": "/charge", "status_code": 200, "response_time_ms": 89.12, "user_agent": "mobile-app"}
{"request_id": "req_004", "timestamp_str": "2025-09-18 14:32:45.201567", "service": "payment-api", "endpoint": "/refund", "status_code": 200, "response_time_ms": 156.78, "user_agent": "admin-dashboard"}
tb datasource create --name apm_events --file apm_sample.ndjson
tb deploy
tb datasource append apm_events apm_sample.ndjson
Building complex time series aggregations
Create a pipe that converts timestamps and performs real-time performance analytics:
NODE performance_metrics
SQL >
%
WITH parsed_events AS (
SELECT
request_id,
timestamp(timestamp_str) AS request_time,
service,
endpoint,
status_code,
response_time_ms,
user_agent,
status_code >= 400 AS is_error
FROM apm_events
WHERE timestamp(timestamp_str) BETWEEN
{{DateTime64(start_time, '2025-09-18 14:30:00.000000')}} AND
{{DateTime64(end_time, '2025-09-18 14:35:00.000000')}}
{% if defined(service_filter) %}
AND service = {{String(service_filter)}}
{% end %}
{% if defined(endpoint_filter) %}
AND endpoint = {{String(endpoint_filter)}}
{% end %}
),
time_buckets AS (
SELECT
toStartOfMinute(request_time) AS time_bucket,
service,
endpoint,
COUNT(*) AS total_requests,
countIf(is_error) AS error_count,
countIf(NOT is_error) AS success_count,
round(countIf(is_error) / COUNT(*) * 100, 2) AS error_rate_percent,
round(avg(response_time_ms), 2) AS avg_response_time,
round(quantile(0.50)(response_time_ms), 2) AS p50_response_time,
round(quantile(0.95)(response_time_ms), 2) AS p95_response_time,
round(quantile(0.99)(response_time_ms), 2) AS p99_response_time,
min(response_time_ms) AS min_response_time,
max(response_time_ms) AS max_response_time,
uniq(user_agent) AS unique_user_agents
FROM parsed_events
GROUP BY time_bucket, service, endpoint
)
SELECT
time_bucket,
service,
endpoint,
total_requests,
error_count,
success_count,
error_rate_percent,
avg_response_time,
p50_response_time,
p95_response_time,
p99_response_time,
min_response_time,
max_response_time,
unique_user_agents,
-- Real-time health score calculation
CASE
WHEN error_rate_percent > 10 THEN 'critical'
WHEN error_rate_percent > 5 OR p95_response_time > 1000 THEN 'warning'
ELSE 'healthy'
END AS health_status
FROM time_buckets
ORDER BY time_bucket DESC, total_requests DESC
{% if defined(limit) %}
LIMIT {{Int32(limit, 50)}}
{% end %}
TYPE ENDPOINT
This query showcases ClickHouse's time series capabilities:
- High-precision timestamp conversion with timestamp(), preserving microseconds
- Time bucketing for real-time aggregation windows
- Statistical functions including percentiles (quantile(0.95)) for SLA monitoring
- Conditional aggregation with countIf() for error rate calculations
- Multi-dimensional grouping by service and endpoint
- Dynamic filtering with parameterized service and endpoint filters
- Real-time health scoring with business logic
Deploy and query the high-performance API
Deploy the analytics pipeline:
tb --cloud deploy
Query real-time APM metrics with sub-100ms latency:
# Get performance metrics for a specific service and time range
curl "https://api.tinybird.co/v0/pipes/performance_metrics.json?start_time=2025-09-18%2014:30:00.000000&end_time=2025-09-18%2014:35:00.000000&service_filter=auth-api&limit=20&token=YOUR_TOKEN"
The API returns comprehensive performance analytics:
{
"data": [
{
"time_bucket": "2025-09-18 14:32:00",
"service": "auth-api",
"endpoint": "/login",
"total_requests": 1247,
"error_count": 23,
"success_count": 1224,
"error_rate_percent": 1.84,
"avg_response_time": 78.45,
"p50_response_time": 45.67,
"p95_response_time": 203.89,
"p99_response_time": 456.78,
"min_response_time": 12.34,
"max_response_time": 1203.45,
"unique_user_agents": 3,
"health_status": "healthy"
},
{
"time_bucket": "2025-09-18 14:32:00",
"service": "payment-api",
"endpoint": "/charge",
"total_requests": 892,
"error_count": 45,
"success_count": 847,
"error_rate_percent": 5.04,
"avg_response_time": 134.67,
"p50_response_time": 89.12,
"p95_response_time": 387.45,
"p99_response_time": 789.23,
"min_response_time": 34.56,
"max_response_time": 1567.89,
"unique_user_agents": 2,
"health_status": "warning"
}
]
}
This production-ready API can process millions of events while maintaining sub-second query latency through ClickHouse's columnar storage and Tinybird's optimized infrastructure. The high-precision timestamp handling enables accurate performance monitoring at microsecond granularity, essential for identifying performance bottlenecks in distributed systems.
FAQs about converting to DateTime64(6)
How do I migrate an existing DateTime column to DateTime64(6) in ClickHouse?
Use the ALTER TABLE statement with MODIFY COLUMN to change the column type in place: ALTER TABLE events MODIFY COLUMN created_at DateTime64(6). Alternatively, add a new DateTime64(6) column, populate it from the original column, then drop the original once it is no longer needed.
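A sketch of the add-and-backfill path (events and created_at are illustrative names; the UPDATE runs as an asynchronous mutation):
-- Add a microsecond-precision column and backfill it
ALTER TABLE events ADD COLUMN created_at_us DateTime64(6);
ALTER TABLE events UPDATE created_at_us = toDateTime64(created_at, 6) WHERE 1;
-- Once verified: ALTER TABLE events DROP COLUMN created_at;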
Do ClickHouse client libraries preserve microsecond precision when reading DateTime64(6)?
Most current ClickHouse client libraries support DateTime64(6) and preserve microsecond precision in queries and results. Check your specific client library's documentation to confirm DateTime64(6) support, as some older versions or certain programming languages may require updated dependencies to avoid precision loss.
Next steps to build faster with ClickHouse
Tinybird provides a managed ClickHouse platform that reduces the operational burden of setting up and managing backend infrastructure. The platform is designed to deliver sub-second query latency for real-time analytics, depending on workload and data size, and offers a developer-focused experience with data-as-code workflows and a hosted API layer.
The platform supports live schema migrations, real-time ingestion, and built-in observability features. This setup allows building and deploying analytics solutions without handling low-level configuration or scaling concerns.
Sign up for a free Tinybird plan to start querying data and building APIs.