---
title: "How to convert dates to Unix timestamps in ClickHouse®"
excerpt: "Learn how to convert dates to Unix timestamps in ClickHouse® using toUnixTimestamp() with practical examples, timezone handling, and performance tips."
authors: "Cameron Archer"
categories: "AI Resources"
createdOn: "2025-09-30 17:15:47"
publishedOn: "2025-09-30 17:15:47"
updatedOn: "2025-09-30 17:15:47"
status: "published"
---

Converting dates to Unix timestamps in ClickHouse{% sup %}®{% /sup %} is handled by the built-in `toUnixTimestamp()` function, which transforms Date, DateTime, or String values into integers representing seconds since January 1, 1970, at midnight UTC.

We'll walk through the function's syntax, timezone handling, performance considerations, and practical examples that you can implement immediately in your ClickHouse{% sup %}®{% /sup %} queries.

## What toUnixTimestamp does and when to use it

The `toUnixTimestamp()` function takes your Date, DateTime, or String values and converts them into Unix timestamps—basically counting the seconds that have passed since January 1, 1970, at midnight UTC. Think of it as ClickHouse{% sup %}®{% /sup %}'s way of turning human-readable dates into numbers that computers love to work with.

You'll find yourself reaching for this function when you're building APIs that expect epoch seconds, exporting data to systems that store timestamps as integers, or when you want to do math with time values. Unix timestamps are timezone-agnostic, take up less storage space than datetime strings, and make comparisons lightning-fast.

## Supported input types and return value

The function accepts several input types and spits out a UInt32 value representing seconds since the Unix epoch. However, there are some important details depending on what you're converting.

### Date and Date32

Converting Date values is straightforward since ClickHouse{% sup %}®{% /sup %} treats them as midnight UTC on whatever day you specify:

```sql
SELECT toUnixTimestamp(toDate('2024-01-15')) AS epoch_seconds;
-- Returns: 1705276800
```
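As a quick cross-check outside the database, the same number falls out of plain date arithmetic. This Python sketch (not ClickHouse itself) counts days since the epoch:

```python
from datetime import date

# Days elapsed since 1970-01-01, times 86,400 seconds per day, reproduces
# the value toUnixTimestamp(toDate('2024-01-15')) returns.
epoch_seconds = (date(2024, 1, 15) - date(1970, 1, 1)).days * 86_400
print(epoch_seconds)  # → 1705276800
```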

Date32 works the same way but supports a wider range, from 1900 to 2299, while regular Date only covers 1970 to 2149.

### DateTime and DateTime64

DateTime conversions keep the full timestamp precision down to seconds:

```sql
SELECT toUnixTimestamp(toDateTime('2024-01-15 14:30:00')) AS epoch_seconds;
-- Returns: 1705329000 (assuming a UTC session timezone)
```
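If you want to verify the expected output independently, Python's `datetime` module gives the same figure when the input string is interpreted as UTC:

```python
from datetime import datetime, timezone

# 2024-01-15 14:30:00 read as UTC, matching a UTC session timezone.
dt = datetime(2024, 1, 15, 14, 30, 0, tzinfo=timezone.utc)
print(int(dt.timestamp()))  # → 1705329000
```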

DateTime64 values get truncated to whole seconds unless you use precision-aware variants like `toUnixTimestamp64Milli()`.

### String with explicit format

String inputs work when they match ClickHouse{% sup %}®{% /sup %}'s expected datetime formats, but this approach runs slower than working with proper datetime types:

```sql
SELECT toUnixTimestamp('2024-01-15 14:30:00') AS epoch_seconds;
-- Returns: 1705329000 (assuming a UTC session timezone)
```

### UInt32 vs UInt64 output

Standard `toUnixTimestamp()` returns UInt32, which can only represent dates up to February 7, 2106. Downstream systems that store timestamps as signed 32-bit integers hit their limit even sooner, on January 19, 2038 (the "Year 2038 problem"). For dates beyond these ranges, store values as DateTime64 and use the `toUnixTimestamp64Milli()` family of functions.
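To see where the UInt32 ceiling actually lands, you can compute the maximum representable timestamp with a short Python check (illustrative, not ClickHouse itself):

```python
from datetime import datetime, timezone

# The largest UInt32 value, interpreted as seconds past the Unix epoch.
limit = datetime.fromtimestamp(2**32 - 1, tz=timezone.utc)
print(limit.isoformat())  # → 2106-02-07T06:28:15+00:00
```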

## Basic syntax with optional time zone

The function signature is `toUnixTimestamp(date, [timezone])`, where the timezone parameter is optional but often crucial for getting consistent results.

### Without explicit time zone

When you skip the timezone parameter, ClickHouse{% sup %}®{% /sup %} uses the session timezone, which defaults to the server's configured timezone:

```sql
SELECT toUnixTimestamp(now()) AS current_epoch;
```

This behavior can bite you if your session timezone differs from your data's intended timezone.

### With explicit timezone parameter

Specifying a timezone ensures consistent conversion regardless of session settings:

```sql
SELECT toUnixTimestamp('2024-01-15 14:30:00', 'America/New_York') AS ny_epoch;
-- Converts 2:30 PM EST/EDT to UTC epoch seconds
```

Always use explicit timezones in production code to avoid confusion later. For more, see [best practices for database timestamps and time zones](https://www.tinybird.co/blog-posts/database-timestamps-timezones).
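You can double-check what an explicit timezone does to the result with Python's `zoneinfo`; on January 15 New York is on EST (UTC-5), so 2:30 PM local time is 7:30 PM UTC:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 2024-01-15 14:30 in America/New_York corresponds to 19:30 UTC.
ny = datetime(2024, 1, 15, 14, 30, tzinfo=ZoneInfo("America/New_York"))
print(int(ny.timestamp()))  # → 1705347000
```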

### Using the session timezone setting

You can set a session-level timezone that affects all conversions:

```sql
SET session_timezone = 'Europe/London';
SELECT toUnixTimestamp('2024-01-15 14:30:00') AS london_epoch;
```

## Quick examples you can copy-paste

Here are ready-to-use snippets for common conversion scenarios.

### Single value conversion

```sql
-- Current timestamp
SELECT toUnixTimestamp(now()) AS current_unix;

-- Specific datetime
SELECT toUnixTimestamp('2024-01-15 10:30:00') AS specific_unix;

-- With timezone
SELECT toUnixTimestamp('2024-01-15 10:30:00', 'UTC') AS utc_unix;
```

### Column update in place

```sql
-- Add epoch column to existing table
ALTER TABLE events ADD COLUMN epoch_time UInt32;

-- Populate from datetime column
ALTER TABLE events UPDATE epoch_time = toUnixTimestamp(created_at) WHERE epoch_time = 0;

-- Or in a SELECT query
SELECT event_id, toUnixTimestamp(created_at) AS epoch_time FROM events;
```

### Reverse conversion with toDateTime

Verify your conversions by converting back to datetime:

```sql
SELECT
    original_time,
    toUnixTimestamp(original_time) AS epoch_seconds,
    toDateTime(toUnixTimestamp(original_time)) AS converted_back
FROM events;
```
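The same round trip can be sketched outside the database with Python's `datetime` module (the value here is illustrative):

```python
from datetime import datetime, timezone

original = datetime(2024, 1, 15, 14, 30, tzinfo=timezone.utc)
epoch = int(original.timestamp())                                # forward conversion
converted_back = datetime.fromtimestamp(epoch, tz=timezone.utc)  # reverse conversion
assert converted_back == original  # lossless at second precision
```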

## Handling time zones correctly

Timezone handling is where most timestamp conversion bugs come from. Getting this right from the start prevents data headaches later.

### Working in UTC

Store all timestamps in UTC and convert to local timezones only when showing data to users:

```sql
-- Store UTC timestamps
INSERT INTO events (event_time_utc)
VALUES (toUnixTimestamp(now(), 'UTC'));

-- Convert to user timezone on read
SELECT
    event_id,
    toDateTime(epoch_time, 'America/Los_Angeles') AS local_time
FROM events;
```

This approach eliminates daylight saving time complications and makes your data portable across regions.

### Dealing with daylight saving gaps

During DST transitions, some local times don't exist (spring forward) or happen twice (fall back). ClickHouse{% sup %}®{% /sup %} resolves both cases deterministically rather than raising an error: non-existent times are adjusted to a valid time, and ambiguous times resolve to a single occurrence:

```sql
-- This time doesn't exist during spring DST transition
SELECT toUnixTimestamp('2024-03-10 02:30:00', 'America/New_York');
-- ClickHouse® automatically adjusts to a valid time
```
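Python's `zoneinfo` module makes the fall-back ambiguity concrete: the same wall-clock time maps to two different epochs, one hour apart (this demonstrates the general problem, not ClickHouse's specific resolution rule):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
# 1:30 AM on 2024-11-03 occurs twice in New York; fold picks the occurrence.
first = datetime(2024, 11, 3, 1, 30, tzinfo=ny, fold=0).timestamp()   # EDT (earlier)
second = datetime(2024, 11, 3, 1, 30, tzinfo=ny, fold=1).timestamp()  # EST (later)
print(second - first)  # → 3600.0
```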

### Converting multiple time zones in one query

When processing data from multiple timezones, use CASE statements or timezone lookup tables:

```sql
SELECT
    event_id,
    toUnixTimestamp(
        event_time,
        CASE user_region
            WHEN 'US_EAST' THEN 'America/New_York'
            WHEN 'US_WEST' THEN 'America/Los_Angeles'
            WHEN 'EU' THEN 'Europe/London'
            ELSE 'UTC'
        END
    ) AS epoch_time
FROM user_events;
```

## Converting in bulk and storing epoch seconds efficiently

Production systems often benefit from pre-computing Unix timestamps rather than converting on every query.

### Materialized view strategy

Create [materialized views](https://www.tinybird.co/docs/forward/work-with-data/optimize/materialized-views) that automatically convert timestamps during data ingestion:

```sql
CREATE MATERIALIZED VIEW events_with_epoch
ENGINE = MergeTree()
ORDER BY epoch_time
AS SELECT
    event_id,
    event_data,
    toUnixTimestamp(event_time) AS epoch_time
FROM raw_events;
```

This approach trades storage space for query performance.

### Insert time conversion with default expressions

Use default expressions to automatically generate epoch timestamps:

```sql
CREATE TABLE events (
    event_id UInt64,
    event_data String,
    event_time DateTime DEFAULT now(),
    epoch_time UInt32 DEFAULT toUnixTimestamp(event_time)
) ENGINE = MergeTree()
ORDER BY epoch_time;
```

### On read conversion versus precompute

The choice depends on your query patterns:

- **Precompute:** When you frequently filter or sort by timestamp
- **Convert on read:** When storage space is limited and conversions happen infrequently
- **Hybrid approach:** For tables with both requirements

## Common pitfalls and how to avoid them

Understanding these gotchas can save you from production issues down the line.

### Overflow after 2106

UInt32 timestamps overflow on February 7, 2106. While this seems distant, it affects systems that generate future dates:

```sql
-- This will overflow
SELECT toUnixTimestamp('2107-01-01 00:00:00');

-- Use 64-bit alternative instead
SELECT intDiv(toUnixTimestamp64Milli(toDateTime64('2107-01-01 00:00:00', 3)), 1000);
```

### Wrong data type promotions

ClickHouse{% sup %}®{% /sup %} sometimes promotes data types in unexpected ways. Always verify your output types:

```sql
-- These might not return what you expect
SELECT toUnixTimestamp(toString(now()));  -- String conversion is slow
SELECT toUnixTimestamp(toUInt32(1705276800));  -- Treating epoch as datetime
```

### String parsing performance traps

Converting strings runs significantly slower than working with native datetime types:

```sql
-- Slow: string parsing on every row
SELECT toUnixTimestamp(event_time_string) FROM large_table;

-- Fast: convert once during ingestion
ALTER TABLE large_table ADD COLUMN event_time DateTime;
ALTER TABLE large_table UPDATE event_time = toDateTime(event_time_string);
```

## Performance tips and best practices

These optimization strategies can dramatically improve query performance at scale.

### Indexing on epoch seconds

Sorting by epoch seconds keeps range scans fast; DateTime is itself stored internally as epoch seconds, so both integer and DateTime sort keys handle range queries efficiently:

```sql
CREATE TABLE events (
    event_id UInt64,
    epoch_time UInt32,
    event_data String
) ENGINE = MergeTree()
ORDER BY epoch_time;

-- Fast range query
SELECT * FROM events
WHERE epoch_time BETWEEN 1705276800 AND 1705363200;
```

### Compression and disk usage

Integer timestamps compress better than datetime strings and use less disk space:

- **UInt32 epoch:** 4 bytes per value
- **DateTime string:** 19+ bytes per value
- **Compression ratio:** Fixed-width integers typically compress several times better than repeated datetime strings
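The raw size difference is easy to demonstrate: a UInt32 epoch occupies a fixed 4 bytes before compression, versus 19 bytes for the equivalent text form:

```python
import struct

epoch_bytes = struct.pack("<I", 1705329000)    # UInt32 epoch: fixed 4 bytes
text_bytes = "2024-01-15 14:30:00".encode()    # datetime string: 19 bytes
print(len(epoch_bytes), len(text_bytes))  # → 4 19
```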

### SIMD advantages for batch conversion

ClickHouse{% sup %}®{% /sup %}'s SIMD optimizations work best with batch operations. Convert entire columns rather than individual values when possible:

```sql
-- Efficient: batch conversion
SELECT event_id, toUnixTimestamp(event_time) FROM events;

-- Less efficient: row-by-row in complex expressions
SELECT event_id,
       CASE WHEN condition THEN toUnixTimestamp(event_time) ELSE 0 END
FROM events;
```

## Building a real-time analytics API with Tinybird

Let's walk through creating a practical API that converts timestamps on demand using [Tinybird's](https://www.tinybird.co) managed ClickHouse{% sup %}®{% /sup %} platform.

### Creating the data source

First, create a [data source](https://www.tinybird.co/docs/forward/get-data-in/data-sources) with datetime columns that we'll convert:

```tinybird
SCHEMA >
    event_id UInt64,
    event_name String,
    created_at DateTime,
    user_timezone String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(created_at)"
ENGINE_SORTING_KEY "created_at, event_id"
```

### Writing a pipe that converts on read

Create a Tinybird [pipe](https://www.tinybird.co/docs/forward/work-with-data/pipes) that converts timestamps based on user requirements:

```tinybird
NODE convert_timestamps
SQL >
    %
    SELECT
        event_id,
        event_name,
        created_at,
        toUnixTimestamp(created_at, user_timezone) AS epoch_seconds,
        toUnixTimestamp(created_at, 'UTC') AS utc_epoch
    FROM events_datasource
    WHERE created_at >= {{DateTime(start_date, '2024-01-01 00:00:00')}}
      AND created_at < {{DateTime(end_date, '2024-12-31 23:59:59')}}
```

### Deploying as a parameterized endpoint

Publish this pipe as a REST [API endpoint](https://www.tinybird.co/docs/forward/work-with-data/publish-data/endpoints) that accepts date range parameters:

```bash
curl "https://api.tinybird.co/v0/pipes/timestamp_converter.json?start_date=2024-01-01&end_date=2024-01-31&token=READ_TOKEN"
```

The API returns JSON with both original datetimes and converted Unix timestamps, letting client applications work with whichever format they prefer.

## Next steps with Tinybird

Tinybird provides a managed analytics backend built on ClickHouse{% sup %}®{% /sup %}, with integrated API and pipeline tooling. This allows developers to focus on application logic instead of managing database infrastructure, scaling, or deployment pipelines.

Whether you're building user-facing analytics, internal dashboards, or AI-powered applications that need fast access to time-series data, Tinybird manages infrastructure and scaling, allowing developers to focus on building application features.

[Build your first API for free](https://cloud.tinybird.co/signup) with 1,000 requests per day included.

## Additional resources

- [ClickHouse{% sup %}®{% /sup %} toUnixTimestamp() docs](https://clickhouse.com/docs/sql-reference/functions/date-time-functions#toUnixTimestamp)
- [ClickHouse{% sup %}®{% /sup %} toUnixTimestamp64Milli() docs](https://clickhouse.com/docs/sql-reference/functions/type-conversion-functions#tounixtimestamp64milli)
- [Best practices for timestamps and time zones in databases](https://www.tinybird.co/blog-posts/database-timestamps-timezones)
- [Tinybird vs. ClickHouse{% sup %}®{% /sup %}: What's the difference?](https://www.tinybird.co/blog-posts/tinybird-vs-clickhouse)

## Frequently asked questions about toUnixTimestamp

### Is toUnixTimestamp deterministic across ClickHouse{% sup %}®{% /sup %} versions?

Yes, the function produces consistent results across versions for the same input and timezone. The underlying algorithm follows standard Unix epoch calculations, so a specific datetime and timezone combination will always yield the same Unix timestamp regardless of your ClickHouse{% sup %}®{% /sup %} version.

### Which integer type should I choose for future dates beyond 2038?

Use the `toUnixTimestamp64Milli()` family with DateTime64 values for far-future dates. Standard `toUnixTimestamp()` returns UInt32, which overflows in 2106, and systems that store timestamps as signed 32-bit integers overflow in 2038, causing incorrect results. The 64-bit variants handle dates well beyond these limits.

### Can toUnixTimestamp convert millisecond precision timestamps?

Use `toUnixTimestamp64Milli()`, `toUnixTimestamp64Micro()`, or `toUnixTimestamp64Nano()` on DateTime64 values to preserve sub-second precision. The standard function truncates to seconds, losing sub-second information that might be critical for high-frequency event tracking or precise timing measurements.
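The truncation is easy to picture with a quick Python sketch: a timestamp carrying 123 milliseconds keeps them in a millisecond epoch but loses them in a whole-second one (values are illustrative):

```python
from datetime import datetime, timezone

dt = datetime(2024, 1, 15, 14, 30, 0, 123000, tzinfo=timezone.utc)
s_epoch = int(dt.timestamp())                       # truncated to whole seconds
ms_epoch = s_epoch * 1000 + dt.microsecond // 1000  # millisecond precision kept
print(ms_epoch, s_epoch)  # → 1705329000123 1705329000
```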
