Go's concurrency model and ClickHouse's columnar speed make them a natural pair for building real-time analytics into applications. The challenge is getting the integration right: choosing a driver, handling batches efficiently, and managing connection pools under load.
This guide walks through connecting Go to ClickHouse with working code examples, from basic queries to production-ready configurations with TLS and connection pooling. You'll see how to insert data in batches, avoid common mistakes with nullable types, and deploy without managing infrastructure yourself.
Why use Go with ClickHouse for real-time analytics
Go's built-in concurrency model makes it a natural fit for handling multiple database connections at once. ClickHouse's columnar storage can scan billions of rows in milliseconds and has benchmarked at 839 queries per minute on 100-million-row datasets, which pairs well with Go's efficient memory management when you're building high-throughput analytics systems.
You'll see this combination used for log aggregation that processes millions of events per second, time-series dashboards that query recent metrics, and recommendation engines that need sub-second response times. The pairing works particularly well when you have many concurrent users all querying large datasets without blocking your application threads. For enterprise Java applications, check out our ClickHouse Java connection guide with JDBC driver examples.
Choose and install a Go ClickHouse driver
Two main drivers exist for connecting Go to ClickHouse. The official clickhouse-go driver uses ClickHouse's native TCP protocol and generally delivers better performance. A database/sql wrapper provides compatibility with Go's standard sql.DB interface.
Official clickhouse-go driver
The native protocol driver connects directly to ClickHouse over TCP port 9000. This reduces overhead compared to HTTP-based connections and handles features like compression and streaming inserts more efficiently.
Install the driver using Go modules:
go get github.com/ClickHouse/clickhouse-go/v2
After installation, run go mod tidy to clean up your dependencies and lock the driver version in your go.mod file.
Alternative database/sql wrapper
The database/sql wrapper lets you use ClickHouse through Go's standard sql.DB interface. This works well when you're migrating existing code or when you want to swap database backends without changing application logic.
The wrapper sits on top of the native driver but adds an abstraction layer, which introduces slight performance overhead. You'll typically choose this when standardization across multiple database types matters more than raw query speed. For Python developers, we also have a comprehensive guide on connecting to ClickHouse with Python.
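As a sketch of what the wrapper looks like in practice, assuming the clickhouse-go v2 OpenDB helper and a local server on the default native port:

package main

import (
	"database/sql"
	"log"

	"github.com/ClickHouse/clickhouse-go/v2"
)

func openStd() *sql.DB {
	// OpenDB wraps the native driver behind Go's standard sql.DB interface.
	db := clickhouse.OpenDB(&clickhouse.Options{
		Addr: []string{"localhost:9000"},
		Auth: clickhouse.Auth{Database: "default", Username: "default"},
	})
	db.SetMaxOpenConns(10) // pool limits live on sql.DB, not in Options
	return db
}

func main() {
	db := openStd()
	defer db.Close()

	var tables uint64
	// Any code written against database/sql works unchanged.
	if err := db.QueryRow("SELECT count() FROM system.tables").Scan(&tables); err != nil {
		log.Fatal(err)
	}
	log.Printf("system.tables has %d rows", tables)
}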
Install with go get and go mod tidy
Start by initializing a Go module if you haven't already:
go mod init your-project-name
go get github.com/ClickHouse/clickhouse-go/v2
go mod tidy
The go mod tidy command removes unused dependencies and adds missing ones based on your imports. This keeps your dependency graph clean and reproducible across different environments.
Step-by-step code to connect and query ClickHouse
Here's a complete working example that connects to ClickHouse, verifies the connection, and runs a basic query. You can copy this code and run it after adjusting the connection string for your environment.
1. Open a connection with DSN
The Data Source Name (DSN) tells the driver how to connect to your ClickHouse server. For a local instance, the DSN looks like clickhouse://localhost:9000, which the driver can parse with clickhouse.ParseDSN; the example below passes the same details directly through the clickhouse.Options struct. Cloud instances require authentication credentials.
package main
import (
"context"
"fmt"
"log"
"time"
"github.com/ClickHouse/clickhouse-go/v2"
)
func main() {
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"localhost:9000"},
Auth: clickhouse.Auth{
Database: "default",
Username: "default",
Password: "",
},
DialTimeout: 5 * time.Second,
})
if err != nil {
log.Fatal(err)
}
defer conn.Close()

// The snippets in the following steps run here, inside main;
// they use the context and fmt imports declared above.
}
For Tinybird or ClickHouse Cloud, replace localhost:9000 with your cluster hostname and add your credentials to the Auth struct. The DialTimeout prevents your application from hanging if the server is unreachable.
2. Ping and verify server version
After opening a connection, ping the server to catch configuration errors early. This confirms the connection works before you attempt more complex operations.
ctx := context.Background()
if err := conn.Ping(ctx); err != nil {
log.Fatal("Failed to ping ClickHouse:", err)
}
var version string
if err := conn.QueryRow(ctx, "SELECT version()").Scan(&version); err != nil {
log.Fatal("Failed to query version:", err)
}
fmt.Printf("Connected to ClickHouse version: %s\n", version)
The Ping method sends a lightweight request to confirm the server responds. Querying SELECT version() returns the ClickHouse server version, which helps debug compatibility issues if certain SQL features don't work as expected.
3. Select rows with context timeout
Use Go's context package to set query timeouts. The context enables graceful cancellation when users navigate away or when your application shuts down.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
rows, err := conn.Query(ctx, "SELECT name, value FROM system.settings LIMIT 5")
if err != nil {
log.Fatal("Query failed:", err)
}
defer rows.Close()
for rows.Next() {
var name, value string
if err := rows.Scan(&name, &value); err != nil {
log.Fatal("Scan failed:", err)
}
fmt.Printf("%s: %s\n", name, value)
}
if err := rows.Err(); err != nil {
log.Fatal("Row iteration failed:", err)
}
This example queries the system.settings table, which contains ClickHouse configuration parameters. Always call defer rows.Close() to release database resources even if an error occurs during iteration, and check rows.Err() after the loop to catch errors that cut iteration short.
4. Scan results into structs
Instead of scanning into individual variables, you can map query results directly to Go structs. This reduces boilerplate and makes your code easier to maintain as schemas evolve.
type Setting struct {
Name string `ch:"name"`
Value string `ch:"value"`
}
var settings []Setting
rows, err := conn.Query(ctx, "SELECT name, value FROM system.settings LIMIT 5")
if err != nil {
log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
var s Setting
if err := rows.ScanStruct(&s); err != nil {
log.Fatal(err)
}
settings = append(settings, s)
}
The ch struct tags tell the driver which ClickHouse column maps to each struct field. If your Go field names match the column names exactly (case-insensitive), you can omit the tags.
Insert data efficiently with batch and streaming writes
Most applications need to write data to ClickHouse, not just read from it. The driver provides multiple insertion methods optimized for different throughput requirements.
Batch insert with prepared statements
The PrepareBatch method groups multiple rows into a single network request. This significantly improves throughput compared to individual inserts and works well for periodic uploads or ETL jobs.
batch, err := conn.PrepareBatch(ctx, "INSERT INTO events (timestamp, user_id, event_type)")
if err != nil {
log.Fatal(err)
}
for i := 0; i < 1000; i++ {
err := batch.Append(
time.Now(),
fmt.Sprintf("user_%d", i),
"page_view",
)
if err != nil {
log.Fatal(err)
}
}
if err := batch.Send(); err != nil {
log.Fatal(err)
}
Each Append call adds a row to the batch buffer in memory. The Send method transmits all buffered rows to ClickHouse in one operation, which reduces network overhead and improves write throughput.
HTTP streaming with buffered writer
For extremely high-volume ingestion, the HTTP protocol with streaming can sometimes outperform the native protocol. This method keeps an HTTP connection open and continuously streams rows without waiting for per-row acknowledgments.
// The HTTP protocol is only available through the database/sql interface,
// so open it with OpenDB rather than Open.
db := clickhouse.OpenDB(&clickhouse.Options{
	Addr:     []string{"localhost:8123"},
	Protocol: clickhouse.HTTP,
})
The HTTP protocol uses port 8123 by default instead of 9000. While streaming HTTP inserts can achieve high throughput, they trade off immediate error feedback, since rows are buffered client-side and errors only surface once the request reaches the server.
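If you go through the database/sql interface as shown above, batched inserts use a transaction-scoped prepared statement instead of PrepareBatch. A minimal sketch against the same hypothetical events table (it reuses the time and fmt imports from earlier):

tx, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
// The "transaction" only scopes the batch; ClickHouse has no rollback semantics.
stmt, err := tx.Prepare("INSERT INTO events (timestamp, user_id, event_type)")
if err != nil {
	log.Fatal(err)
}
for i := 0; i < 1000; i++ {
	if _, err := stmt.Exec(time.Now(), fmt.Sprintf("user_%d", i), "page_view"); err != nil {
		log.Fatal(err)
	}
}
// Commit flushes the buffered rows to ClickHouse in one request.
if err := tx.Commit(); err != nil {
	log.Fatal(err)
}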
Handling nullable and array columns
ClickHouse supports nullable columns and array types, which require special handling in Go:
- Nullable columns: Use pointer types in your Go structs. Pass a nil pointer to insert NULL values.
- Array columns: Use Go slices. Pass an empty slice []string{} to insert an empty array instead of NULL.
type Event struct {
	Timestamp time.Time `ch:"timestamp"`
	UserID    string    `ch:"user_id"`
	Tags      []string  `ch:"tags"`     // Array(String) in ClickHouse
	Metadata  *string   `ch:"metadata"` // Nullable(String) in ClickHouse
}
batch, err := conn.PrepareBatch(ctx, "INSERT INTO events")
if err != nil {
	log.Fatal(err)
}
metadata := "sample metadata"
if err := batch.AppendStruct(&Event{
	Timestamp: time.Now(),
	UserID:    "user_123",
	Tags:      []string{"web", "mobile"},
	Metadata:  &metadata,
}); err != nil {
	log.Fatal(err)
}
if err := batch.Send(); err != nil {
	log.Fatal(err)
}
Secure connections with TLS, auth tokens, and environment variables
Production deployments require encrypted connections and secure credential management. The driver supports multiple authentication and encryption options.
Configure TLS certificates
Enable TLS by adding the TLS option to your connection configuration. For self-signed certificates or custom certificate authorities, provide the CA certificate.
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"secure.clickhouse.example.com:9440"},
Auth: clickhouse.Auth{
Database: "default",
Username: "user",
Password: "password",
},
TLS: &tls.Config{
InsecureSkipVerify: false,
},
})
Set InsecureSkipVerify: false in production to validate the server's certificate against trusted certificate authorities. Port 9440 is the conventional secure port for the ClickHouse native protocol, while 8443 is used for HTTPS.
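If the server presents a certificate signed by your own CA, one approach is to load that CA into the tls.Config. This is a sketch; the certificate path is a placeholder:

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

// Hypothetical path to your CA certificate in PEM format.
caCert, err := os.ReadFile("/etc/ssl/clickhouse-ca.pem")
if err != nil {
	log.Fatal(err)
}
pool := x509.NewCertPool()
if !pool.AppendCertsFromPEM(caCert) {
	log.Fatal("failed to parse CA certificate")
}

conn, err := clickhouse.Open(&clickhouse.Options{
	Addr: []string{"secure.clickhouse.example.com:9440"},
	Auth: clickhouse.Auth{Database: "default", Username: "user", Password: "password"},
	TLS:  &tls.Config{RootCAs: pool},
})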
Read creds from env or Vault
Avoid hardcoding credentials in source code. Read them from environment variables or a secrets management system like HashiCorp Vault instead.
import "os"
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{os.Getenv("CLICKHOUSE_HOST")},
Auth: clickhouse.Auth{
Database: os.Getenv("CLICKHOUSE_DB"),
Username: os.Getenv("CLICKHOUSE_USER"),
Password: os.Getenv("CLICKHOUSE_PASSWORD"),
},
})
This pattern lets you change credentials without rebuilding your application. For Kubernetes deployments, mount secrets as environment variables or files that your application reads at startup.
Rotate tokens without redeploy
For applications that run continuously, implement a credential refresh mechanism that reloads secrets periodically:
import "github.com/ClickHouse/clickhouse-go/v2/lib/driver"

func getConnection() (driver.Conn, error) {
password := readPasswordFromVault() // Your secret fetching logic
return clickhouse.Open(&clickhouse.Options{
Addr: []string{os.Getenv("CLICKHOUSE_HOST")},
Auth: clickhouse.Auth{
Password: password,
},
})
}
Call getConnection() each time you need a fresh connection rather than reusing a single global connection. The driver's connection pool handles the underlying connection lifecycle efficiently.
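One way to make rotation periodic is a background goroutine that rebuilds the connection on a timer and swaps it behind a lock. This sketch reuses getConnection() from above; the interval, the sync.RWMutex, and the refreshLoop name are illustrative choices, not driver features (it also needs the sync package in addition to the imports already shown):

var (
	connMu sync.RWMutex
	active driver.Conn
)

// refreshLoop replaces the shared connection on a timer so rotated
// credentials take effect without a redeploy.
func refreshLoop(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			fresh, err := getConnection()
			if err != nil {
				log.Println("credential refresh failed:", err)
				continue // keep using the previous connection
			}
			connMu.Lock()
			old := active
			active = fresh
			connMu.Unlock()
			if old != nil {
				old.Close()
			}
		}
	}
}

Readers of active take connMu.RLock() before running queries, and the context passed to refreshLoop lets the loop stop cleanly at shutdown.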
Optimize performance for high-concurrency Go services
Production systems serving many concurrent users require careful tuning of connection pools and query settings. The driver exposes several parameters that affect throughput and latency under load.
Connection pooling settings
The MaxOpenConns parameter controls how many simultaneous connections your application can open to ClickHouse. Set this based on your expected concurrent query load and available database resources.
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"localhost:9000"},
MaxOpenConns: 20,
MaxIdleConns: 10,
ConnMaxLifetime: time.Hour,
})
MaxIdleConns determines how many idle connections stay open between requests. Keeping some connections idle reduces the latency of subsequent queries since establishing new connections takes time.
ConnMaxLifetime forces connections to close after a certain duration, which helps recover from network issues. A value between 30 minutes and several hours typically works well.
Compression and query settings
Enable compression to reduce network bandwidth, especially when transferring large result sets:
- LZ4 compression: Fast compression with moderate ratio, good for most use cases
- ZSTD compression: Higher compression ratio but more CPU intensive
- Query-level settings: Pass ClickHouse configuration like max_execution_time to prevent runaway queries
conn, err := clickhouse.Open(&clickhouse.Options{
Addr: []string{"localhost:9000"},
Compression: &clickhouse.Compression{
Method: clickhouse.CompressionLZ4,
},
Settings: clickhouse.Settings{
"max_execution_time": 60,
},
})
Observability with OpenTelemetry
Instrument your ClickHouse queries with OpenTelemetry to track query latency, error rates, and throughput in production. The driver can attach an OpenTelemetry span context to individual queries through its context helpers, so the trace propagates to the ClickHouse server.
import "go.opentelemetry.io/otel/trace"

// Attach the active span's context to the query; ClickHouse records its own
// spans for the query (see system.opentelemetry_span_log) under the same trace.
queryCtx := clickhouse.Context(ctx, clickhouse.WithSpan(trace.SpanContextFromContext(ctx)))

var total uint64
if err := conn.QueryRow(queryCtx, "SELECT count() FROM events").Scan(&total); err != nil {
	log.Fatal(err)
}
Once configured, the trace context travels with each query, so client-side spans and ClickHouse's server-side query spans line up in your observability platform. This helps identify slow queries and understand how database performance affects overall application latency.
Run local tests with Docker or Tinybird CLI
Developing against a local ClickHouse instance lets you test queries and schema changes without affecting production data. Two common approaches are running ClickHouse in Docker or using Tinybird's local development environment.
docker-compose for single node
Create a docker-compose.yml file to run ClickHouse locally with persistent storage, using the official ClickHouse server image, which has over 100 million Docker pulls:
version: '3.8'
services:
  clickhouse:
    image: clickhouse/clickhouse-server:latest
    ports:
      - "9000:9000"
      - "8123:8123"
    volumes:
      - ./data:/var/lib/clickhouse
Run docker-compose up -d to start the server in the background. Your Go application can connect to localhost:9000 just like it would connect to a production cluster.
Mock data generators for benchmarks
For performance testing, generate realistic data volumes locally. ClickHouse ships with a generateRandom table function that creates synthetic data, which you can run from clickhouse-client:
INSERT INTO events SELECT * FROM generateRandom(
'timestamp DateTime, user_id String, event_type String',
1, 10, 2
) LIMIT 1000000;
This creates one million rows of random data matching your schema. The second argument to generateRandom is the random seed: change it to produce a different dataset, or keep it fixed so benchmark runs stay reproducible.
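With the table populated, you can sanity-check query latency from Go before running heavier benchmarks. This is a quick sketch that reuses conn and ctx from earlier; the uniqExact aggregate is just an example query:

start := time.Now()
var users uint64
// uniqExact forces a full scan of the generated data, which makes it a
// reasonable smoke test for local benchmarking.
if err := conn.QueryRow(ctx, "SELECT uniqExact(user_id) FROM events").Scan(&users); err != nil {
	log.Fatal(err)
}
log.Printf("scanned 1M rows, %d distinct users, in %s", users, time.Since(start))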
Deploy ClickHouse without managing infrastructure
Once you've tested locally, you'll need to deploy your ClickHouse database to production. Managing ClickHouse clusters requires expertise in distributed systems, replication, and performance tuning.
Tinybird provides a managed ClickHouse service that handles infrastructure so you can focus on building features. The platform includes automatic scaling, backup management, and monitoring built in.
Deploy queries as APIs with tb deploy
Tinybird lets you define your entire data pipeline as code using .datasource and .pipe files. Deploy everything with a single command:
tb --cloud deploy
This command creates ClickHouse tables, materialized views, and API endpoints in your Tinybird workspace. The deployment is atomic, so either all changes apply successfully or none do.
Create parameterized API endpoints
Instead of connecting directly to ClickHouse from your application, Tinybird generates REST API endpoints from your SQL queries. This adds authentication, rate limiting, and caching without additional code.
DESCRIPTION >
    Get events by user

TOKEN events_endpoint READ

NODE events_by_user
SQL >
    %
    SELECT timestamp, event_type
    FROM events
    WHERE user_id = {{String(user_id)}}
    ORDER BY timestamp DESC
    LIMIT {{Int32(limit, 100)}}

TYPE endpoint
The {{String(user_id)}} syntax creates a required query parameter. {{Int32(limit, 100)}} creates an optional parameter with a default value of 100. After deployment, your Go application calls the API instead of querying ClickHouse directly.
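Calling the deployed endpoint from Go is then plain HTTP. A sketch, assuming the pipe is named events_by_user, the workspace lives on the default api.tinybird.co host, and a read token is available in the TINYBIRD_TOKEN environment variable:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"os"
)

func main() {
	// Pipe endpoints are served as JSON at /v0/pipes/<pipe_name>.json.
	endpoint := "https://api.tinybird.co/v0/pipes/events_by_user.json"

	params := url.Values{}
	params.Set("user_id", "user_123") // required parameter from the pipe
	params.Set("limit", "50")         // optional, defaults to 100
	params.Set("token", os.Getenv("TINYBIRD_TOKEN"))

	resp, err := http.Get(endpoint + "?" + params.Encode())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // JSON payload with "meta" and "data" fields
}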
Sign up for a free Tinybird plan
Tinybird offers a free tier that includes 10 GB of storage and 1,000 API requests per month. Sign up for a free account to start building without entering payment information.
The free tier includes all core features like data sources, pipes, and API endpoints. You can upgrade to a paid plan later as your data volume and request rate grow.
FAQs about Go and ClickHouse integration
Does the clickhouse-go driver support context cancellation?
Yes, the official driver supports Go context for query cancellation and timeouts. Use context.WithTimeout to prevent long-running queries from blocking your application, and the driver will send a cancellation request to ClickHouse when the context expires.
How do I handle ClickHouse schema migrations in Go applications?
ClickHouse doesn't have built-in migrations, but you can execute DDL statements through the driver. Consider using a migration library like golang-migrate with ClickHouse support, or write custom migration scripts that your application runs at startup.
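A minimal sketch of the startup-script approach, reusing the conn and ctx from earlier; the table and column names are illustrative:

// Apply DDL statements in order; IF NOT EXISTS keeps reruns idempotent.
migrations := []string{
	`CREATE TABLE IF NOT EXISTS events (
		timestamp  DateTime,
		user_id    String,
		event_type String
	) ENGINE = MergeTree ORDER BY (user_id, timestamp)`,
	`ALTER TABLE events ADD COLUMN IF NOT EXISTS country String DEFAULT ''`,
}

for _, ddl := range migrations {
	if err := conn.Exec(ctx, ddl); err != nil {
		log.Fatal("migration failed: ", err)
	}
}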
Can I use the standard Go database/sql interface with ClickHouse?
Yes, there's a database/sql wrapper for clickhouse-go that provides the familiar sql.DB interface. This allows compatibility with existing Go database tooling and makes it easier to switch between different database backends.
What connection pool limits work well for ClickHouse in Go?
Start with MaxOpenConns set to your expected concurrent query load and MaxIdleConns around half that value. A typical web application might use 20 open connections and 10 idle connections. Monitor connection usage and adjust based on your query patterns.
Does Tinybird support the same ClickHouse SQL syntax as open-source ClickHouse?
Tinybird uses ClickHouse as the underlying engine, so it supports the same SQL syntax and functions. You can migrate existing ClickHouse queries without modification, and new ClickHouse features become available in Tinybird as they're released upstream.