Apr 17, 2025

Using LLMs to generate user-defined real-time data visualizations

UIs are changing. Here's how to use LLMs and real-time analytics APIs to let your users generate their own data visualizations.
AI x Data
Cameron Archer, Tech Writer

Tinybird is an analytics backend for software applications, and as LLM usage and AI features become more commonplace, developers are increasingly using Tinybird to track LLM usage, costs, and performance, both internally and in user-facing features.

We recently open sourced an app template, the LLM Performance Tracker, that includes a frontend + backend to capture LLM calls from your AI apps and analyze your LLM usage in real time.

The template is full of cool features (kudos to my coworker @alrocar), but I want to focus on one in particular because I think it's going to be the new normal for real-time data visualization.

If you check out the live demo of the app, you'll notice a button in the top right corner: AI Cost Calculator.

Clicking this button opens a modal where you can describe, in free text, how you want to visualize your LLM costs. You can see it in action in the live demo.

A quick summary of what is happening under the hood:

  1. The user input is passed to an API
  2. The API uses an LLM to generate structured parameters for a Tinybird data API
  3. The component determines what kind of chart to show by analyzing the user input
  4. The component fetches the Tinybird API with the LLM-supplied filters and hydrates the chart

Let me walk you through how we built this feature. If you're interested in building dynamic, user-generated data visualizations in your application, you can use this as inspiration.

By the way, you can find the full, original code for this feature in the open source LLM Performance Tracker repo.

The components

There are 4 core components to this feature:

  1. A Tinybird datasource called llm_events.datasource
  2. A Tinybird pipe called llm_usage.pipe
  3. A React component called CostPredictionModal.tsx
  4. An extract-cost-parameters API route

In addition, there are some utilities and services to simplify fetching the Tinybird APIs from the frontend.

Let's take a look at each of these components sequentially to understand how to create user-generated real-time data visualizations.


Storing and processing LLM calls with Tinybird

The basic primitives in Tinybird are data sources and pipes. Data sources store data; pipes transform it.

The llm_events data source in this project is designed to store time series data: LLM call events and all the metadata associated with the call.

Here's the table schema:
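(The real file is in the repo; what follows is an illustrative sketch in Tinybird datafile syntax. The column names and types are assumptions based on the article's description, not the actual llm_events.datasource.)

```
SCHEMA >
    `timestamp` DateTime64(3) `json:$.timestamp`,
    `organization` String `json:$.organization`,
    `project` String `json:$.project`,
    `environment` String `json:$.environment`,
    `provider` String `json:$.provider`,
    `model` String `json:$.model`,
    `prompt_tokens` UInt32 `json:$.prompt_tokens`,
    `completion_tokens` UInt32 `json:$.completion_tokens`,
    `cost` Float64 `json:$.cost`,
    `duration` Float64 `json:$.duration`

ENGINE "MergeTree"
ENGINE_SORTING_KEY "organization, project, timestamp"
```

Sorting by tenant columns first, then time, keeps per-tenant time range scans fast.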

The llm_usage pipe defines a SQL query to select from the llm_events table. This pipe gets deployed as an API Endpoint, with query parameters defined using the Tinybird templating language:
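(Again an illustrative sketch, not the repo's llm_usage.pipe. The node and parameter names are assumptions; the templating constructs, such as defined(), String(), and column(), are standard Tinybird syntax.)

```
TOKEN "read_pipes" READ

NODE llm_usage_node
SQL >
    %
    SELECT
        toDate(timestamp) AS date,
        {% if defined(column_name) %} {{ column(column_name, 'model') }} AS category, {% end %}
        sum(cost) AS total_cost,
        sum(prompt_tokens + completion_tokens) AS total_tokens,
        count() AS total_requests
    FROM llm_events
    WHERE 1 = 1
        {% if defined(provider) %} AND provider = {{ String(provider) }} {% end %}
        {% if defined(model) %} AND model = {{ String(model) }} {% end %}
        {% if defined(start_date) %} AND timestamp >= {{ Date(start_date) }} {% end %}
    GROUP BY date{% if defined(column_name) %}, category{% end %}
    ORDER BY date

TYPE endpoint
```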

A quick explanation of what is happening in this pipe definition:

  • Aggregates LLM usage data (e.g. cost, tokens, requests, etc.) by date and, optionally, by a specified category (e.g. model)
  • Is secured by a read_pipes token
  • Includes dynamic filtering for optionally supplied parameters like model, organization, project, environment, etc.

Once deployed (tb --cloud deploy), we can access this API via HTTP and supply parameters in the URL, for example:
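Hypothetically, a request might look like this (the host is Tinybird's default API host, which can vary by workspace region, and the parameter names are illustrative):

```bash
curl -G "https://api.tinybird.co/v0/pipes/llm_usage.json" \
  -H "Authorization: Bearer $TB_READ_TOKEN" \
  --data-urlencode "column_name=model" \
  --data-urlencode "provider=openai"
```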

This will return a JSON object with time series data containing all of the aggregate metrics, grouped by model and filtered to OpenAI calls.

This API is designed for scalability and speed, and should easily respond in milliseconds even as the number of LLM calls logged grows into the millions.

The time series chart in our dynamic UI fetches data from this API.

Defining an API route to generate structured parameters from user input

The extract-cost-parameters API route is the key piece of AI functionality. The LLM's job is to take the free-text user input, analyze it, and produce a set of structured parameters that can be passed to the Tinybird API.

To do that, it implements the following logic.

First, it fetches the pipe definition for the llm_usage pipe and the available dimensions (from another Tinybird API endpoint, llm_dimensions).

The available dimensions are used to define the system prompt for the LLM, so it knows which dimensions are available for filtering:
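As a sketch (the function and type names here are made up for illustration, not taken from the repo), building the system prompt from the fetched dimensions might look like:

```typescript
// Sketch: turn the dimensions returned by the llm_dimensions endpoint into a
// system prompt. buildSystemPrompt and Dimension are illustrative names.
type Dimension = { name: string; values: string[] };

export function buildSystemPrompt(dimensions: Dimension[]): string {
  // one line per dimension so the LLM knows exactly what it can filter on
  const catalog = dimensions
    .map((d) => `- ${d.name}: ${d.values.join(", ")}`)
    .join("\n");
  return [
    "You extract structured filter parameters for an LLM cost API.",
    "Only use the following dimensions and values:",
    catalog,
    'Respond with JSON such as { "column_name": "model", "provider": "openai" }.',
  ].join("\n");
}
```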

Then the request is made to the LLM provider (in this case OpenAI, using the gpt-3.5-turbo model):
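A rough sketch of that call with the Vercel AI SDK follows; the schema fields, systemPrompt, and userInput are assumptions for illustration, and the repo additionally wraps the model so the call itself gets logged to Tinybird:

```typescript
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

declare const systemPrompt: string; // built from the available dimensions
declare const userInput: string;    // free text from the modal

// Illustrative parameter schema; the repo's actual fields may differ
const parametersSchema = z.object({
  column_name: z.string().describe("dimension to group by, e.g. model"),
  provider: z.string().optional(),
  model: z.string().optional(),
  start_date: z.string().optional(),
});

const { object } = await generateObject({
  model: openai("gpt-3.5-turbo"),
  schema: parametersSchema,
  system: systemPrompt,
  prompt: userInput,
});
```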

Note that we're using a wrapped model; this is how we instrument the Vercel AI SDK to send LLM call events to Tinybird. (So we're using this app to analyze LLM calls while also analyzing the calls made by this app itself.)

Finally, the backend does some type checking and applies defaults for missing parameters, returning the structured parameters in the API response:
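A minimal sketch of that validation step (the field names and default values are assumptions, not the repo's actual route code):

```typescript
// Sketch: validate the LLM's raw JSON output and fill defaults before the
// extract-cost-parameters route returns it.
type CostParameters = {
  column_name: string; // dimension to group by
  provider?: string;
  model?: string;
  start_date: string; // YYYY-MM-DD
};

export function withDefaults(raw: Record<string, unknown>): CostParameters {
  // accept only non-empty strings; anything else falls back to the default
  const str = (v: unknown): string | undefined =>
    typeof v === "string" && v.length > 0 ? v : undefined;
  const thirtyDaysAgo = new Date(Date.now() - 30 * 864e5).toISOString().slice(0, 10);
  return {
    column_name: str(raw.column_name) ?? "model",
    provider: str(raw.provider),
    model: str(raw.model),
    start_date: str(raw.start_date) ?? thirtyDaysAgo,
  };
}
```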

Gathering user input and displaying the chart in the UI

The core UI component is CostPredictionModal.tsx, which handles receiving user input, getting structured parameters from the LLM, fetching data from Tinybird with the structured parameters, and defining the type of chart to use based on the query.

First, the component captures the user's free-text input from the modal.

On submit, it determines the type of query based on heuristics (this could easily be handled by an additional LLM for more complex use cases):
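For illustration, a heuristic like this is enough (the keyword list is an assumption, not the repo's actual logic):

```typescript
// Sketch: classify the user's question with simple keyword heuristics.
export function detectQueryType(input: string): "grouped" | "total" {
  const groupedHints = ["by model", "per model", "by provider", "per provider", "breakdown", "compare"];
  const q = input.toLowerCase();
  return groupedHints.some((kw) => q.includes(kw)) ? "grouped" : "total";
}
```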

And determines what type of chart to use (AreaChart vs BarChart, multiple categories, etc.) based on this analysis:
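The mapping itself can be a one-liner; this sketch assumes two chart types, named after components in the Tremor library the app uses:

```typescript
// Sketch: map the query classification to a Tremor chart component name.
export function pickChart(queryType: "grouped" | "total"): "BarChart" | "AreaChart" {
  // grouped comparisons read better as bars; a single series reads better
  // as an area chart over time
  return queryType === "grouped" ? "BarChart" : "AreaChart";
}
```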

It then passes the user input to the extract-cost-parameters API route and sets the parameters from the response.

Once it receives the parameters, it parses and cleans them, fetches the Tinybird API, and runs the calculateCosts() function on the result.

The calculateCosts() function is responsible for setting the React states for the data and categories. For example, to process non-grouped data:
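A sketch of the non-grouped case (the row and state shapes are assumptions, not the repo's actual types):

```typescript
// Sketch: reshape Tinybird API rows into chart state for the non-grouped case.
type UsageRow = { date: string; total_cost: number };
type ChartPoint = { date: string; cost: number };

export function toChartData(rows: UsageRow[]): { data: ChartPoint[]; categories: string[] } {
  const data = rows.map((r) => ({ date: r.date, cost: r.total_cost }));
  // a single "cost" series when no grouping dimension was requested
  return { data, categories: ["cost"] };
}
```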

Finally, the component renders the chart with the data/categories stored in state (using chart components from the Tremor charts library). 

The result: A dynamic chart that matches user intent

This is a relatively simple implementation of a dynamic, user-generated data visualization. It uses heuristic analysis to define the type of chart to generate, but this could easily be outsourced to LLMs as well for a more flexible/dynamic implementation.

Here are the important takeaways:

  1. We can use LLMs to generate structured data snippets from free-text input.
  2. We need a performant analytics backend (e.g. Tinybird) to parse those structured data snippets and return the data we need to visualize in real time.
  3. We can define the type of visualization to create heuristically (as we did here) or using LLMs.

This pattern opens up a bunch of possibilities for letting end users generate their own data visualizations. All we have to do is give LLMs enough contextual understanding of the underlying data to create structured filters, aggregations, and groupings.

Discussion: Why not use LLMs for everything?

In this demo, we used the LLM to take free-text user input and return structured parameters that we could pass to our real-time analytics API.

Things we didn't use LLMs for:

  1. Determining what type of chart to produce
  2. Generating a SQL query to fetch the data

Why didn't we use LLMs?

Well, for #1, we certainly could have. The use case was simple enough that it didn't seem necessary, but it could easily be augmented: extend the LLM system prompt to ask it what kind of query the user is making, and include that in the structured output. Easy.

#2 is a little more nuanced. Yes, we could ask the LLM to generate the SQL for us, and then ask the LLM to generate the chart component based on the results of the SQL.

Here's why we used a dynamic API endpoint instead:

  1. Encapsulated logic and best practices. If we're repeatedly delivering the same kind of analysis, having a somewhat static endpoint (with dynamic parameters) can both simplify development and improve performance. We can encapsulate good data engineering practices into our query, rather than relying on the LLM to produce something good.
  2. Authentication, security, and multi-tenancy. Instructing an LLM to query a raw table of multi-tenant data carries a significant security risk. What if the wrong customer's data gets exposed? We could isolate each customer's data into a separate table, but that isn't always feasible. Using an API secured by tokens/JWTs guarantees security and data privacy in multi-tenant architectures.
  3. Rate limiting. Related to the above. We can add rate limits to user tokens for the API to ensure it isn't abused.
  4. Better observability. If LLMs are generating SQL queries willy-nilly, it becomes much more challenging to monitor performance and debug. While LLM observability is getting better, this scenario would add a lot of complexity we don't want to deal with.
  5. More deterministic output and resource usage. LLMs are great, but they do not supply deterministic responses with deterministic resource usage. As a SaaS/product builder, we'd be wary of outsourcing too much functionality to LLMs, especially functionality that can consume considerable compute resources.

LLMs will get better and better at writing good SQL. For now, we're sticking with tried-and-true APIs that give us speed, simplicity, and predictability in our analysis.


Get started

If you want to see the full implementation of this feature, check out the components mentioned in the LLM Performance Tracker repo.

If you're new to Tinybird, you can sign up for free (no time limit) and create real-time LLM analysis API endpoints in a few minutes using the template.


Copyright © 2025 Tinybird. All rights reserved
