Build natural language filters for real-time analytics dashboards

Click-to-filter is out. Prompt-to-filter is in. Learn how to ditch the filter sidebars and dropdowns and replace them with a single user text input and an LLM.
AI x Data
Cameron Archer, Tech Writer

If you have a real-time dashboard in your application or plan on building one, you can improve it with LLMs. There are a ton of ways to add AI features to your real-time dashboards, but here, I'm going to focus on filtering.

You know what a dashboard filter is: the little pills, checkboxes, and dropdowns you can click to filter the results. A proper real-time dashboard will update to show filtered results almost immediately.

But let's say you have a lot of filter dimensions. Sidebars and filter drawers get clunky in this case. Better to just have a single text input: pass the input to an LLM and have it generate the filters for you.

Here's how you build that, step-by-step:

Context, data, and prerequisites

Before I dive into the implementation, let's set the context. We are going to build a dashboard filter component that:

  • Uses an LLM to parse free-text user input and apply filters to a real-time dashboard
  • Refreshes the dashboard almost immediately
  • Keeps filtering fast even when the underlying dataset grows very large
  • Handles large sets of dimensions with high cardinality

For this tutorial, I'm riffing on this open source LLM Performance Tracker template by Tinybird, which includes a natural language filter feature (see the video above).

The underlying data for this dashboard has the following schema:
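
The exact schema lives in the template's llm_events data source; here's a trimmed sketch of it (column names are representative, not a verbatim copy of the template):

```
DESCRIPTION >
    LLM call events powering the dashboard (illustrative sketch)

SCHEMA >
    `timestamp` DateTime,
    `organization` String,
    `project` String,
    `environment` String,
    `provider` String,
    `model` String,
    `prompt_tokens` UInt32,
    `completion_tokens` UInt32,
    `total_tokens` UInt32,
    `duration` Float64,
    `cost` Float64,
    `exception` String

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp, organization, project"
```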

You can see it's storing a bunch of performance metrics and metadata for LLM call events.

The live demo allows you to select values for the following filter dimensions:

  • model
  • provider
  • organization
  • project
  • environment

When you click a specific model, for example, the dashboard will update to only show metrics for that model.

Prerequisites

I'm going to assume that you already have a dashboard you want to filter, so you can apply these steps generally to your use case. If you want to create a quick data project to follow along, use these commands to bootstrap something quick with Tinybird:
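
The original snippet isn't reproduced here, but it boils down to something like this with the Tinybird CLI (treat the exact flags and the generated data source name as assumptions, and check tb --help for your CLI version):

```sh
# Install the Tinybird CLI, authenticate, and start the local container
curl https://tinybird.co | sh
tb login
tb local start

# Scaffold a project from a prompt, deploy it locally, and generate mock rows to test with
tb create --prompt "an llm_events data source and a time series endpoint for LLM call metrics"
tb deploy
tb mock llm_events --rows 100000
```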

That will deploy a basic Tinybird datasource and API endpoint on your local machine with 100,000 rows of data for testing.

Now, let's see how to replace "click-to-filter" with "prompt-to-filter"...

Step 1. Review your API

I'm assuming that you have an API route for your real-time dashboard that can accept various parameters to request filtered data to visualize in the dashboard. Something like this:
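
For instance, a hedged sketch of the kind of request I mean, against a Tinybird pipe endpoint (the pipe name llm_usage and the parameters are illustrative):

```typescript
// Every query parameter on the endpoint is an optional filter
const res = await fetch(
  "https://api.tinybird.co/v0/pipes/llm_usage.json?model=gpt-4o&environment=production",
  { headers: { Authorization: `Bearer ${process.env.TINYBIRD_TOKEN}` } }
);
const { data } = await res.json(); // rows for the dashboard charts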

In Tinybird, for example, any SQL pipe you build is automatically deployed as a REST endpoint with optional query parameters.

My Tinybird API definition looks like this:
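
The real pipe lives in the template; this is a hedged sketch that matches the summary below (column names and aggregations are illustrative):

```
NODE timeseries
SQL >
    %
    SELECT
        toDate(timestamp) AS date,
        {{ column(column, 'model') }} AS category,
        countIf(exception != '') AS errors,
        sum(total_tokens) AS total_tokens,
        sum(completion_tokens) AS completion_tokens,
        avg(duration) AS avg_duration,
        sum(cost) AS total_cost
    FROM llm_events
    WHERE 1 = 1
        {% if defined(organization) %} AND organization = {{ String(organization) }} {% end %}
        {% if defined(project) %} AND project = {{ String(project) }} {% end %}
        {% if defined(environment) %} AND environment = {{ String(environment) }} {% end %}
        {% if defined(provider) %} AND provider = {{ String(provider) }} {% end %}
        {% if defined(model) %} AND model = {{ String(model) }} {% end %}
    GROUP BY date, category
    ORDER BY date

TYPE ENDPOINT
```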

A quick summary of this API:

  • It uses Tinybird's pipe syntax, defining a single SQL node that selects from the llm_events table.
  • It returns time series aggregations, grouped by date and category, of various LLM call metrics such as errors, total tokens, completion tokens, duration, and cost.
  • It accepts a column parameter that defines the grouping category (e.g., model or provider).
  • It accepts several filter parameters (e.g., organization, project, model), which are conditionally applied in the WHERE clause if they are passed.
  • These parameters are defined using Tinybird's templating language.

So I can pass a value for any of these filter parameters, and Tinybird will query the database for data that matches those filters and return the response as a JSON payload that I can use to hydrate my chart.

In the past, I'd create a UI component in my dashboard to allow a user to select those filters. Here, we're using AI.

Step 2. Create an LLM filter API route

To start building your natural language filter, you need a POST route handler to accept the user prompt and return structured filter parameters.

The API route should implement the following logic (a minimal skeleton follows the list):

  • Accepts a JSON payload with prompt and (optionally) apiKey fields (if you want the user to supply their own AI API key)
  • Fetches the available dimensions for filtering
  • Defines a system prompt that guides the LLM in producing structured parameters for the response
  • Queries an LLM client with the API key, system prompt, and user prompt
  • Returns the LLM response (which should be a structured filter object as JSON)
  • Handles errors, of course
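
As a rough sketch of that shape (a Next.js route handler is assumed, since the template is a Next.js app; fetchDimensions, buildSystemPrompt, and askLLM are the hypothetical helpers built out in the next steps):

```typescript
// app/search/route.ts — skeleton of the filter-parsing route
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  try {
    const { prompt, apiKey } = await req.json();
    if (!prompt) {
      return NextResponse.json({ error: "Missing prompt" }, { status: 400 });
    }

    const dimensions = await fetchDimensions();               // Step 3: available filter values
    const system = buildSystemPrompt(dimensions);             // Step 3: system prompt
    const filters = await askLLM({ apiKey, system, prompt }); // Step 4: LLM client

    return NextResponse.json(filters); // e.g. { model: "gpt-4o", environment: "production" }
  } catch {
    return NextResponse.json({ error: "Failed to parse filters" }, { status: 500 });
  }
}
```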

If you want to see a full implementation of such an API route, just look at this. If you want step-by-step guidance, follow along.

Step 3. Define the system prompt

Perhaps the most important part of this is creating a good system prompt for the LLM. The goal is to have an LLM client that will accept user input and consistently output structured query parameters to pass to your dashboard API.

Here's a simple but effective system prompt example:
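
The original prompt isn't reproduced here, but a minimal version along these lines works well (the allowed keys match the five filter dimensions listed above):

```typescript
const SYSTEM_PROMPT = `
You convert a user's natural language request into filters for an analytics dashboard.
Respond ONLY with a JSON object. Allowed keys: model, provider, organization, project, environment.
Include a key only if the user clearly asked to filter on it. Do not invent values.
Example: "openai errors in production" -> {"provider": "openai", "environment": "production"}
`;
```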

You could further extend this system prompt by passing available dimensions and example values. To make this work, you can query the underlying data. A Tinybird API works well for this:
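
Again as a sketch (the template ships its own version of this), a small pipe that collects the unique values per dimension might look like:

```
NODE dimensions
SQL >
    SELECT
        groupUniqArray(model) AS model,
        groupUniqArray(provider) AS provider,
        groupUniqArray(organization) AS organization,
        groupUniqArray(project) AS project,
        groupUniqArray(environment) AS environment
    FROM llm_events
    WHERE timestamp >= now() - INTERVAL 1 MONTH

TYPE ENDPOINT
```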

This queries the underlying dataset (latest month of data) and returns an array of possible values for each of the five filter dimensions defined in the API.

This API can be used to show the LLM what is available.

You could create a little utility to fetch the dimensions and unique values:
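
A hedged TypeScript sketch (the pipe name llm_dimensions and the environment variable names are assumptions):

```typescript
// Fetch the unique values for each filter dimension from the Tinybird endpoint above
export async function fetchDimensions(): Promise<Record<string, string[]>> {
  const host = process.env.TINYBIRD_HOST ?? "https://api.tinybird.co";
  const res = await fetch(`${host}/v0/pipes/llm_dimensions.json`, {
    headers: { Authorization: `Bearer ${process.env.TINYBIRD_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Tinybird request failed: ${res.status}`);
  const { data } = await res.json();
  // The pipe returns a single row whose columns are arrays of unique values
  return data[0];
}
```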

And then call that to define the system prompt dynamically:
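
For example, building on the static prompt above:

```typescript
export function buildSystemPrompt(dimensions: Record<string, string[]>): string {
  // Turn { model: [...], provider: [...] } into a plain-text catalog for the LLM
  const catalog = Object.entries(dimensions)
    .map(([name, values]) => `${name}: ${values.join(", ")}`)
    .join("\n");

  return `${SYSTEM_PROMPT}\nOnly use values from this list:\n${catalog}`;
}

// In the route handler:
// const system = buildSystemPrompt(await fetchDimensions());
```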

Step 4. Create the LLM client

Once you've defined a good system prompt, it's as simple as creating an LLM client in the API route and passing the system prompt + prompt.

For example:
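
Here's a sketch using the Vercel AI SDK's generateObject with a Zod schema to force structured output (the model choice and exact schema are illustrative, not necessarily what the template uses):

```typescript
import { generateObject } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { z } from "zod";

// Only the five filter dimensions are allowed, and all of them are optional
const filterSchema = z.object({
  model: z.string().optional(),
  provider: z.string().optional(),
  organization: z.string().optional(),
  project: z.string().optional(),
  environment: z.string().optional(),
});

export async function askLLM({
  apiKey,
  system,
  prompt,
}: {
  apiKey?: string;
  system: string;
  prompt: string;
}) {
  const openai = createOpenAI({ apiKey: apiKey ?? process.env.OPENAI_API_KEY });

  const { object } = await generateObject({
    model: openai("gpt-4o-mini"),
    schema: filterSchema,
    system,
    prompt,
  });

  return object; // e.g. { model: "gpt-4o", environment: "production" }
}
```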

Step 5. Capture and pass the user prompt

I'm not going to share how to build a UI input component to capture the user prompt. It's 2025, and any LLM can 1-shot that component for you.

But the idea is that your API route receives the prompt when the user submits the input.

For example, here's a basic way to call the LLM filter API route (/search) within a function triggered by an Enter key event handler:
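
A minimal sketch (React is assumed, matching the template's Next.js frontend):

```typescript
// Called from an onKeyDown handler when the user presses Enter in the filter input
async function handleSearch(prompt: string) {
  const res = await fetch("/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  if (!res.ok) throw new Error("Filter request failed");
  return res.json(); // the structured filters, e.g. { model: "gpt-4o" }
}
```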

Step 6. Update the filters based on the API response

After you've passed your user input to the LLM and gotten a response from the API route, you just need to fetch your dashboard API with the new set of filter parameters.

For example, taking the response from the above handleSearch function:
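
One way to do that in a Next.js client component (a sketch using the hooks from next/navigation):

```typescript
"use client";
import { useRouter, usePathname, useSearchParams } from "next/navigation";

function useApplyFilters() {
  const router = useRouter();
  const pathname = usePathname();
  const searchParams = useSearchParams();

  return (filters: Record<string, string>) => {
    // Merge the LLM-generated filters into the current URL search params
    const params = new URLSearchParams(searchParams.toString());
    Object.entries(filters).forEach(([key, value]) => {
      if (value) params.set(key, value);
      else params.delete(key);
    });
    router.push(`${pathname}?${params.toString()}`);
  };
}
```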

In this case, we add the new filter params to the URL of the dashboard and use the useSearchParams hook in the chart components, updating each chart with the applied search params.

Step 7. Test it

So far, we have:

  1. Created an API route that accepts a user input, passes it to an LLM with a system prompt, and returns a structured filter JSON
  2. Added a user input component that passes the prompt to the API route
  3. Updated the filter parameters in the URL search params based on the API response

So, looking back at the data model, let's imagine we used the following text input:
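
For instance, an invented input along these lines:

```
show me gpt-4o errors in the production environment
```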

The search API should return something like this:
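
Given that input, a structured filter object like this (values follow the example above):

```json
{ "model": "gpt-4o", "environment": "production" }
```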

Which would update the URL of the dashboard to:
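
Something like (the dashboard path is illustrative):

```
/dashboard?model=gpt-4o&environment=production
```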

Which would trigger a new fetch of the Tinybird API for our time series chart:
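
Using the pipe sketched earlier, that fetch would look roughly like:

```
https://api.tinybird.co/v0/pipes/llm_usage.json?column=model&model=gpt-4o&environment=production&token=<READ_TOKEN>
```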

Giving us an API response that looks something like this:
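
The exact payload depends on your pipe, but the general shape of a Tinybird response looks like this (the numbers are invented, chosen to line up with the statistics discussed below):

```json
{
  "meta": [
    { "name": "date", "type": "Date" },
    { "name": "category", "type": "String" },
    { "name": "errors", "type": "UInt64" },
    { "name": "total_tokens", "type": "UInt64" },
    { "name": "total_cost", "type": "Float64" }
  ],
  "data": [
    { "date": "2025-07-01", "category": "gpt-4o", "errors": 2, "total_tokens": 184520, "total_cost": 1.84 },
    { "date": "2025-07-02", "category": "gpt-4o", "errors": 0, "total_tokens": 201332, "total_cost": 2.01 }
  ],
  "rows": 2,
  "statistics": { "elapsed": 0.007, "rows_read": 5120, "bytes_read": 163840 }
}
```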

Which we can use to hydrate the chart. Boom.

Performance

A real-time dashboard should filter quickly. With a typical click-to-filter approach, the only latency that matters is the database query, and that part stays fast here: looking at the statistics from the Tinybird API response above, the filtered query took just 7 ms, reading about 5,000 rows.

Of course, as events grow into the millions or billions, we might expect some performance degradation there, but there are plenty of strategies in Tinybird to maintain sub-second query response times even as data becomes massive. This is the benefit of using Tinybird.

As for the LLM response, you can query the underlying Tinybird table to see how long the LLM takes to respond, on average:
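
For example (assuming a duration column in seconds, as in the schema sketch above):

```sql
SELECT
    model,
    avg(duration) AS avg_duration_s,
    quantile(0.95)(duration) AS p95_duration_s
FROM llm_events
GROUP BY model
ORDER BY avg_duration_s DESC
```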

By the way, the LLM Performance Tracker template on which I based this tutorial actually includes a filter selection to analyze your own LLM calls within the dashboard, which we can use to see this in action:

In my case, the LLM typically took under a second to respond. Taking a look at the network waterfall, I could see the actual response time of the /search API route, for example:

In this particular case, the response was under 4 seconds. To be honest, that's not ideal for a real-time dashboard, but it's something that can be difficult to control when using a remote LLM.

To further improve performance, you could consider something like WebLLM to run a small model in the browser for this simple task. Cutting out the network round trip to a remote LLM could speed things up significantly.

Conclusion

The way we search and visualize data is changing a lot thanks to AI. There are a lot of AI features you can add to your application, and a simple one I've shown here is natural language filtering of real-time analytics dashboards.

If you'd like to see a complete example implementation of natural language filtering, check out the LLM Performance Tracker by Tinybird. It's an open source template to monitor LLM usage, and it includes (as I have shown here) a feature to enable natural language filtering on LLM call data.

You can use it as a reference for your own natural language filtering project, or fork it to deploy your own LLM tracker, or just use the hosted public version if you want to track LLM usage in your application.


Alternatively, check out Dub.co, an open source shortlink platform. They have a nice "Ask AI" feature that you can use for reference. Here's the repo.


