Introducing Explorations 🗺️
Chat with your data.
May 06, 2025

Building a conversational AI tool for real-time analytics

We just launched a conversational AI feature. Here's how we built the feature that lets you chat with your data.
Rafa Moreno, Frontend Engineer

We just announced Explorations, a new conversational UI in Tinybird that lets you explore and interact with your data using natural language. Instead of writing complex SQL queries or manually building dashboards, you just ask questions of your data and get answers.

How does it work?

  • You type simple, natural-language questions directly into the interface.
  • It translates your questions into contextualized SQL queries automatically.
  • The result is displayed visually as tables and charts.

Why is it useful?

Data is often large, messy, and unstructured. You have to spend a lot of time studying the schema, or running SELECT * … LIMIT 1, just to figure out its shape. Then you constantly check the SQL reference and iterate on your queries to get things right. When data is complex, this is a huge time drain.

Explorations…

  • Removes the barrier to understanding your data. Anyone can explore complex tables with natural language instead of writing complicated queries.
  • Speeds up analytics workflows significantly by getting you to a functional query much faster.
  • Combines classic Tinybird data exploration interfaces ("Playgrounds" and "Time-series") into a unified, streamlined UI.

Tech stack

Under the hood, Explorations is powered by the following technologies:

  • Next.js with App Route Handlers: We leverage Next.js app routes to stream responses incrementally. This eliminates wait time because parts of the data become immediately visible as they're retrieved.
  • Vercel AI SDK: We use the Vercel AI SDK to simplify how we handle chat state. The AI SDK allows us to subscribe easily to ongoing data streaming, ensuring a synchronized experience between client and server.
  • Vertex AI: Vertex AI simplifies model selection, allowing us to easily switch between different models and providers and choose the best one for the job. It also helps us define fallback models with similar quality in case the primary model is unavailable for some reason.
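The streaming piece can be sketched without any framework code. Below is a minimal, dependency-free version of "enqueue parts as they arrive" that a route handler could return; the route path and the `runExploration` helper in the comment are illustrative assumptions, not Tinybird's actual code.

```typescript
// Turn a sequence of text parts (sync or async) into a byte stream.
// Each part is enqueued as soon as it is available, so the client
// starts rendering before the whole response is ready.
export function toTextStream(
  parts: AsyncIterable<string> | Iterable<string>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const part of parts) {
        controller.enqueue(encoder.encode(part)); // visible to the client immediately
      }
      controller.close();
    },
  });
}

// In a route handler (hypothetical path app/api/explorations/route.ts):
// export async function POST(req: Request) {
//   return new Response(toTextStream(runExploration(await req.json())));
// }
```

On the client, the Vercel AI SDK's chat hooks subscribe to this stream and keep the UI state in sync as chunks land.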

Dealing with LLMs

Despite how common AI chatbots are, developing this conversational interface was not straightforward; it took several iterations to overcome some unique challenges.

Tool definition

An LLM needs access to many different operations to explore data in Tinybird:

  • Retrieving relevant resources (data sources, pipes, etc.) given a prompt
  • Sending queries to our SQL API to understand how the data looks
  • Creating SQL nodes
  • Generating timeseries visualizations
  • And many other smaller things (for example, creating a relevant title for the exploration)

Completing an exploration task requires careful execution of these operations in a specific order. The first LLM call acts as an orchestration layer. It has access to all these tools, and it is in charge of deciding which ones to use depending on the user's request.

Allowing LLMs to use external systems is called function calling. To help the model decide on the correct tool for the job, the system prompt explains the UX and the requirements for accessing each available tool.
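With the Vercel AI SDK, each tool would be declared with its `tool()` helper and a parameter schema; the dependency-free sketch below captures the same shape. The tool names mirror the ones mentioned in this post, but their signatures and placeholder results are assumptions.

```typescript
// A tool registry for function calling: each tool carries a description
// (which the LLM reads to pick the right one) and an execute function.
type Tool = {
  description: string;
  execute: (args: Record<string, unknown>) => unknown;
};

const tools: Record<string, Tool> = {
  executeQuery: {
    description: "Run a SQL query against the Query API to sample the data",
    execute: ({ sql }) => ({ ok: true, sql }), // placeholder result
  },
  createTimeSeriesNode: {
    description: "Create a node with a time-series query and chart axes",
    execute: ({ sql }) => ({ nodeSql: sql }), // placeholder result
  },
};

// The orchestrator LLM returns a tool name plus arguments; we dispatch them.
export function callTool(name: string, args: Record<string, unknown>): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(args);
}
```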

LLM chaining

Depending on the complexity of the task proposed by a user, a single LLM request may not be enough, so in some cases, we need to run multiple LLM calls to build upon each other's work. This technique is called LLM chaining.

Under the hood, we reuse the data stream generated for the initial LLM call, so the user receives feedback immediately on what is happening.
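The chaining idea can be reduced to a small sketch: each call consumes the previous call's output, and every step reports progress into the same sink that the first call opened. The `Step` functions here stand in for real LLM calls; the emit callback stands in for the shared data stream.

```typescript
// LLM chaining sketch: run steps sequentially, each building on the
// previous output, while emitting progress into one shared stream.
type Step = (input: string) => string;

export function runChain(
  steps: Step[],
  input: string,
  emit: (msg: string) => void
): string {
  let acc = input;
  for (const [i, step] of steps.entries()) {
    emit(`step ${i + 1} started`); // feedback reaches the user immediately
    acc = step(acc); // in reality: another LLM call seeded with acc
  }
  return acc;
}
```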

Setting context

Another important part of the Explorations workflow is having rich, structured context so the system has enough information to find the right solution. We manage three types of context:

  • Resource definitions and schemas: We provide the column names and types of every resource in the workspace so the LLM can infer the right data to use.
  • Data context: One of the available tools can execute queries to get a small sample of data and understand how it looks.
  • Workspace rules: Similar to how you can create rules files for Cursor / Windsurf, you can define your own rules to tune the Explorations experience to your specific requirements or use case.
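Assembling those three context types into a single system prompt could look like the sketch below. The shapes of `Schema`, the section headers, and the sample-size cutoff are all illustrative assumptions.

```typescript
// Build one system-prompt context string from the three context types:
// resource schemas, a small data sample, and user-defined workspace rules.
type Schema = { name: string; columns: { name: string; type: string }[] };

export function buildContext(
  schemas: Schema[],
  sampleRows: object[],
  rules: string[]
): string {
  const schemaText = schemas
    .map((s) => `${s.name}(${s.columns.map((c) => `${c.name} ${c.type}`).join(", ")})`)
    .join("\n");
  return [
    "## Resource schemas",
    schemaText,
    "## Data sample",
    JSON.stringify(sampleRows.slice(0, 5)), // keep the sample small to save tokens
    "## Workspace rules",
    ...rules,
  ].join("\n");
}
```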

A practical example

We use Tinybird to store and analyze product usage data generated by Tinybird.

We might say: "We want to know the actions that javi@tinybird.co did within the product in the last week by day in time series format."

This will trigger a new exploration and the first step in its workflow: the orchestrator layer. The orchestrator has access to all the tools available, and it will use them until the problem is solved.

The first mandatory tool to use in this case is executeQuery. This tool helps the LLM understand the data before proceeding with a final query in an Exploration node. By default, we add all data source schemas to the context, so the tool will know that there is a table called events with these fields: timestamp, event, email, and data. Once it knows the resources it can query, the tool will send requests to the Tinybird Query API.

If the result is valid, the orchestrator passes all the info to another tool called createTimeSeriesNode, which generates the final query in timeseries format with the relevant resources and extracts the axis keys to visualize it in a chart. If there is an error in the query result, it is passed back to the orchestrator LLM to be fixed and tried again. This continues until the query is valid and the final node can be created. We limit each workflow to 20 tool invocations, so if the orchestrator can't arrive at a valid node query after 20 invocations, the chat will stop. At this point, the user will see a "Fix error" button under the node, which will start a new chat workflow to fix the node query.
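The validate-and-retry loop with its hard cap can be sketched as follows. `tryBuildQuery` stands in for one round of orchestrator reasoning plus tool calls; only the 20-invocation limit comes from the post, the rest is an illustrative assumption.

```typescript
// Retry until the query validates, feeding the previous error back to
// the orchestrator each round, with a hard cap on tool invocations.
const MAX_TOOL_INVOCATIONS = 20;

export function runUntilValid(
  tryBuildQuery: (lastError?: string) => { sql: string; error?: string }
): { sql: string } | { failed: true } {
  let lastError: string | undefined;
  for (let i = 0; i < MAX_TOOL_INVOCATIONS; i++) {
    const result = tryBuildQuery(lastError); // previous error informs the fix
    if (!result.error) return { sql: result.sql };
    lastError = result.error;
  }
  return { failed: true }; // UI then offers the "Fix error" button
}
```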

While all this is happening, we render in the chat, via streaming, the reasoning that our AI system is performing. Once the final query is validated, the new node will be generated in Tinybird and rendered in the UI.



Try Explorations

Explorations is making it easier for both new and experienced Tinybird users to understand their data without wasting time formulating complex SQL. To try it out, head to cloud.tinybird.co or run tb open in the CLI, and select Explorations in the left nav.

If you're new to Tinybird, you can sign up for free here.

