---
title: "Using LLMs to generate real-time data visualizations"
excerpt: "Using LLMs to generate user-defined real-time data visualizations lets users create charts by asking. Here's how to build it."
authors: "Cameron Archer"
categories: "AI x Data"
createdOn: "2025-04-16 00:00:00"
publishedOn: "2025-04-17 00:00:00"
updatedOn: "2025-04-17 00:00:00"
status: "published"
---

<p>Tinybird is an analytics backend for software applications, and as LLM usage and AI features become more commonplace, developers are increasingly using Tinybird to track LLM usage, costs, and performance, both internally and in user-facing features.</p><p>We recently open-sourced an app template, <a href="https://www.tinybird.co/blog-posts/introducing-llm-performance-tracker"><u>the LLM Performance Tracker</u></a>, that includes a frontend + backend to capture LLM calls from your AI apps and analyze your LLM usage in real time.</p><p>The template is full of cool features (kudos to my coworker @alrocar), but I want to focus on one in particular because I think it's going to be the new normal for real-time data visualization.</p><p>If you check out the <a href="https://llm-tracker.tinybird.live"><u>live demo</u></a> of the app, you'll notice a button in the top right corner: <strong>AI Cost Calculator</strong>.</p><p>Clicking this button opens a modal where you can define how you want to visualize your LLM costs. For example:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAKEAQAAAAAAAABBKUqGk9nLKvXryMbH2LTjseB7qVM72ZFzpvFkt0QZSorKzRZyLQM5N_IN9AxufsU4XT9eRSQD75MKeHZvZVtR4JReo17UL5PIGhPZgGdp8VP4lVz-crgF4fqtdjnXMB6scBoXSY5EbRriWTDhNTqT0DjZPHNGk002IU5s7RGhQbn_LLfCKrJJgj4yeHRMWeAb2NFphA88VT1G0PpQqzb39RgsVvNxwvGSnU9o6AqoTpIHzaz7ua-O5sA4g7yuL8Mx1vCF4N2HxSBHf_2VSJNtZWdp5e8GYcwQuySELErgJq9rJqbIJwbGEnB71OzYG2oEyrJmPNe8Vl-VZLOp7ShqosNTUdlrW5-H9DJ0DRvR-S0Pf_uYHtA/embed"></iframe>
<!--kg-card-end: html-->
<p>You can see it in action here:</p><figure class="kg-card kg-video-card kg-width-regular" data-kg-thumbnail="https://tinybird-blog.ghost.io/content/media/2025/04/costcalculator-1_thumb.jpg" data-kg-custom-thumbnail="">
            <div class="kg-video-container">
                <video src="https://tinybird-blog.ghost.io/content/media/2025/04/costcalculator-1.mp4" poster="https://img.spacergif.org/v1/1920x1080/0a/spacer.png" width="1920" height="1080" playsinline="" preload="metadata" style="background: transparent url('https://tinybird-blog.ghost.io/content/media/2025/04/costcalculator-1_thumb.jpg') 50% 50% / cover no-repeat;"></video>
                <div class="kg-video-overlay">
                    <button class="kg-video-large-play-icon" aria-label="Play video">
                        <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                            <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"></path>
                        </svg>
                    </button>
                </div>
                <div class="kg-video-player-container">
                    <div class="kg-video-player">
                        <button class="kg-video-play-icon" aria-label="Play video">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"></path>
                            </svg>
                        </button>
                        <button class="kg-video-pause-icon kg-video-hide" aria-label="Pause video">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"></rect>
                                <rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"></rect>
                            </svg>
                        </button>
                        <span class="kg-video-current-time">0:00</span>
                        <div class="kg-video-time">
                            /<span class="kg-video-duration">0:26</span>
                        </div>
                        <input type="range" class="kg-video-seek-slider" max="100" value="0">
                        <button class="kg-video-playback-rate" aria-label="Adjust playback speed">1×</button>
                        <button class="kg-video-unmute-icon" aria-label="Unmute">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"></path>
                            </svg>
                        </button>
                        <button class="kg-video-mute-icon kg-video-hide" aria-label="Mute">
                            <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
                                <path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"></path>
                            </svg>
                        </button>
                        <input type="range" class="kg-video-volume-slider" max="100" value="100">
                    </div>
                </div>
            </div>
            
</figure><p>A quick summary of what is happening under the hood:</p><ol><li>The user input is passed to an API</li><li>The API uses an LLM to generate structured parameters for the Tinybird data API</li><li>The component determines what kind of chart to show by analyzing the user input</li><li>The component fetches the Tinybird API with the LLM-supplied filters and hydrates the chart</li></ol><p>Let me walk you through how we built this feature. If you're interested in building dynamic, user-generated data visualizations in your application, you can use this as inspiration.</p><p>By the way, all of the code snippets I share below are gleaned from the <a href="https://github.com/tinybirdco/llm-performance-tracker"><u>open source LLM Performance Tracker repo</u></a>.</p><h2 id="the-components">The components</h2><p>There are 4 core components to this feature:</p><ol><li>A Tinybird datasource called <a href="https://github.com/tinybirdco/llm-performance-tracker/blob/main/tinybird/datasources/llm_events.datasource"><u><code>llm_events.datasource</code></u></a></li><li>A Tinybird pipe called <a href="https://github.com/tinybirdco/llm-performance-tracker/blob/main/tinybird/endpoints/llm_usage.pipe"><u><code>llm_usage.pipe</code></u></a></li><li>A React component called <a href="https://github.com/tinybirdco/llm-performance-tracker/blob/main/dashboard/ai-analytics/src/app/components/CostPredictionModal.tsx"><u><code>CostPredictionModal.tsx</code></u></a></li><li>An <a href="https://github.com/tinybirdco/llm-performance-tracker/blob/main/dashboard/ai-analytics/src/app/api/extract-cost-parameters/route.ts"><u><code>extract-cost-parameters</code></u></a> API route</li></ol><p>In addition, there are some utilities and <a href="https://github.com/tinybirdco/llm-performance-tracker/blob/main/dashboard/ai-analytics/src/services/tinybird.ts"><u>services</u></a> to simplify fetching the Tinybird APIs from the frontend.</p><p>Let's take a look at each of these components
sequentially to understand how to create user-generated real-time data visualizations.</p><h2 id="storing-and-processing-llm-calls-with-tinybird">Storing and processing LLM calls with Tinybird</h2><p>The basic primitives in Tinybird are data sources and pipes. Data sources store data, pipes transform it.</p><p>The <code>llm_events</code> data source in this project is designed to store time series data: LLM call events and all the metadata associated with the call.</p><p>Here's the table schema:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAK3CQAAAAAAAABBKUqGk9nLKv92oEM94V5GWX9vauW8Rhn1cAMprkIke0oPk_QP3n1lYtzB8k2R4hNdogdsp4fq5vFt5dK5lCZ8rrJC1X1D2hY2s0ygPfNPtYwtaOVR87vb5KEvdWJ0rFnrMA1FiKSA_cpMhkmZzns0hJiPH1dIS_Y5JD8_zn-7s1X5f7FNBkvV-kgFtkKtLX7Ml0zzVrfYTRFRMz9_YdrGs7pXas9bOsTidWSGD0MYGNBD54JpEC3iGPK6kB2I1ahFkxHOBAPvmrAoEV2FQKl-21YRDCAO22E6It3Yf8_DP8waFezeBcx8YZYQ4UX1hU6OF05l5UAMMYbEJo9w1zBZ_MYyw2Thon6ZO8D1NJcQC4WAfHJykEs08BRtStPdr_rMXJLd9j8CEFiKpKuAmYlntF5pIBiTYWPezHFA7G-l211uz9zSgzo8pWH-e2ZAni_61lPrtJTu5IfsGCi38yCnr5Pu4KYcnKuecAlbfdbfN6UFPHqJ4BZk-_bFzKJf6sOuMmIRTvdj88M9iljbEDzP3FbD5RNvgpPR4DTPRv7WlbNZ5R6yYbgLaVtoczTLBr-CpaI_ihe2n6gT3Wdd_gf_0LbfDwd7QJKaqisXa0klSTwrKPiMUvUcWa1HH75ap3M83sR6sPdd-rwu2VPHIrnJVgV1XdkeoYN6qiB5tKZZ5LBt29hMTGhYX9KxET2wvrkFHqjHleLGOCsPa6dn_v8JLs5GdWH-p4-Z2NI3XTV1SkAZjqFFkaW6YSHk2wacNlZOytPq_cCFjvnWSOSXlpGPFeNYOcKwqtnJ29rmzdWlKTaxGgdRWQXSSW_PBcUQJR9E6b8DAUatH5YvqiHIU35uJx8D13-0-b7SHj3LnFJyoR-MAo0H6QrzqrQ5lwHlKkBGeleWPEaHGY6FcI_hRV6to7wUM-H6KME7SY0KFc6CtpnJGLHnfOn-Ss1cifnDAw3jK9oNgX6OBSZjM_yA3dOkh_09m0AZNXO549yCbqX-4UJM__Kb23U/embed"></iframe>
<!--kg-card-end: html-->
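To make that shape concrete: events land in this table as flat JSON rows. Here's a rough TypeScript sketch of an event and an NDJSON serializer for Tinybird's Events API — the field names are assumptions for illustration, not the exact schema (that's in the snippet above):

```typescript
// Illustrative only: the real schema lives in llm_events.datasource
// (shown above); these field names are assumptions for the sketch.
interface LlmEvent {
  timestamp: string;        // e.g. "2025-04-16 12:00:00"
  organization: string;
  project: string;
  environment: string;
  provider: string;         // e.g. "OpenAI"
  model: string;            // e.g. "gpt-3.5-turbo"
  prompt_tokens: number;
  completion_tokens: number;
  duration: number;         // seconds
  cost: number;             // USD
}

// Tinybird's Events API ingests NDJSON: one JSON object per line.
function toNdjson(events: LlmEvent[]): string {
  return events.map((e) => JSON.stringify(e)).join("\n");
}
```

You'd then POST the NDJSON body to the Events API (<code>/v0/events?name=llm_events</code>) with a token in the <code>Authorization</code> header — or skip all of this by instrumenting the Vercel AI SDK, as the template does.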
<p>The <code>llm_usage</code> pipe defines a SQL query to select from the <code>llm_events</code> table. This pipe gets deployed as an API Endpoint, with <a href="https://www.tinybird.co/docs/work-with-data/query/query-parameters" rel="noreferrer">query parameters defined using the Tinybird templating language</a>:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJVCAAAAAAAAABBKUqGk9nLKvRcem1g1yLAAZOZgfyRQ-hs2c06PdsqraqnH404OiE1CSsrNPh4OfF9Q3t8LSFbwF6T71d9rT2kEKWQcKlkFUkSlGHT9mcpeS7iFKlKl97PavN93KpDR9K_PQSSI3DGZ2fKszWWMM30WlkRcD_QwELoo09swpxeBOqMWQAE9Yj6PxEvqu8vkypIWVq7w99R_dHCADbzBpyPv8iIcrDpM6D5cirk1obns7wE5pa8X56_rn8gJBJYVoI-OPDtbAtj4zTSB2s9uJr31aYGuGbgZvvgTZbyFZiHx785PfKcaIL10FEaNGp5CJGCxqm5vxJPccraUC4KK3dKAoF-e4NcdfHkQTfU6aLTLBotsJ_HSeANFQmVz14vGwGHbmPS9FRK5mjiWBRYeTmSat40fA39_qE_slAbUX5Pb6zYEyDn0nbQJWF0UwM1TS8qgCKpDZX9cTWQX0zLuU23kLGegXzLuFV6IJ6tVJ4VpEjzY1bh_oLKRvY4QlTjFM143eMTuUeWbw0fmzLElOhxatwLsyH8aooQoa9zl7jytPzx2nuzVmUNY3VPLlkvoPn4BSqzt4ZpeBj6UcaJ7lkBwr9R3fm0C4GRmhB2XLaG0QDOq4RJHeHuq4uzjAviXeNSqyrkbSkrbvFnNoLF2Z-XgBhwr4_VY2kJpeFSi4cJPE4i8t0tw29S7AXvYyj54wZU4_ZtxC3m6h8aRYWsIu4IcvQybyCinoF7WKav-ssakFkBn_E5RNc2Xil2f0nFXfpS4SVgzF9FF7UsoDVBsC14aGbWalVJ9Y037xzNsfexL3CN_rjr_2iJrXPHv4YX3zHClV2c0i4zTIsztdT2Ruc7k95XEnTfi9q3WFFhFxWk88KWBnVeFiVQCK4v1CQC7PaeupcAdl_Cut6_ZOygZBLEXZiKo2NptbV0TzRSyx8NTAuiGCyZsRJtiyfw6PbKplwZp-3_9eonBw/embed"></iframe>
<!--kg-card-end: html-->
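For a sense of what such a pipe looks like, here is a drastically simplified, hypothetical node with templated filters (the real definition is in the snippet above):

```sql
NODE llm_usage_node
SQL >
    %
    SELECT
        toDate(timestamp) AS date,
        sum(cost) AS total_cost,
        sum(prompt_tokens + completion_tokens) AS total_tokens,
        count() AS total_requests
    FROM llm_events
    WHERE 1 = 1
        {% if defined(provider) %} AND provider = {{ String(provider) }} {% end %}
        {% if defined(model) %} AND model = {{ String(model) }} {% end %}
    GROUP BY date
    ORDER BY date

TYPE endpoint
```

The leading <code>%</code> enables templating, and each <code>{% if defined(...) %}</code> block makes a filter optional, so one endpoint serves many parameter combinations.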
<p>A quick explanation of what is happening in this pipe definition:</p><ul><li>Aggregates LLM usage data (cost, tokens, requests, etc.) by date and, optionally, by a specified category (e.g. model)</li><li>Is secured by a <code>read_pipes</code> token</li><li>Includes dynamic filtering for optionally supplied parameters like model, organization, project, environment, etc.</li></ul><p>Once deployed (<code>tb --cloud deploy</code>), we can access this API via HTTP and supply parameters in the URL, for example:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAK8AAAAAAAAAABBKUqGk9nLKvLGdEM9_hRqB8UV-wfqh51RlY047_S4P7NE1GxPxjw7PEdxriv2JvSnF1LgWaR-DEZfIYGAtQJtFrAXe6Zd0VrSaYcXT1NZ6UcLLoyt-13vclTNJMGUjCbWxNgB3ockqAJoIgENdbVwfDet1HQk9yw5sEJQ35eWwRmaYhESN4_CTv7tNZuml--iDI_X0KCF9gBQOPOtaM2blrinuL-InyTRX__FwsAA/embed"></iframe>
<!--kg-card-end: html-->
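From code, the same request can be assembled with <code>URLSearchParams</code>. A hypothetical helper (the endpoint name and parameters match the example above; the helper itself isn't from the repo):

```typescript
// Hypothetical helper mirroring the URL in the example above: Tinybird
// endpoints are served at /v0/pipes/<name>.json, with the token and any
// template parameters passed as query string values.
function buildUsageUrl(
  host: string,
  token: string,
  params: Record<string, string>
): string {
  const query = new URLSearchParams({ token, ...params });
  return `${host}/v0/pipes/llm_usage.json?${query.toString()}`;
}
```

Appending <code>.json</code> selects the JSON output format; Tinybird endpoints also serve <code>.csv</code> and <code>.ndjson</code>.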
<p>This will return a JSON object with time series data containing all of the aggregate metrics grouped by model, filtered only on OpenAI calls.</p><p>This API is designed for scalability and speed, and should easily respond in milliseconds even as the number of LLM calls logged grows into the millions.</p><p>The time series chart in our dynamic UI fetches data from this API.</p><h2 id="defining-an-api-route-to-generate-structured-parameters-from-user-input">Defining an API route to generate structured parameters from user input</h2><p>The <code>extract-cost-parameters</code> API route is the key piece of AI functionality. The LLM's job is to take the free-text user input, analyze it, and produce a set of structured parameters that can be passed to the Tinybird API.</p><p>To do that, it implements the following logic.</p><p>First, it fetches the pipe definition for the <code>llm_usage</code> pipe and the available dimensions (from another Tinybird API endpoint, <a href="https://github.com/tinybirdco/llm-performance-tracker/blob/main/tinybird/endpoints/llm_dimensions.pipe"><u><code>llm_dimensions</code></u></a>):</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAALRAgAAAAAAAABBKUqGk9nLKvxKlEM934GGgOxtdUn0WAl4wvZqnO3ydAvOPSNRMhEDnVomodOcKcW4wCJxjtmIXfLMz2f0wstGTjmo_HQMEGjzgKXOzxz5qCQ32ugkQO3Zx_56cqqTTtBRGKZYsfIQdIAgSbnR3nPTMNHF2fukfQtRTEz7oLtyyiiK5ryH2j0Y_UFpBjZzD1BczOsoykzMvUZP7BNlGP5IdKKD9zJCVhPQTjF5IqS12yIIhOspTyTWVgZaijKjmGGwAmmPrOOs4JHos0-mm-vsNBYgvKTpCo_otQxSA8Q19x2HDi35GXdxfFESN_uGmeBozHGBxt45xPYoNLNWq3E9IGHjmBXThXN9-SoJjcfEeDclCaYdZC6NKdxctzOGL_Pq7kLvRnbDEwkKNKyNKLxcTOPa3gzmZc5wzpaQo8WLfquEm-mpw_G7UjYlygDrKBitY2CKZbNwuZmPw__Vag7-Q5QKKx__6iDliw/embed"></iframe>
<!--kg-card-end: html-->
<p>The available dimensions are used to define the system prompt for the LLM, so it knows which dimensions are available for filtering:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAIkAwAAAAAAAABBKUqGk9nLKvLMcmPuBTHTySrjaktgI8wlsqjEARYd06petVRkTQC6JkpOFeZLsjH2gjJ-7hQ1f7_e1L6msHbh1rYl4zwfFyDat9A-FmENDki1h928pVP5LgzbTN7Q-EMEmCV05rWgpJyG6ORDgasfHG_ZMcPhP1WX2Tml1pJVHttySbQ-DV5lh-ywalXPgWRLUXWmAXo2dKNV7taAgb5LqsnUXtoVF2h96rOi1PZoMuGkaBJ4eqd6ZzEgXUopjpBxadB-mkN3KaBaqjSNtEXfU0YVEkHof0l01JLeVFQmQWqedo5-p5OQGfaf-f0w6S3TX5-EO2mROOESvqbAd8YuxKWHSjjlNTgrnm1pfBjyE3_Y6xvuqt-S2CUVy22NPTl0xcn7dcrWCBEZCHBhDMFwmeZYXWkAcaW9PZpAgEVmS4BBJj4ds0i3mTB83KRKYU9SY2_5Ac8aYdKsUrvYGKVYALMo5hfbXgxpaSUAyCc8VuwAYFUd0oqiABhXPUQK0TUZG-nIz3F_fKv1yeG5EilN0BOf0L3Xxywjg8Zlf9X_Mv0sz2PLFjec-Vjtw6sGgrhEGms7lUoyg-mAqOhOsTxmfcl0y06iBCjsO12PF_pkzKNuXOy9AIjq3etWNNe9iQ8g-yfZG50MPjvfRIOQXa-892Bbge21Cr8byv99wkFvbqFe7roS3__w6ARf/embed"></iframe>
<!--kg-card-end: html-->
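In outline, building that prompt amounts to string interpolation over the dimensions response. A hypothetical sketch (the prompt wording is ours, not the template's):

```typescript
// Illustrative sketch: fold the available dimensions into the system prompt
// so the model can only propose filters the endpoint understands.
interface Dimension {
  name: string;     // e.g. "model"
  values: string[]; // e.g. ["gpt-3.5-turbo", "gpt-4"]
}

function buildSystemPrompt(dimensions: Dimension[]): string {
  const dimensionList = dimensions
    .map((d) => `- ${d.name}: ${d.values.join(", ")}`)
    .join("\n");
  return [
    "You convert free-text questions about LLM costs into structured filter parameters.",
    "Only filter on these dimensions and values:",
    dimensionList,
    "Respond with a single JSON object and nothing else.",
  ].join("\n");
}
```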
<p>Then, the request is made to the LLM provider (in this case, OpenAI's gpt-3.5-turbo model):</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJOAwAAAAAAAABBKUqGk9nLKwduvs2oyzbruyvHfEi2yF26Ejx2hIApAGZgRuNW4qLRyydRMVOSumPcpok-_amklkVYOcafyCdQoNHXkxW1FyQ_OxRwb9SHncPL-3hWKd_S3xgL_bFeyaVJeB7HgBKHv4S-qP98_AbvkaaekpboNu2656Bi8J41Xp8SEi7cDPZbR5HONTxdRR0A7-FDQHwMVrNC6nIwnnftQjvQft4XBJCos2s3kUJm9QfG-uCPJhI16kgtLy474DOg61gQOpIyoQ_6yOGBE-c7LTg-Y7mLpjTIhK9DKX7AYbp8SSPvzyIarNJD5ZjkmCe10dVRsRe1fWS8DF0tgIvOl1k8IbYB6qxApy2CiFP13j0yW_CCBo7cxba-xFqXcr4MmlQ1Uy6gN71BxNTpo9A6YtNOxuIFYLDJLeeZ7wiIFLIqnjA2zbtQut-BJzjUV402QHECygfIUzoxHavzXMUSRXBuyAowtQuv2dF5QpeqTfb9CFR30-wdWOZGAioFZVtx-C2gjGkTYV0Fh7y9OBzCV9fNCKnqc7pzRRcNW0NWwlYHJJ9SHPu1WY0sMfQAyfk5KIWJeZvw19-4IsTraPnlik3s4GzqaW9pJLnRMV1zo-94kUM9MfW7s2A4ktWUTy7JxWODJAZvTIQZJGOcVd_v_9ucQ64/embed"></iframe>
<!--kg-card-end: html-->
<p>Note that we're using a wrapped model, which is how we <a href="https://www.tinybird.co/docs/forward/get-data-in/guides/ingest-vercel-ai-sdk"><u>instrument the Vercel AI SDK</u></a> to send LLM call events to Tinybird for usage tracking (so we're using this app to analyze LLM calls while also analyzing the calls made from this AI app :mindblown:).</p><p>Finally, the backend does some type checking and applies defaults for missing parameters, returning the structured parameters in the API response:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJTBQAAAAAAAABBKUqGk9nLKu3-imX6GRIJt6C6o4bmB9fBvqXdZLidFQLC6KSFJJNOzM0gtieBPWOPVILEBWXG6CjMa0Y9noZvEuCSeNvkCVKKJWGRuV0y_ZRjZfo3RYANKbrxBPKjHgjl-6ph-MX7_voej0xJ5GYdSZ8REIl3DNZc_-xEWiLqNtJeJpr-0JsA8sLDQ_DLjhvKv2fzeSbntmC6EfUZutSfbpE2_xoXLgv28XSb6Uvbito1RMsnKJZmILecya8QKVLSs1Nr_TohRitCEub7LYTWoubDkQFAgynRnLXi4EtAOxmt8TEqANSclkTIT08mZ65lPDISz2QzPk17vZXN538QULchFqyQozyiWVZRG-8eliP52surAVqVZvyFmPR679Lt--Cuwpb804EM6MKH4aK7GHBEKnTWZZjzCoFr8n8ucMkF-3XTErDDkGuP_WD-JCbTRx_-T588MsYHZYC-Qb5lyObLWTP0ZYYeyDKVZ0nv4mrxmTZm8rNrf2b7PsY4CG_80twE-Sr8P1fDdZpL4A2m5WR006NmwFyEaaK1rnxiu8ekCeFsEuakweR-KHlBEbRrIuUgdlbi2m3JIIWXq9Ft_8Xk57IYrWQyIp_ZJTOHfU0eLlkwxryOYnpUlQbqJ6_LZ8N3bOpT7_oZsLWP6veQNeLCaen1sdj2pl__9HJKw1ZgLgObE9PVwz91DPEDq9gGreYAf_n_zTWnXw/embed"></iframe>
<!--kg-card-end: html-->
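That defaulting step treats the LLM output as untrusted input. A hypothetical version (field names and fallbacks here are illustrative, not the route's actual ones):

```typescript
// Hypothetical defaulting step: the LLM's output is untrusted, so every
// field is type-checked and missing values get sensible fallbacks.
interface CostParameters {
  model: string | null;
  promptTokenCost: number | null;
  completionTokenCost: number | null;
  discount: number;     // percent
  timeframe: string;
  volumeChange: number; // percent
}

function applyDefaults(raw: Record<string, unknown>): CostParameters {
  const num = (v: unknown): number | null =>
    typeof v === "number" && Number.isFinite(v) ? v : null;
  return {
    model: typeof raw.model === "string" ? raw.model : null,
    promptTokenCost: num(raw.promptTokenCost),
    completionTokenCost: num(raw.completionTokenCost),
    discount: num(raw.discount) ?? 0,
    timeframe: typeof raw.timeframe === "string" ? raw.timeframe : "last month",
    volumeChange: num(raw.volumeChange) ?? 0,
  };
}
```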
<h2 id="gathering-user-input-and-displaying-the-chart-in-the-ui">Gathering user input and displaying the chart in the UI</h2><p>The core UI component is <code>CostPredictionModal.tsx</code>, which handles receiving user input, getting structured parameters from the LLM, fetching data from Tinybird with the structured parameters, and defining the type of chart to use based on the query.</p><p>First, the component handles the user input:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAKNAQAAAAAAAABBKUqGk9nLKu4MZ3LH51JRwnOfmw1dkQH8H4mkRYHcPrrV82FD-9ovAqQlx7Y_DcYajA03nbmbX3juniBI-KpaqKbKMNv5-gHkXIsqTD-_epjbmb9OtEfTNfGjl3qqCOEqfqG-OTl341Tu_ZdrFmVh4WftwQsIUkRPKI-z2DEEYObZyFbAvioTCIoi95VI7bKOJsR2-X9JhOiEskg-njmoDmm4JA6V9RIePg9BqyTZxOua7EDS5ZOGHcO0smwClq9pGqI6Q3I17sq8x4BBz5ogxqAh34CzDZ3J4n86fatQ8UU8ZxVmuFYmA6NrpKmw8Mg32mvDWRz3Qs8IfqrUWmDVecWKUoYhzpQA6Gxjd13z8Z10h8x_7g1q70v9AN2ITx0Iz67Cc11xs5ZOzJz_uBB8gA/embed"></iframe>
<!--kg-card-end: html-->
<p>On submit, it determines the type of query based on heuristics (this could easily be handled by an additional LLM for more complex use cases):</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJMAgAAAAAAAABBKUqGk9nLKvd_-pU5WCZH3mFLoa1v0vJMPG88NjzCN6IfboRMggmKaqNLvaqROmJl49zIuX6w8ie2DRohAkNzmPbCTD88zNFvXAMC-loCWbjAUV45wQhzIE13xNr1GMbRy-NzMVcAUjdxVTAM8NaH16T7kg8n-AsgUgV-xTfcHbSUCXDo4RhHRtWktFj0RamI3b-jp5nC6KyYyT5D0PRgkvbDvUEm2h5IfliJMC-Ovgb5YC_iYCEgX-znx2zWKz8TBFMo_BM6aZCYibS_ad7wSei9Pi-DKk3rsgpNkAVZg1HMV0yU498CwoeSUgJJj3ghYUEo7Jd7zyhIuzHBcUvdGHBJPNzFEDF-EN89Zi5PCrwxIuMAk3MNDp0QN1VLb-Zg2V6JUHM0_m5yKA/embed"></iframe>
<!--kg-card-end: html-->
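A heuristic of this kind can be as simple as keyword matching. A hypothetical sketch (the hint list is ours, not the component's):

```typescript
// Hypothetical heuristic: scan the question for grouping language to decide
// whether the user wants a single trend or a breakdown by category.
type QueryType = "trend" | "breakdown";

function classifyQuery(input: string): QueryType {
  const text = input.toLowerCase();
  const breakdownHints = ["by model", "by provider", "per model", "compare", "breakdown", "split"];
  return breakdownHints.some((hint) => text.includes(hint)) ? "breakdown" : "trend";
}
```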
<p>And determines what type of chart to use (<code>AreaChart</code> vs <code>BarChart</code>, multiple categories, etc.) based on this analysis:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAIJAwAAAAAAAABBKUqGk9nLKv3YDpMJRgSgXaD9U8TtQzpZK_gDu5e988x6X6fIbhdsdTh9bH99vZgB9hO2E_2nWI6rRUySHgFlISZkq_-KNbB2EWT6nQHnNKMydE9C6ob1oypa9FLOl_MUgm9bVPWWxDntJNtSesPX2WfxQV8T6qFHTjdbHMHI901QSE26Ah8Z3O2KX6Wigm7eIDXcbea4tYS5AcfNOcAH7YLiXmDi_L7QB9m5ot8u30vUDVv3ckMbc5SD0cci7WDxM4y-qBxQse9a3a3PGKOALEmecTg1Iqlgibg4NMXprS0gDCLRYdmW5Bsyv5nuiSxEd1NNF759YCiNRDuYZlD2-XBp80wqeT01IgOe18zJRb5cnVvPzxeisaHK7OB48wA3a3S5oiYqQDe2U7-3IllC-MOtc8TbYD3PUTOFdC_kDSwPvIzbnEQ6BsNgy3thjHXiGbwTkbzjyyTov1rNO8E__ixbo2G_uAX-7nteWS4vEhqKTMIVjmoKbHC16yvPfCYeDJ7NdaGfkueFVRD04tIN6wfwbYoT_5LrtIc/embed"></iframe>
<!--kg-card-end: html-->
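The chart choice can then be a small pure function of that analysis. A hypothetical sketch — the actual rules live in <code>CostPredictionModal.tsx</code>:

```typescript
// Hypothetical chart picker: grouped comparisons tend to read better as
// bars, single-series time series as an area chart.
function pickChart(
  queryType: "trend" | "breakdown",
  categories: string[]
): "AreaChart" | "BarChart" {
  return queryType === "breakdown" && categories.length > 1
    ? "BarChart"
    : "AreaChart";
}
```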
<p>It then passes the user input to the <code>extract-cost-parameters</code> API route and sets the parameters based on the response:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAJGAwAAAAAAAABBKUqGk9nLKwdn0SRqLXZyqtRyto0hGytroB6N4lDUsJyNa9Y73FmrGXvqwih_c1nOxytI8EehTq3N431F8SVv7_N0YXV0YXVSgJWS_-KtoogHKsz_FDBl7g-L2690PKhbJVyackByJZsRLjR-zJPlMmD-A2uO7WsX2EcaWIhM4RU8YDMDuwmhTn7fpDennFv8XYondUbV5w3TZ5mgNyZjrnQzLZQe0liJr7OHyACAd_UhD_eauhrxJ_CE_JqcgYDfqDXs4e28ATkcICGbhw3vHOiRtloRYPl1LuO-Ec7htGoL_ZwM0uBh6mxaMPdI2MdUzgTlFf69xnje5yGtL_q7s742R0V_QPh1uTMbpp2XscELbZrdO0rz6bLljKqXuSlAurNG7Fsis4lgfIQuWX1RnUZ6FymH9CmYOnw6npS_lNr3R7K97sdJ3eYPjocjq5--6KpFSpb88YgbenNs2B16So7MUIh_ydZ-yQYLXdZRFP0J7Gi3GPXsyKellQCCnY3Y0SxiqRDGTP9pcI7N9lHbfFrjAfZ7kuE5ogH4JYHvI8hWnnvjO1ONHd6K2WK21zZALBC8MepvtCgo0zkXXNmqV_dPPL5doafsb8rqpAsNUTETx4yahswzKXNmM2D8bRrmktgqMZ1h7NH7vcq78o654ck5bkxaEzfUSW6LT1vtfbgJ97ZtkNGkvhn_BdJWAA/embed"></iframe>
<!--kg-card-end: html-->
<p>Once it receives the parameters, it parses and cleans those parameters, fetches the Tinybird API, and runs the <code>calculateCosts()</code> function with the result:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAItBgAAAAAAAABBKUqGk9nLKvXp75Y-I_9hVTJmHh6YYqAD-D8NQOASLOGYKJXiYFyqqEBxgw6EdnotsxZyu8-w0yBw66TN8003q7SKiEb3AK_5oKoPiR7C2PFusq8ljVLneuC8HmZ1mpLjRlSrKomxU1uqqSv4CMW5BjM8fpJWb2EN-U8AlncwJgQ0qXqNTirjtl9J_Q6juRPPubaDScumnCv5of_ZE8G3SqqF-6tIiY3wcCAjhqYwA0dQlHz9ACTJFRFNjh1-Kr-d3NzGEu7-xKWhDz4MfBskEhsZsQ08edo5a9Ja5G7zDoeRcj1-KjdatzphIbtAMg4znoJP7JsvjJm7j4EwcNPprBPOT4KsJmqLEcuauz5GrGxSlF4mfNfhx9nbkvdN5J7OgR9fBD9vjqMMJNukUrXXph1ZlrB5ecUjhYfDNidCbuqXpq50F6lL97pT3_qlqBQAqYHJQKSeUZ4zM9-IG7kodO_aoKQvvkEsOAdu07CTuexkCYRGGiZ3FmG92Vo4IboQjhvbPUhfiP0HLIx-mKm4zw6VRRPg9bMLE_dSBX773l3Z7AvKQ9VCXr215xtBB4fZMsUYTg4Vwhkq9FvKdYJu7zc739lfj8YRoxjAHfOat97-15Fn2VlG8zcabf6mhXYcZa7ZvQKvEC1WLcZrZSDEn6xijyuGBlICUt33fiyYCTdrOFHoESdbOrdGNwXmk9RTVN1d8z4CZ9jHTVimCL84QgAs5RTFxjFH2TS2S_R4c594cvZfcNyfUMmRWJf9e9MipxEktbgrf3y_CY5d4DRD3vtgjyq0GDzYXu-B-hxL7mH_TGyLqbYj7z3BkjCijWc1qOs232JvQRFiKkDN1GmXgAEq6DMwpNtNYJNRx9ckb611iw3U_w0jm6hBUMG5fmQguP63LcZ23t-pVx7WYZ1n_6adUx8/embed"></iframe>
<!--kg-card-end: html-->
<p>The <code>calculateCosts()</code> function is responsible for setting the React states for the data and categories. For example, to process non-grouped data:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAAL4BAAAAAAAAABBKUqGk9nLKwQ70V1SsgamjB_cMVPzEGeHyDB4yfXy-Kp0usp9votzsxhL3aZfRU-OyKmxp4gof0YCCwa6jMUArVmuSBFdpuQeNyhcEdXXHfw0Wz9W9Yean-5Oh4SonTJzOVPmB9pbIYuECPu_d1e1f-Jda5CfMHIfljJark99FDVh0REv-MWqgycH-ewrFA4l_Oog1ipb3MyCeC4lUXPdiTfSzIW9cDIwjBWfb1HaqKa4ZHWx1eXdm41zdLG7heqAdHy_FS65MOvHyrPjtX1dQtgo_l40nLFytetcm5UUiOI3uDD24XZwweXQD2FRrza__LZE0bn9fG9otz83kq9OQfzk4qb0x0z2XA3aHP902C95ircGrHM0hYwHsQD8WncMXqSXxmfhNixIAt77kXYs2i-2ohxNAg7K4cKLdRMu_zvrj88AcMhRuD4zrRGGIo4YiVgfVvGIfd210T4dCk_dBFG-XQ2T5VSFyAz_wqhjU70Y7eYXG4-Kw2Vyd-PQLC2FxBFobJDZY5LkUuGB9cS4Ob0--awL4tUVHsXnL0rwWuhtS8rb63FvxwPP052kc093vf3uoOzMSIeoF3HfKG2Yb48K2JNxvhmzwOJyYj1zy9hEgDw3uay9UUJ-slNE3te-_JRFzNxKxlsVKbrN3YrtyESvmbCkPuqBq0P-H_AZGYq7t0I6PM8YwlFQJBgVcCdt4gUyNzYvR3UdDbvnZrxx0yCqsE-ijfQEFUGsVr4yFhFIm21qYAmUoB4zHJREGYyKs-l_CwRAKHW1ylS0DczqpX2nnELyRKxU6Z3VRP_V2Q1BZjKO2Y9lvgHylLO1pgs1OIMk77AdJMbsaZ-CrHtb_48A4dk/embed"></iframe>
<!--kg-card-end: html-->
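In outline, the non-grouped branch is a map over the endpoint's rows into the records the chart consumes. A hypothetical sketch (row shape and series name are assumptions):

```typescript
// Hypothetical sketch of the non-grouped branch: fold the endpoint's rows
// into the { date, <series> } records a chart component expects, applying
// the user's discount along the way.
interface UsageRow {
  date: string;
  total_cost: number;
}

function toChartData(rows: UsageRow[], discountPercent: number) {
  const factor = 1 - discountPercent / 100;
  return rows.map((row) => ({
    date: row.date,
    "Projected Cost": +(row.total_cost * factor).toFixed(6),
  }));
}
```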
<p>Finally, the component renders the chart with the data/categories stored in state (using chart components from the Tremor charts library).</p><h2 id="the-result-a-dynamic-chart-that-matches-user-intent">The result: A dynamic chart that matches user intent</h2><p>This is a relatively simple implementation of a dynamic, user-generated data visualization. It uses heuristic analysis to define the type of chart to generate, but this could easily be outsourced to LLMs as well for a more flexible/dynamic implementation.</p><p>Here are the important takeaways:</p><ol><li>We can use LLMs to generate structured data snippets from free-text input.</li><li>We need a performant analytics backend (e.g. Tinybird) to parse those structured data snippets and return the data we need to visualize in real time.</li><li>We can define the type of visualization to create heuristically (as we did here) or using LLMs.</li></ol><p>This pattern opens up a bunch of possibilities to allow end users to generate their own data visualizations. All we must do is give the LLM enough contextual understanding of the underlying data to create structured filters, aggregations, and groupings.</p><h2 id="discussion-why-not-use-llms-for-everything">Discussion: Why not use LLMs for everything?</h2><p>In this demo, we used the LLM to take a free-text user input and return structured parameters that we could pass to our real-time analytics API.</p><p>Things we didn't use LLMs for:</p><ol><li>Determining what type of chart to produce</li><li>Generating a SQL query to fetch the data</li></ol><p>Why didn't we use LLMs?</p><p>Well, for #1, we certainly could have. The use case was simple enough that it didn't seem necessary, but it could easily be augmented. You simply extend the LLM system prompt to ask it what kind of query it is, and include that in its structured output. Easy.</p><p>#2 is a little more nuanced.
Yes, we could ask the LLM to generate the SQL for us, and then ask the LLM to generate the chart component based on the results of the SQL.</p><p>Here's why we used a dynamic API endpoint instead:</p><ol><li><strong>Encapsulated logic and best practices</strong>. If we're repeatedly delivering the same kind of analysis, having a somewhat static endpoint (with dynamic parameters) can both simplify the system and improve performance. We can encapsulate good data engineering practices into our query, rather than relying on the LLM to produce something good.</li><li><strong>Authentication, security, and multi-tenancy</strong>. Instructing an LLM to query a raw table of multi-tenant data carries a significant security risk. What if the wrong customer's data gets exposed? We could isolate each customer's data into a separate table, but that isn't always feasible. Using an API secured by tokens/JWTs guarantees security and data privacy in multi-tenant architectures.</li><li><strong>Rate limiting</strong>. Related to the above: we can add rate limits to user tokens for the API to ensure it isn't abused.</li><li><strong>Better observability</strong>. If LLMs are generating SQL queries willy-nilly, it becomes much more challenging to monitor performance and debug. While <a href="https://www.tinybird.co/blog-posts/instrument-your-llm-calls" rel="noreferrer">LLM observability is getting better</a>, this scenario would add a lot of complexity we don't want to deal with.</li><li><strong>More deterministic output and resource usage</strong>. LLMs are great. But they do not supply deterministic responses with deterministic resource usage. As a SaaS/product builder, I would be wary of outsourcing too much functionality to LLMs, especially functionality that can consume considerable compute resources.</li></ol><p>LLMs will get better and better at writing good SQL.
For now, we're sticking with tried-and-true APIs that give us speed, simplicity, and predictability in our analysis.</p><h2 id="get-started">Get started</h2><p>If you want to see the full implementation of this feature, check out the components mentioned in the <a href="https://github.com/tinybirdco/llm-performance-tracker"><u>LLM Performance Tracker repo</u></a>.</p><p>If you're new to Tinybird, you can <a href="https://www.tinybird.co/signup"><u>sign up for free</u></a> (no time limit) and create real-time LLM analysis API endpoints in a few minutes using the template:</p>
<!--kg-card-begin: html-->
<iframe width="100%" src="https://snippets.tinybird.co/XQAAAALYAAAAAAAAAABBKUqGk9nLKvxC6zJFHlZRqNYu38bIIaK7bdfYWGQhnoy-eqEKFF_tUPfWKY4G-xH0jcFasjyfwaD5qIVOvLB821OpF5TXg0Si1oTKGMqBEBs4l3a29SZUnX0eIWkrv_O2qeHzQJ080s5dNBXEppanNhjuRC-aaC8tA86sofRUGsAP4f5i_5UP6DuobiW4VCz7Wx2xDxBGVYuv8Fff6uHZp-xPfSLe5Qn5O6PvI7dnc___5qTwAA/embed"></iframe>
<!--kg-card-end: html-->

