---
title: "How to set up custom dashboards for monitoring real-time cultural trends"
excerpt: "Cultural moments break in minutes. Build a trends dashboard that tracks mention velocity across platforms and geographies, with parameterized SQL endpoints returning fresh data in under 100ms."
authors: "Tinybird"
categories: "I Built This!"
createdOn: "2026-05-11 00:00:00"
publishedOn: "2026-05-11 00:00:00"
updatedOn: "2026-05-11 00:00:00"
status: "published"
---

Cultural moments don't wait for your batch pipeline. A track drops, a hashtag catches, a moment goes viral, and within 20 minutes the conversation has already peaked and shifted. By the time your hourly ETL runs, the data is a history lesson.

A real-time cultural trends dashboard has different requirements from a standard analytics board. You're not just counting events. You're tracking velocity: which topics are accelerating, where geographically, and on which platforms. That means fresh data at every filter change, across millions of incoming signals, without your database falling over when a whole team of analysts drills into the same breaking topic at once.

This post walks through building that dashboard on Tinybird: schema design for trend signals, a materialized velocity rollup, parameterized endpoints for every filter combination your team needs, and a React frontend that updates without a page refresh.

## What you're building

A dashboard that answers:

- What topics are accelerating in mentions right now, by category and platform?
- Which regions are driving a given trend?
- How does mention velocity compare across the last 1, 6, and 24 hours?
- What's the sentiment breakdown for a specific topic over time?

Filters (category, platform, region, time window) change query parameters on a live endpoint. No new SQL. No full-table scans. No backend changes when your product team wants a new slice.

The pipeline: signals stream in from your collection layer, a materialized view pre-aggregates mention counts and sentiment by minute, Pipes expose parameterized endpoints, the frontend polls and re-fetches on filter changes.

## Step 1: Design the signals schema

Cultural trend data has a few dimensions that matter for dashboard performance: time, topic, category, platform, and region. Get the schema right and queries stay fast even at hundreds of millions of rows.

```sql
-- trend_signals.datasource
SCHEMA >
    `timestamp`       DateTime,
    `topic`           String,
    `category`        LowCardinality(String),
    `platform`        LowCardinality(String),
    `region`          LowCardinality(String),
    `signal_type`     LowCardinality(String),
    `sentiment_score` Nullable(Float32),
    `source_id`       String

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp, category, platform, region"
```

`LowCardinality(String)` on `category`, `platform`, `region`, and `signal_type` is important. These fields have bounded cardinality (a few dozen distinct values), so ClickHouse{% sup %}®{% /sup %} encodes them as dictionary integers. Filtering and grouping on them is roughly 2-4x faster than plain `String` columns.

The sorting key puts `timestamp` first because all trend queries are time-bounded. `category` and `platform` follow because they're the most common filters. Queries that match this prefix order skip entire data ranges without reading them.

`topic` is deliberately not in the sorting key. Topics have unbounded cardinality, and you'll search them with `LIKE` or exact match, not range scans. Putting high-cardinality string columns early in the sorting key usually hurts performance.

## Step 2: Ingest signals

Your collection layer captures signals from social APIs, RSS feeds, streaming platforms, or internal event sources. Push them to Tinybird's Events API:

```bash
curl -X POST "https://api.tinybird.co/v0/events?name=trend_signals" \
  -H "Authorization: Bearer $TB_TOKEN" \
  -d '{"timestamp":"2026-05-11T10:00:00Z","topic":"quiet luxury","category":"fashion","platform":"tiktok","region":"us","signal_type":"mention","sentiment_score":0.72,"source_id":"tt_9182736"}'
```

For higher volume, batch multiple signals per request and use the TypeScript SDK:

```typescript
import { createClient } from "@tinybird/sdk";

// Mirrors the trend_signals schema above
interface TrendSignal {
  timestamp: string; topic: string; category: string;
  platform: string; region: string; signal_type: string;
  sentiment_score: number | null; source_id: string;
}

const tb = createClient({ token: process.env.TB_TOKEN });

async function ingestSignals(signals: TrendSignal[]) {
  // Batch up to 1000 events per call
  await tb.datasource("trend_signals").append(signals);
}
```

The Events API handles 1K+ requests per second. If you're pulling from multiple social platform APIs in parallel, you can fire batches concurrently without coordination overhead.
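
If you'd rather hit the Events API directly than go through the SDK, batches are just newline-delimited JSON. Here's a minimal sketch of concurrent batch ingestion, reusing the `TrendSignal` shape from the SDK example above; the batch size and error handling are illustrative assumptions:

```typescript
// Sketch: split signals into NDJSON batches and send them in parallel.
const EVENTS_URL = "https://api.tinybird.co/v0/events?name=trend_signals";
const BATCH_SIZE = 1000;

async function sendBatch(batch: TrendSignal[]): Promise<void> {
  // The Events API accepts newline-delimited JSON: one event per line
  const body = batch.map((s) => JSON.stringify(s)).join("\n");
  const res = await fetch(EVENTS_URL, {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.TB_TOKEN}` },
    body,
  });
  if (!res.ok) throw new Error(`Ingest failed: ${res.status}`);
}

async function ingestAll(signals: TrendSignal[]): Promise<void> {
  const batches: Promise<void>[] = [];
  for (let i = 0; i < signals.length; i += BATCH_SIZE) {
    batches.push(sendBatch(signals.slice(i, i + BATCH_SIZE)));
  }
  await Promise.all(batches); // batches are independent, so fire them concurrently
}
```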

## Step 3: Pre-aggregate with a velocity rollup

A dashboard refreshing every few seconds across a team of analysts can't hit raw event data. Pre-aggregate into 1-minute buckets using `AggregatingMergeTree`.

The rollup tracks mention count, unique source count (an approximation of reach), and average sentiment per minute:

```sql
-- trend_velocity_mv.datasource
SCHEMA >
    `minute`          DateTime,
    `topic`           String,
    `category`        LowCardinality(String),
    `platform`        LowCardinality(String),
    `region`          LowCardinality(String),
    `mention_count`   SimpleAggregateFunction(sum, UInt64),
    `unique_sources`  AggregateFunction(uniq, String),
    `avg_sentiment`   AggregateFunction(avg, Nullable(Float32))

ENGINE "AggregatingMergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(minute)"
ENGINE_SORTING_KEY "minute, category, platform, region, topic"
```

The Pipe that populates it on every insert:

```sql
-- trend_velocity_mv.pipe
NODE mat
SQL >
  SELECT
    toStartOfMinute(timestamp) AS minute,
    topic,
    category,
    platform,
    region,
    count()                        AS mention_count,
    uniqState(source_id)           AS unique_sources,
    avgState(sentiment_score)      AS avg_sentiment
  FROM trend_signals
  GROUP BY minute, topic, category, platform, region

TYPE MATERIALIZED
DATASOURCE trend_velocity_mv
```

Insertions stay fast because the actual merging of partial aggregates happens in the background. `mention_count` is a `SimpleAggregateFunction(sum, ...)`, so merges sum partial counts instead of keeping an arbitrary row's value. Dashboard queries call `uniqMerge()` and `avgMerge()` to finalize the pre-computed sketches at read time, which is far cheaper than re-scanning raw signals.

## Step 4: Build the trend velocity endpoint

This Pipe is the core of the dashboard. It accepts filter parameters for every dimension your team wants to slice by, and adds a velocity calculation: mentions in the current window vs. the previous window.

```sql
-- trend_velocity.pipe
NODE base
SQL >
  %
  SELECT
    minute,
    topic,
    category,
    platform,
    region,
    sum(mention_count)             AS mentions,
    uniqMerge(unique_sources)      AS reach,
    avgMerge(avg_sentiment)        AS sentiment
  FROM trend_velocity_mv
  WHERE
    minute >= now() - interval {{ Int32(hours, 1) }} hour
    {% if defined(category) %}
    AND category = {{ String(category, '') }}
    {% end %}
    {% if defined(platform) %}
    AND platform = {{ String(platform, '') }}
    {% end %}
    {% if defined(region) %}
    AND region = {{ String(region, '') }}
    {% end %}
    {% if defined(topic) %}
    AND topic ILIKE {{ String(topic, '') }}
    {% end %}
  GROUP BY minute, topic, category, platform, region

NODE velocity
SQL >
  SELECT
    topic,
    category,
    platform,
    region,
    sum(mentions)                                              AS total_mentions,
    max(reach)                                                 AS peak_reach,
    round(avg(sentiment), 2)                                   AS avg_sentiment,
    sumIf(mentions, minute >= now() - interval 30 minute)      AS mentions_last_30m,
    sumIf(mentions, minute < now() - interval 30 minute
                AND minute >= now() - interval 60 minute)      AS mentions_prev_30m,
    round(
      if(mentions_prev_30m > 0,
        (mentions_last_30m - mentions_prev_30m) / mentions_prev_30m * 100,
        0),
      1
    )                                                          AS velocity_pct
  FROM base
  GROUP BY topic, category, platform, region
  ORDER BY total_mentions DESC
  LIMIT {{ Int32(limit, 50) }}

TYPE ENDPOINT
```

`velocity_pct` is the percentage change in mentions over the last 30 minutes vs. the 30 minutes before that. A topic at +340% is accelerating. One at -60% has peaked. Your dashboard can sort by this instead of raw count to surface what's actually breaking now.
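
That sort order translates directly into UI treatment. Here's a small helper for bucketing `velocity_pct` into status labels; the thresholds are arbitrary assumptions you'd tune against your own signal volume:

```typescript
type TrendStatus = "breaking" | "rising" | "steady" | "fading";

// Hypothetical cutoffs: calibrate against your own data
function classifyVelocity(velocityPct: number): TrendStatus {
  if (velocityPct >= 200) return "breaking"; // e.g. +340%: surface at the top
  if (velocityPct >= 50) return "rising";
  if (velocityPct > -30) return "steady";
  return "fading"; // e.g. -60%: already peaked
}
```

The dashboard component below uses the same +50% line to populate its "Accelerating now" section.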

The `%` on the first line of the node enables SQL templating. `String(topic, '')` paired with `ILIKE` lets analysts search for partial topic matches by passing `%` wildcards in the parameter value. Every parameter is type-checked against its declared type before the query runs.

## Step 5: Deploy and test the endpoint

```bash
tb deploy
```

Live at `https://api.tinybird.co/v0/pipes/trend_velocity.json`.

```bash
# Top trends globally, last hour
curl "https://api.tinybird.co/v0/pipes/trend_velocity.json" \
  -H "Authorization: Bearer $TB_READ_TOKEN"

# Fashion trends on TikTok, UK, last 6 hours
curl "https://api.tinybird.co/v0/pipes/trend_velocity.json?category=fashion&platform=tiktok&region=uk&hours=6" \
  -H "Authorization: Bearer $TB_READ_TOKEN"

# Search for a specific topic across all platforms
curl "https://api.tinybird.co/v0/pipes/trend_velocity.json?topic=%25quiet+luxury%25" \
  -H "Authorization: Bearer $TB_READ_TOKEN"
```

Response:

```json
{
  "data": [
    {
      "topic": "quiet luxury",
      "category": "fashion",
      "platform": "tiktok",
      "region": "uk",
      "total_mentions": 48200,
      "peak_reach": 12400,
      "avg_sentiment": 0.68,
      "mentions_last_30m": 8100,
      "mentions_prev_30m": 1920,
      "velocity_pct": 321.9
    }
  ],
  "statistics": {
    "elapsed": 0.063,
    "rows_read": 18240,
    "bytes_read": 364800
  }
}
```

63ms. Including the velocity window calculation across two 30-minute buckets.
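
Tinybird returns that `statistics` block with every response, so you can watch endpoint latency from the client as volume grows. A sketch, with an assumed latency budget you'd tune to your dashboard's refresh rate:

```typescript
// Shape of the statistics object returned alongside the data array
interface TbStatistics {
  elapsed: number; // server-side query time in seconds
  rows_read: number;
  bytes_read: number;
}

const LATENCY_BUDGET_S = 0.15; // assumed budget, not a Tinybird limit

function checkQueryStats(stats: TbStatistics, endpoint: string): void {
  if (stats.elapsed > LATENCY_BUDGET_S) {
    console.warn(
      `${endpoint} took ${(stats.elapsed * 1000).toFixed(0)}ms ` +
        `reading ${stats.rows_read} rows; consider tightening the rollup or filters`
    );
  }
}
```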

## Step 6: Wire the React frontend

A hook that fetches trend data and re-fetches when any filter changes:

```typescript
// hooks/useTrendData.ts
import { useState, useEffect } from "react";

interface TrendFilters {
  hours: number;
  category?: string;
  platform?: string;
  region?: string;
  topic?: string;
}

interface TrendRow {
  topic: string;
  category: string;
  platform: string;
  region: string;
  total_mentions: number;
  peak_reach: number;
  avg_sentiment: number;
  velocity_pct: number;
}

export function useTrendData(filters: TrendFilters, refreshMs = 10000) {
  const [data, setData] = useState<TrendRow[]>([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    const params = new URLSearchParams({ hours: String(filters.hours) });
    if (filters.category) params.set("category", filters.category);
    if (filters.platform) params.set("platform", filters.platform);
    if (filters.region)   params.set("region",   filters.region);
    if (filters.topic)    params.set("topic",    `%${filters.topic}%`);

    const url = `https://api.tinybird.co/v0/pipes/trend_velocity.json?${params}`;
    const headers = { Authorization: `Bearer ${process.env.NEXT_PUBLIC_TB_TOKEN}` };

    const fetchData = async () => {
      const res = await fetch(url, { headers });
      const json = await res.json();
      setData(json.data ?? []);
      setLoading(false);
    };

    fetchData();
    const interval = setInterval(fetchData, refreshMs);
    return () => clearInterval(interval);
  }, [filters.hours, filters.category, filters.platform, filters.region, filters.topic, refreshMs]);

  return { data, loading };
}
```

The dashboard component with all filter controls:

```tsx
// components/TrendsDashboard.tsx
import { useState } from "react";
import { useTrendData } from "../hooks/useTrendData";
// TrendCard and TrendTable are presentational components defined elsewhere in your project
import { TrendCard } from "./TrendCard";
import { TrendTable } from "./TrendTable";

const CATEGORIES = ["music", "fashion", "sports", "film", "politics", "food"];
const PLATFORMS  = ["tiktok", "twitter", "instagram", "reddit", "news"];
const REGIONS    = ["us", "uk", "br", "de", "fr", "jp", "au"];

export function TrendsDashboard() {
  const [hours, setHours]       = useState(1);
  const [category, setCategory] = useState<string | undefined>();
  const [platform, setPlatform] = useState<string | undefined>();
  const [region, setRegion]     = useState<string | undefined>();
  const [topicSearch, setTopicSearch] = useState("");

  const { data, loading } = useTrendData(
    { hours, category, platform, region, topic: topicSearch || undefined },
    10_000
  );

  const accelerating = data.filter((t) => t.velocity_pct > 50);

  return (
    <div className="dashboard">
      <div className="filters">
        <select value={hours} onChange={(e) => setHours(Number(e.target.value))}>
          <option value={1}>Last 1 hour</option>
          <option value={6}>Last 6 hours</option>
          <option value={24}>Last 24 hours</option>
        </select>
        <select value={category ?? ""} onChange={(e) => setCategory(e.target.value || undefined)}>
          <option value="">All categories</option>
          {CATEGORIES.map((c) => <option key={c} value={c}>{c}</option>)}
        </select>
        <select value={platform ?? ""} onChange={(e) => setPlatform(e.target.value || undefined)}>
          <option value="">All platforms</option>
          {PLATFORMS.map((p) => <option key={p} value={p}>{p}</option>)}
        </select>
        <select value={region ?? ""} onChange={(e) => setRegion(e.target.value || undefined)}>
          <option value="">All regions</option>
          {REGIONS.map((r) => <option key={r} value={r}>{r}</option>)}
        </select>
        <input
          placeholder="Search topic..."
          value={topicSearch}
          onChange={(e) => setTopicSearch(e.target.value)}
        />
      </div>

      {loading ? <p>Loading...</p> : (
        <>
          <section>
            <h2>Accelerating now ({accelerating.length})</h2>
            {accelerating.map((t) => (
              <TrendCard key={`${t.topic}-${t.platform}-${t.region}`} trend={t} />
            ))}
          </section>
          <TrendTable rows={data} />
        </>
      )}
    </div>
  );
}
```

Each filter change triggers a new fetch with updated query parameters. No debounce. No query builder. The endpoint type-checks every parameter before it hits the database, so a stray character in the topic search field is treated as a literal search string rather than SQL.

## Tracking trend velocity over time

The velocity endpoint gives you the current acceleration snapshot. To chart how a specific topic's velocity has evolved, add a second endpoint that returns the per-minute timeseries:

```sql
-- topic_timeseries.pipe
NODE timeseries
SQL >
  %
  SELECT
    minute,
    sum(mention_count) AS mentions,
    uniqMerge(unique_sources) AS reach
  FROM trend_velocity_mv
  WHERE
    topic = {{ String(topic, required=True) }}
    AND minute >= now() - interval {{ Int32(hours, 24) }} hour
    {% if defined(platform) %}
    AND platform = {{ String(platform, '') }}
    {% end %}
  GROUP BY minute
  ORDER BY minute ASC

TYPE ENDPOINT
```

`required=True` on `topic` means the endpoint returns a 400 if the parameter is missing. No accidental full-table scans when someone calls the URL without filling in the form.

This endpoint feeds a sparkline or time-series chart in the detail view when an analyst clicks into a specific trend.
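
Fetching it follows the same pattern as the dashboard hook. Here's a sketch that maps the per-minute rows onto chart points and surfaces the 400 from a missing `topic`; the `SparkPoint` shape is a placeholder for whatever your charting library expects:

```typescript
interface SparkPoint {
  x: Date; // minute bucket
  y: number; // mentions in that minute
}

async function fetchTopicTimeseries(topic: string, hours = 24): Promise<SparkPoint[]> {
  const params = new URLSearchParams({ topic, hours: String(hours) });
  const res = await fetch(
    `https://api.tinybird.co/v0/pipes/topic_timeseries.json?${params}`,
    { headers: { Authorization: `Bearer ${process.env.NEXT_PUBLIC_TB_TOKEN}` } }
  );
  if (res.status === 400) throw new Error("topic parameter is required");
  const json = await res.json();
  // Map per-minute rows onto chart points; a missing minute means zero mentions
  return (json.data ?? []).map((row: { minute: string; mentions: number }) => ({
    x: new Date(row.minute),
    y: row.mentions,
  }));
}
```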

## Local development

```bash
tb local start
tb dev
```

`tb dev` watches your `.datasource` and `.pipe` files and hot-reloads on every save. For testing, you can pipe a sample of collected signals through the local Events API and iterate on the velocity SQL without touching production data.
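
A quick way to do that replay is to POST an NDJSON sample file at the local Events API. A sketch, assuming Tinybird Local's default port of 7181 and a hypothetical `TB_LOCAL_TOKEN` variable holding your local token:

```typescript
import { readFileSync } from "node:fs";

// Replay a sample file (already in NDJSON form) against Tinybird Local
async function replaySample(path: string): Promise<void> {
  const ndjson = readFileSync(path, "utf8");
  const res = await fetch("http://localhost:7181/v0/events?name=trend_signals", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.TB_LOCAL_TOKEN}` },
    body: ndjson,
  });
  if (!res.ok) throw new Error(`Local ingest failed: ${res.status}`);
}

replaySample("./samples/trend_signals.ndjson").catch(console.error);
```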

For CI, create a preview workspace per branch:

```bash
tb preview create --name trends-velocity-experiment
```

Point your frontend at the preview workspace while testing changes to the velocity calculation. Token resolution is automatic for GitHub Actions and Vercel. Tear down the preview when the branch merges.
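
To make that switch painless, read the API host from an environment variable so preview and production builds differ only in config. A sketch; the variable names are assumptions:

```typescript
// config.ts -- hypothetical config module
export const TB_HOST =
  process.env.NEXT_PUBLIC_TB_HOST ?? "https://api.tinybird.co";
export const TB_TOKEN = process.env.NEXT_PUBLIC_TB_TOKEN ?? "";

// Build a pipe endpoint URL against whichever workspace the build targets
export function pipeUrl(pipe: string, params: URLSearchParams): string {
  return `${TB_HOST}/v0/pipes/${pipe}.json?${params}`;
}
```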

{% cta
  title="Build your trends dashboard today"
  text="Ingest signals, write SQL, get an API. No infra to manage."
  button={href: "https://cloud.tinybird.co/signup", target: "_blank", text: "Start for free"}
/%}
