Continuous Integration and Deployment (CI/CD)¶
This guide depends on the “Versions” feature currently in beta. Versions are free of charge during beta. Contact us at support@tinybird.co to activate the feature in your Workspace.
Once you connect your Data Project and Workspace through Git, you need to implement a Continuous Integration (CI) and Deployment (CD) workflow for your Tinybird Data Project.
This guide uses a GitHub Action to show how to automate CI/CD pipelines, but you can use any other platform. See ready-to-use CI and CD templates in this repository.
Guide preparation¶
You can follow along using the ecommerce_data_project.
Download the project by running:
git clone https://github.com/tinybirdco/ecommerce_data_project
cd ecommerce_data_project
Then, create a new workspace and authenticate using your user admin token (admin user@domain.com). If you don’t know how to authenticate or use the CLI, check out the CLI Quick Start.
tb auth -i
** List of available regions:
[1] us-east (https://ui.us-east.tinybird.co)
[2] eu (https://ui.tinybird.co)
[0] Cancel
Use region [1]: 2
Copy the admin token from https://ui.tinybird.co/tokens and paste it here:
Finally, push the Data Project to Tinybird:
tb push --push-deps --fixtures
** Processing ./datasources/events.datasource
** Processing ./datasources/top_products_view.datasource
** Processing ./datasources/products.datasource
** Processing ./datasources/current_events.datasource
** Processing ./pipes/events_current_date_pipe.pipe
** Processing ./pipes/top_product_per_day.pipe
** Processing ./endpoints/top_products.pipe
** Processing ./endpoints/sales.pipe
** Processing ./endpoints/top_products_params.pipe
** Processing ./endpoints/top_products_agg.pipe
** Building dependencies
** Running products_join_by_id
** 'products_join_by_id' created
** Running current_events
** 'current_events' created
** Running events
** 'events' created
** Running products
** 'products' created
** Running top_products_view
** 'top_products_view' created
** Running products_join_by_id_pipe
** Materialized pipe 'products_join_by_id_pipe' using the Data Source 'products_join_by_id'
** 'products_join_by_id_pipe' created
** Running top_product_per_day
** Materialized pipe 'top_product_per_day' using the Data Source 'top_products_view'
** 'top_product_per_day' created
** Running events_current_date_pipe
** Materialized pipe 'events_current_date_pipe' using the Data Source 'current_events'
** 'events_current_date_pipe' created
** Running sales
** => Test endpoint at https://api.tinybird.co/v0/pipes/sales.json
** 'sales' created
** Running top_products_agg
** => Test endpoint at https://api.tinybird.co/v0/pipes/top_products_agg.json
** 'top_products_agg' created
** Running top_products_params
** => Test endpoint at https://api.tinybird.co/v0/pipes/top_products_params.json
** 'top_products_params' created
** Running top_products
** => Test endpoint at https://api.tinybird.co/v0/pipes/top_products.json
** 'top_products' created
** Pushing fixtures
** Warning: datasources/fixtures/products_join_by_id.ndjson file not found
** Warning: datasources/fixtures/current_events.ndjson file not found
** Checking ./datasources/events.datasource (appending 544.0 b)
** OK
** Checking ./datasources/products.datasource (appending 134.0 b)
** OK
** Warning: datasources/fixtures/top_products_view.ndjson file not found
Connecting your Workspace to Git¶
Once you have the Data Project locally connected to a Git repository and a working Workspace in Tinybird, you can link both by running the command below from your Data Project root directory.
tb init --git
** Initializing releases based on git for Workspace 'ecommerce_data_project_01'
** Checking diffs between remote Workspace and local. Hint: use 'tb diff' to check if your Data Project and Workspace synced
** No diffs detected for 'ecommerce_data_project_01'
** Workspace 'ecommerce_data_project_01' release initialized to commit '28df60db85ded29e005be55c61eef05fd92f69e4'
Do you want to generate ci/cd config files? [Y/n]: Y
** List of available providers:
[1] GitHub
[2] GitLab
[0] Cancel
Use provider [1]: 1
** File .github/workflows/tinybird_ci.yml generated for CI/CD
** File .github/workflows/tinybird_cd.yml generated for CI/CD
** Warning: Set $ADMIN_TOKEN in GitHub secrets. Copy the admin token from https://api.tinybird.co/26d573e1-8240-4327-a833-d9920dfd7f1c/tokens
** GitHub CI/CD config files generated
** Read this guide to learn how to run CI/CD pipelines: https://www.tinybird.co/docs/guides/continuous-integration.html
** - /datasources already exists, skipping
** - /datasources/fixtures already exists, skipping
** - /endpoints already exists, skipping
** - /pipes already exists, skipping
** - /tests already exists, skipping
** - /scripts already exists, skipping
** - /deploy created
** '.tinyenv' created
The command above:
Checks there are no differences between your local Data Project and your remote Workspace.
Saves a reference to the current Git repository commit in the Workspace. This commit reference is later used to diff Workspace resources against the resources in a branch, easing deployment.
Lets you opt in to creating CI/CD pipelines for GitHub or GitLab. If you opt in, you’ll have to create either a secret or an environment variable (depending on your Git provider) called ADMIN_TOKEN with your user admin token (admin user@domain.com) as its value.
Once the command succeeds, just push the CI/CD templates to your Git repository and you are ready to go. Of course, you can tweak and customize the actions as needed for your use case.
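For example, assuming your default branch is main:
# Commit the generated workflow files and push them to your Git provider
git add .github/workflows/tinybird_ci.yml .github/workflows/tinybird_cd.yml .tinyenv
git commit -m "Add Tinybird CI/CD workflows"
git push origin main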
Tinybird provides some basic CI/CD templates in this Git repository: https://github.com/tinybirdco/ci. They encode the basics of running CI and CD pipelines. Contact us at support@tinybird.co if you have questions about any specific requirement.
Troubleshooting the Git initialization¶
When your Data Project and your remote Tinybird Workspace have the same resources, tb init --git is quite straightforward and lets you easily start up a CI/CD pipeline.
When both projects are not in sync, you might experience one of these three scenarios:
There are remote files in the Workspace not present in your Data Project. If that’s the case, remove them from your Workspace (they are probably unnecessary), or run tb pull --match "<resource_name>" --auto to download them locally and push them to the Git repository before continuing (see the example after this list). You can also bypass remote resources by running the init command like this: tb init --git --ignore-remote
There are local files in the Data Project not present in your Workspace. Either remove them from the Data Project or tb push them to the Workspace and re-run the init command.
There are diffs between the local files in the Data Project and the ones in your Workspace. If that’s the case, you have to decide which version is the source of truth. If it’s the resources in the Workspace, run tb pull --auto --force to overwrite your local files. If it’s your local files, run tb push --force file by file so the Workspace is synchronized with your Data Project in Git.
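For instance, resolving the first scenario for a hypothetical remote-only Data Source could look like this:
# Download the remote resource, commit it, and retry the initialization
tb pull --match "my_remote_datasource" --auto
git add datasources/my_remote_datasource.datasource
git commit -m "Sync my_remote_datasource from the Workspace"
tb init --git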
Once your Workspace is linked to your Data Project via a Git commit, you are ready to start running CI/CD pipelines. Here’s how they look in more detail.
You can use tb diff to check if the Data Project is synced. It diffs local Datafiles against the corresponding remote files in the Workspace.
You can make your shell PROMPT print your current Workspace by following this quick guide.
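A rough sketch of the idea, assuming a bash shell and that the CLI stores its configuration (including the Workspace name) as JSON in ~/.tinyb; verify both assumptions against the linked guide:
# Print the current Workspace name from the local Tinybird CLI config, if present
tb_workspace() {
  [ -f "$HOME/.tinyb" ] && python3 -c "import json; print(json.load(open('$HOME/.tinyb')).get('name', ''))" 2>/dev/null
}
export PS1='[$(tb_workspace)] \w \$ '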
How Continuous Integration works¶
When you iterate on your Data Projects, you most likely want to validate your Endpoints. In the same way that you write integration and acceptance tests for source code in a software project, you can write tests for your Endpoints that run on each Pull or Merge Request.
Continuous Integration helps with:
Linting: Datafiles syntax and formatting
Correctness: Making sure you can push your changes to a Tinybird Workspace
Quality: Running fixture tests and/or data quality tests to validate the changes in the Pull Request
Regression: Running automatic regression tests to validate Endpoint performance (both in processed bytes and time spent) and data quality.
The following section uses the generated CI template, GitHub Actions, and the Tinybird CLI to illustrate how you can test your Endpoints on each new commit to a Pull Request.
The GitHub Action¶
As mentioned above, just run tb init --git and choose GitHub as the provider to configure the CI GitHub Action.
This is how it looks:
name: Tinybird - CI Workflow

on:
  workflow_dispatch:
  pull_request:
    branches:
      - main
      - master
    types: [opened, reopened, labeled, unlabeled, synchronize]

concurrency: ${{ github.workflow }}-${{ github.event.pull_request.number }}

jobs:
  ci: # ci using Environments from Workspace 'ecommerce_data_project_01'
    uses: tinybirdco/ci/.github/workflows/ci.yml@main
    with:
      data_project_dir: .
    secrets:
      admin_token: ${{ secrets.ADMIN_TOKEN }} # set admin token associated to an account in GitHub secrets
      tb_host: https://api.tinybird.co
Now you need to add a new ADMIN_TOKEN secret with the Workspace admin token to the Settings of the GitHub repository.
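You can add it from the repository Settings UI or, if you use the GitHub CLI, with:
# Set the ADMIN_TOKEN secret on the current repository (paste the admin token when prompted)
gh secret set ADMIN_TOKEN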
Let’s review the CI workflow generated:
1. Trigger the CI Workflow¶
on:
  workflow_dispatch:
  pull_request:
    branches:
      - main
    types: [opened, reopened, labeled, unlabeled, synchronize]
The CI workflow is triggered when a new Pull Request is opened, reopened, or synchronized, or when its labels are updated. The base branch has to be main. Of course, you can tweak this workflow to your needs.
2. CI Job configuration¶
ci: # ci using Environments from Workspace 'ecommerce_data_project_01'
  uses: tinybirdco/ci/.github/workflows/ci.yml@main
  with:
    data_project_dir: .
  secrets:
    admin_token: ${{ secrets.ADMIN_TOKEN }} # set admin token associated to an account in GitHub secrets
    tb_host: https://api.tinybird.co
The ci job has a comment with a reference to the remote Workspace, in this case ecommerce_data_project_01.
If your Data Project directory is not in the root of the Git repository, you can change the data_project_dir variable.
About secrets:
tb_host: The URL of the region you want to use. By default, it’s filled with the region of the Workspace you had on tb init --git.
admin_token: The Auth Token that grants all permissions for a specific Workspace. You can find more information here or in the Auth Tokens section of the Tinybird UI.
The CI Workflow¶
The CI workflow is based on this generic CI template that performs the following steps:
0. Workflow configuration¶
Sets the default working-directory to the data_project_dir variable defined, checks out the Pull Request head commit, and installs Python 3.8.
defaults:
  run:
    working-directory: ${{ inputs.data_project_dir }}
steps:
  - uses: actions/checkout@master
    with:
      fetch-depth: 300
      ref: ${{ github.event.pull_request.head.sha }}
  - uses: actions/setup-python@v3
    with:
      python-version: "3.8"
      architecture: "x64"
  - name: Set environment variables
    run: |
      _ENV_FLAGS="${ENV_FLAGS:=--last-partition --wait}"
      _NORMALIZED_ENV_NAME=$(echo $DATA_PROJECT_DIR | rev | cut -d "/" -f 1 | rev | tr '.-' '_')
      GIT_BRANCH=${GITHUB_HEAD_REF}
      source .tinyenv
      echo "GIT_BRANCH=$GIT_BRANCH" >> $GITHUB_ENV
1. Install the Tinybird CLI¶
The Tinybird CLI is required to interact with your Workspace, create a test Environment, and run the tests. Additionally, you can do some pre-flight checks to validate the syntax of the Datafiles.
Note that you could run this workflow locally by having a local Data Project and the CLI authenticated to your Tinybird Workspace.
- name: Install Tinybird CLI
  run: pip install tinybird-cli
- name: Tinybird version
  run: tb --version
2. Check the Data Project syntax¶
Check the syntax of the Datafiles:
- name: Check all the datafiles syntax
  run: tb check
3. Create a new Tinybird Environment to deploy changes and run the tests¶
An Environment is an ephemeral snapshot of the resources in your Workspace. It’s designed to be temporary and disposable so that you can test changes before deploying them to your Workspace.
An Environment is created on each CI job run; in this case, GITHUB_RUN_ID is used as a unique identifier for the Tinybird Environment name. This way you can run multiple tests in parallel. Once the tests are finished, the Environment is deleted.
Environments are created using the tb env create command.
- name: Create new test Environment with data
  run: |
    tb \
      --host ${{ secrets.tb_host }} \
      --token ${{ secrets.admin_token }} \
      env create tmp_ci_${_NORMALIZED_ENV_NAME}_${GITHUB_RUN_ID} \
      ${_ENV_FLAGS}
You can configure which data to attach to the Environment with the _ENV_FLAGS variable. Use the --last-partition --wait flags to attach the most recently ingested data from the main Environment; this way you can run the tests against the same data as in production. Alternatively, you can leave it empty and use fixtures.
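You can also try this locally with the same commands the workflow uses; for example (the Environment name is arbitrary):
# Create a disposable test Environment attached to the latest production partition
tb env create my_test_env --last-partition --wait
# ...push changes and run tests against it...
# Remove it when you are done
tb env rm my_test_env --yes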
4. Push the changes to the Tinybird Environment¶
- name: List changes with main
  run: tb diff --main --no-verbose
- name: Push changes to the test Environment
  run: |
    source .tinyenv
    CI_DEPLOY_FILE=./deploy/${VERSION}/ci-deploy.sh
    if [ -f "$CI_DEPLOY_FILE" ]; then
      ./deploy/${VERSION}/ci-deploy.sh
    else
      tb deploy --populate --fixtures --wait
    fi
- name: List changes with test Environment (should be empty)
  run: tb diff
You can deploy the changes in your current Pull Request to the previously created test Environment in two ways:
By default with tb deploy: The deploy command diffs the resources between your current Pull Request branch and the base branch head commit, and pushes the changed resources to the Tinybird Workspace. This is the default behaviour for all Pull Requests, and you don’t have to do anything.
Custom deploy command: Alternatively, for complex changes you can decide how to deploy the changes to the test Environment yourself. For this to work, place an executable shell script in deploy/$VERSION/ci-deploy.sh with the CLI commands to push the changes (see the sketch below). Learn more about custom deployment strategies.
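As an illustration, a hypothetical deploy/0.0.1/ci-deploy.sh for the example project might push the changed files explicitly; the file paths and flags below are assumptions based on the CLI commands used elsewhere in this guide:
#!/bin/bash
# Hypothetical custom CI deploy script: push changed resources explicitly
# instead of relying on the default diff-based tb deploy.
set -euxo pipefail

# Re-create the Materialized Pipe and repopulate its target Data Source
tb push pipes/top_product_per_day.pipe --force --populate --wait

# Push the updated Endpoint
tb push endpoints/top_products.pipe --force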
Note the $VERSION variable is defined in the .tinyenv file in your Data Project directory. By default, it is 0.0.0, but you can increment it as needed.
Before and after the Push changes to the test Environment step, a tb diff command is run so you can check that the changes were correctly pushed to the test Environment.
5. Run the tests¶
- name: Run fixture tests
  run: |
    if [ -f ./scripts/exec_test.sh ]; then
      ./scripts/exec_test.sh
    fi
- name: Run data quality tests
  run: |
    tb test run -v
- name: Get regression labels
  id: regression_labels
  uses: SamirMarin/get-labels-action@v0
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    label_key: regression
- name: Run pipe regression tests
  run: |
    echo ${{ steps.regression_labels.outputs.labels }}
    REGRESSION_LABELS=$(echo "${{ steps.regression_labels.outputs.labels }}" | awk -F, '{for (i=1; i<=NF; i++) if ($i ~ /^--/) print $i}' ORS=',' | sed 's/,$//')
    echo ${REGRESSION_LABELS}
    tb env regression-tests coverage --wait $(echo ${REGRESSION_LABELS} | tr , ' ')
Finally, you can run your test suite. There are three types of tests:
Data fixture tests: Test very concrete business logic based on fixture data (see datasources/fixtures). A sketch of such a test script follows this list.
Data quality tests: Test very concrete data scenarios.
Regression tests: Test that requests to your API Endpoints keep working as expected. These tests only work if you attached production data (--last-partition) when creating the test Environment.
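For reference, a fixture test is just an executable script that exits non-zero on failure. A minimal hypothetical scripts/exec_test.sh, assuming the CLI is authenticated against the test Environment and that tb sql supports JSON output:
#!/bin/bash
# Hypothetical fixture test: assert that the fixture data reached the events Data Source.
set -euo pipefail

COUNT=$(tb sql "SELECT count() AS c FROM events" --format json \
  | python3 -c "import json,sys; print(json.load(sys.stdin)['data'][0]['c'])")

if [ "$COUNT" -eq 0 ]; then
  echo "Expected fixture rows in events, got none"
  exit 1
fi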
Sometimes the regression tests are expected to fail because you made a change that is not backwards compatible. You can skip some of the asserts for the regression-tests command by labelling the Pull Request with the supported flags of the tb env regression-tests command (see the labelling example after this list):
--no-assert-result: if you expect the endpoint output to differ from the current version
--no-assert-result-no-error: if you expect errors from the endpoint
--no-assert-result-rows-count: if you expect the number of elements in the endpoint output to differ from the current version
--assert-result-ignore-order: if you expect the endpoint output to return the same elements but in a different order
--assert-time-increase-percentage -1: if you expect the endpoint execution time to increase more than 25% over the current version
--assert-bytes-read-increase-percentage -1: if you expect the endpoint bytes read to increase more than 25% over the current version
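For instance, with the GitHub CLI you could apply one of these flags as a Pull Request label (the PR number 42 is illustrative):
# Label the PR so CI skips the bytes-read assert on the next run
gh pr edit 42 --add-label "--assert-bytes-read-increase-percentage -1"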
Please refer to the Implementing test strategies guide to learn more about testing Tinybird Data Projects.
6. Delete the Environment¶
Once the tests are finished, the Tinybird test Environment is cleaned up in a separate step.
cleanup:
  runs-on: ubuntu-latest
  if: ${{ always() }}
  needs: [create_workspace]
  steps:
    - uses: actions/checkout@master
    - uses: actions/setup-python@v3
      with:
        python-version: "3.8"
        architecture: "x64"
    - name: Install Tinybird CLI
      run: pip install tinybird-cli
    - name: Tinybird version
      run: tb --version
    - name: Drop test Environment
      run: |
        tb \
          --host ${{ secrets.tb_host }} \
          --token ${{ secrets.admin_token }} \
          env rm tmp_ci_${_NORMALIZED_ENV_NAME}_${GITHUB_RUN_ID} \
          --yes
There’s a soft limit on the number of Environments you can create per workspace. Contact us at support@tinybird.co if you need to increase this limit.
A real world example¶
Let’s see how to use the Tinybird CLI to automate a CI workflow for a real-world example using the ecommerce Data Project.
In this Pull Request, a WHERE filter is removed from a Materialized View, and a WHERE action = 'buy' filter is added directly to the Endpoints instead.
This is what we can see in the CI job:
All data files in the Pull Request are validated first using tb check.
A new Environment is created in Tinybird, attaching data from the main Environment for the year 2020, which was the last partition that received data.
There are 5 Datafiles that need to be pushed because they have changed.
Datafiles are pushed in order, including a populate operation, leaving Endpoints for the end.
After that, there are no changes left to push, so the Tinybird test Environment contains the changes in the Pull Request.
Regression tests FAIL because the top_products Pipe endpoint processes 202% more data in this version than in the currently published version. The reason is that we’ve removed the filter in the Materialized View, so more data is being scanned.
That’s expected, so we add the --assert-bytes-read-increase-percentage -1 label to the PR to skip the bytes-read assert. See the regression tests configuration.
Finally, the Pull Request is merged and the changes are pushed to the main Environment in the same way they were pushed to the test Environment.
How Continuous Deployment works¶
Once a Pull Request passes CI and has been reviewed by a peer, it’s time to merge it to your main Git branch.
Ideally, changes should then be automatically deployed to your main Environment; this is usually called Continuous Deployment (or Continuous Delivery).
This workflow comes with several challenges, most of them related to handling the current state of your Tinybird Workspace. For instance:
As opposed to deploying a stateless application, deployments to a Workspace are incremental, building on the resources already in the Workspace.
Another flavour of state handling is resources or operations that are created or run programmatically: populate operations, permission handling, etc.
Also, deployments are performed to the same main Environment, so you need to implement some policy to avoid collisions from different Pull Requests deploying at the same time, regressions, etc.
As deployments rely on Git commits to push resources, your branches must be up to date before merging. Use your Git provider to enforce this.
Consider the CD workflow explained below a guide that covers the most common cases; more complex deployments will require knowledge and expertise from the team deploying the change.
Continuous Deployment helps with:
Correctness: Making sure you can push your changes to a Tinybird Workspace
Deployment: Deploying the changes to the Workspace automatically
Data Operations: Centralizing data operations required after resources have been pushed to the Workspace
The following section uses the generated CD template, GitHub Actions, and the Tinybird CLI to illustrate how you can deploy the changes in a Pull Request after merging it.
The GitHub Action¶
name: Tinybird - CD Workflow

on:
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  cd: # deploy changes to workspace 'ecommerce_data_project_01'
    uses: tinybirdco/ci/.github/workflows/cd.yml@main
    with:
      data_project_dir: .
    secrets:
      admin_token: ${{ secrets.ADMIN_TOKEN }} # set admin token associated to an account in GitHub secrets
      tb_host: https://api.tinybird.co
On merge to main, this workflow deploys to the Workspace defined by the ADMIN_TOKEN secret set in the Settings of the GitHub repository.
If your Data Project directory is not in the root of the Git repository, you can change the data_project_dir variable.
About secrets:
tb_host: The URL of the region you want to use. By default, it’s filled with the region of the Workspace you had on tb init --git.
admin_token: The Auth Token that grants all permissions for a specific Workspace. You can find more information here or in the Auth Tokens section of the Tinybird UI.
Let’s review the CD workflow generated:
The CD Workflow¶
The CD workflow is based on this generic CD template that performs the following steps:
0. Workflow configuration¶
The workflow configuration is equivalent to the CI workflow’s.
1. Install the Tinybird CLI¶
- name: Install Tinybird CLI
  run: pip install tinybird-cli
- name: Tinybird version
  run: tb --version
- name: Check all the data files syntax
  run: tb check
The Tinybird CLI is required to interact with your Workspace. Additionally, you can do some pre-flight checks to validate the syntax of the Datafiles.
Note that you could run this workflow locally by having a local Data Project and the CLI authenticated to your Tinybird Workspace.
This step is equivalent to the CI workflow.
2. Create a new Tinybird Environment to deploy changes¶
- name: Create new test Environment with data
  run: |
    tb \
      --host $TB_HOST \
      --token $ADMIN_TOKEN \
      env create tmp_cd_${_NORMALIZED_ENV_NAME}_${GITHUB_RUN_ID} \
      ${_ENV_FLAGS}
- name: List changes with Workspace (should be empty)
  run: tb diff --main --no-verbose
- name: Push changes to the test Environment
  run: |
    source .tinyenv
    CI_DEPLOY_FILE=./deploy/${VERSION}/ci-deploy.sh
    if [ -f "$CI_DEPLOY_FILE" ]; then
      ./deploy/${VERSION}/ci-deploy.sh
    else
      tb deploy --populate --fixtures --wait
    fi
- name: List changes with test Environment (should be empty)
  run: tb diff
Although this was already done in CI, it is a pre-flight step to check that changes can actually be deployed.
3. Deploy changes¶
- name: Deploy changes to the main Environment
  run: |
    if ${{ inputs.tb_deploy }}; then
      tb env deploy --semver ${VERSION} --wait
      tb release ls
    else
      CD_DEPLOY_FILE=./deploy/${VERSION}/cd-deploy.sh
      if [ ! -f "$CD_DEPLOY_FILE" ]; then
        tb deploy
      fi
    fi
This is the step that deploys the changes to the main Environment. There are two options for deployment:
Use tb deploy: If you already pushed your changes to the test Environment and they were correctly deployed, you can trust the default tb deploy command. Note that in this case it performs no data operation (like populates); that comes in the next step.
Use a custom shell script: In the same way you could use a custom executable shell script to deploy the changes to the test Environment, you can have a file in deploy/$VERSION/cd-deploy.sh with the CLI commands required to deploy the changes to the main Environment.
Note the $VERSION variable is defined in the .tinyenv file in your Data Project directory. By default, it is 0.0.0, but you can increment it as needed.
4. Post-deployment operations¶
- name: run post CD deploy commands
  run: |
    source .tinyenv
    CD_DEPLOY_FILE=./deploy/${VERSION}/cd-deploy.sh
    if [ -f "$CD_DEPLOY_FILE" ]; then
      ./deploy/${VERSION}/cd-deploy.sh
    fi
Once the changes have been deployed to the Workspace, you can run some post-deployment actions, for instance, if you had to migrate data between different versions of a Data Source or perform some specific populate operation.
Just create a deploy/${VERSION}/cd-deploy.sh script as described in the previous section with the CLI commands to be executed.
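For example, a hypothetical deploy/0.0.1/cd-deploy.sh that repopulates a Materialized View after deployment (the Pipe name comes from the example project; check the CLI reference for the exact populate syntax):
#!/bin/bash
# Hypothetical post-deployment script: repopulate the Materialized View
# once the new version of the Pipe is live in the main Environment.
set -euxo pipefail

tb pipe populate top_product_per_day --wait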
Testing strategies¶
There are different strategies to test your Data Project. See the implementing test strategies guide to learn more about them.
Configure the Continuous Integration regression tests¶
Regression tests for Pipe endpoints are automated tests designed to ensure the endpoints keep working correctly after any code change or update.
For each Endpoint, actual production requests are selected according to the chosen testing strategy: coverage, last, or manual. The command then executes these requests both in the production Workspace and in the test Environment, and performs several assertions on the payload and the performance (response times and bytes read) of each request.
tb env regression-tests --wait
The default command executes the coverage testing strategy and performs all the assertions. The command also allows you to select a test strategy and customize assertions with options.
tb env regression-tests last pipe_top_.* --limit 10 --no-assert-result --wait
You can also configure the tests using a YAML file:
tb env regression-tests -f file.yaml --wait
By specifying a YAML file, we can define a custom set of tests, each with its own configuration options and assertion criteria. This allows us to fine-tune the testing process.
- pipe: '.*' # regular expression that selects all endpoints in the workspace
  tests: # list of tests to run for this pipe
    - [coverage|last|manual]: # testing strategy to use (coverage, last, or manual)
        config: # configuration options for this strategy
          assert_result: bool = True # whether to perform an assertion on the results returned by the endpoint
          assert_result_no_error: bool = True # whether to verify that the endpoint does not return errors
          assert_result_rows_count: bool = True # whether to verify that the correct number of elements are returned in the results
          assert_result_ignore_order: bool = False # whether to ignore the order of the elements in the results
          assert_time_increase_percentage: int = 25 # allowed percentage increase in endpoint response time. use -1 to disable assert
          assert_bytes_read_increase_percentage: int = 25 # allowed percentage increase in the amount of bytes read by the endpoint. use -1 to disable assert
          failfast: bool = False # whether the test should stop at the first error encountered
Note that the order of preference for the configuration options is from bottom to top, so the configuration options specified for a specific Pipe will take precedence over the options specified at higher levels of the configuration file.
There are three types of tests available for performing regression tests on pipe endpoints: coverage, last, and manual.
The coverage tests use requests that combine all available parameters of the Endpoint.
The last test executes the last request made to the Endpoint. The available parameters for this type of test are:
limit: an integer indicating the maximum number of results to obtain from the endpoint. Default value is 10.
The manual test allows custom requests to be made to the endpoint, defined in the YAML configuration file. The available parameters for this type of test are:
params: a list of dictionaries containing the parameters to be sent in each request. Each dictionary corresponds to a single request and contains the endpoint parameters as key-value pairs.
Here’s an example YAML file with two entries whose regular expressions both match a Pipe, where one overrides the configuration of the other:
- pipe: 'top_.*'
  tests:
    - coverage:
        config: # default config but reducing thresholds from the default 25 and expecting a different order in the response payload
          assert_time_increase_percentage: 15
          assert_bytes_read_increase_percentage: 15
          assert_result_ignore_order: true
    - last:
        config: # default config but not asserting performance and failing at the first error
          assert_time_increase_percentage: -1
          assert_bytes_read_increase_percentage: -1
          failfast: true
        limit: 5
- pipe: 'top_pages'
  tests:
    - coverage:
    - manual:
        config:
          params:
            - {param1: value1, param2: value2}
            - {param1: value3, param2: value4}
# Override config for top_pages: execute coverage with the default config and two manual requests