CLI command reference

tb auth

Configure your Tinybird authentication.

auth commands

  • info OPTIONS: Get information about the authentication that is currently being used
  • ls OPTIONS: List the available regions to authenticate in
  • use OPTIONS REGION_NAME_OR_HOST_OR_ID: Switch to a different region. You can pass the region name, the region host URL, or the region index after listing the available regions with tb auth ls

The previous commands accept the following options:

  • --token INTEGER: Use the auth token; defaults to the TB_TOKEN environment variable, then to the .tinyb file
  • --host TEXT: Set a custom host if it differs from https://api.tinybird.co
  • --region TEXT: Set the region. Run 'tb auth ls' to show the available regions
  • --connector [bigquery|snowflake]: Set credentials for one of the supported connectors
  • -i, --interactive: Show the available regions and select where to authenticate to
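
For example, you can inspect your current authentication, list regions, and switch between them (the region name below is illustrative):

    tb auth info
    tb auth ls
    tb auth use eu_shared   # a region name, host URL, or index from tb auth ls all work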

tb branch

Manage your Workspace branches.

Branch commands


create BRANCH_NAME

Create a new Branch in the current 'main' Workspace.

  • --last-partition: Attach the last modified partition from 'main' to the new Branch
  • -i, --ignore-datasource DATA_SOURCE_NAME: Ignore the specified Data Source partitions
  • --wait / --no-wait: Wait for Branch jobs to finish, showing a progress bar. Disabled by default
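
For example, a Branch seeded with the most recent partition, skipping one large Data Source (all names are hypothetical):

    tb branch create my_feature --last-partition --ignore-datasource big_events --wait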

current

Show the Branch you're currently authenticated to.

data

Perform a data branch operation to bring data into the current Branch.

  • --last-partition: Attach the last modified partition from 'main' to the current Branch
  • -i, --ignore-datasource DATA_SOURCE_NAME: Ignore the specified Data Source partitions
  • --wait / --no-wait: Wait for Branch jobs to finish, showing a progress bar. Disabled by default

datasource copy DATA_SOURCE_NAME

Copy a Data Source from Main.

  • --sql SQL: Freeform SQL query to select what is copied from Main into the Branch Data Source
  • --sql-from-main: SQL query selecting all from the same Data Source in Main
  • --wait / --no-wait: Wait for the copy job to finish. Disabled by default

ls

List all the Branches available.

regression-tests

Regression test commands.

  • -f, --filename PATH: The yaml file with the regression-tests definition
  • --skip-regression-tests / --no-skip-regression-tests: Flag to skip execution of regression tests. This is handy for CI environments where regression might be flaky
  • --main: Run regression tests in the main Branch. For this flag to work, all the resources in the Branch Pipe Endpoints need to exist in the main Branch
  • --wait / --no-wait: Wait for the regression job to finish, showing a progress bar. Disabled by default

regression-tests coverage PIPE_NAME

Run regression tests using coverage requests for Branch vs Main Workspace. It creates a regression-tests job. The argument supports regular expressions; '.*' is used if no Pipe name is provided.

  • --assert-result / --no-assert-result: Whether to perform an assertion on the results returned by the Endpoint. Enabled by default. Use --no-assert-result if you expect the Endpoint output to differ from the current version
  • --assert-result-no-error / --no-assert-result-no-error: Whether to verify that the Endpoint doesn't return errors. Enabled by default. Use --no-assert-result-no-error if you expect errors from the Endpoint
  • --assert-result-rows-count / --no-assert-result-rows-count: Whether to verify that the correct number of elements are returned in the results. Enabled by default. Use --no-assert-result-rows-count if you expect the number of elements in the Endpoint output to differ from the current version
  • --assert-result-ignore-order / --no-assert-result-ignore-order: Whether to ignore the order of the elements in the results. Disabled by default. Use --assert-result-ignore-order if you expect the Endpoint output to return the same elements in a different order
  • --assert-time-increase-percentage INTEGER: Allowed percentage increase in Endpoint response time. Default value is 25%. Use -1 to disable the assert
  • --assert-bytes-read-increase-percentage INTEGER: Allowed percentage increase in the amount of bytes read by the Endpoint. Default value is 25%. Use -1 to disable the assert
  • --assert-max-time FLOAT: Max time allowed for the Endpoint response time. If the response time is lower than this value, --assert-time-increase-percentage is not taken into account
  • --ff, --failfast: When set, the checker exits as soon as one test fails
  • --wait: Wait for the regression job to finish, showing a progress bar. Disabled by default
  • --skip-regression-tests / --no-skip-regression-tests: Flag to skip execution of regression tests. This is handy for CI environments where regression might be flaky
  • --main: Run regression tests in the main Branch. For this flag to work, all the resources in the Branch Pipe Endpoints need to exist in the main Branch
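
As an illustration, a CI run that tolerates slower responses but still asserts on results (the Pipe name pattern is hypothetical):

    tb branch regression-tests coverage 'top_.*' --assert-time-increase-percentage=-1 --wait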

regression-tests last PIPE_NAME

Run regression tests using the last requests for Branch vs Main Workspace. It creates a regression-tests job. The argument supports regular expressions; '.*' is used if no Pipe name is provided.

  • --assert-result / --no-assert-result: Whether to perform an assertion on the results returned by the Endpoint. Enabled by default. Use --no-assert-result if you expect the Endpoint output to differ from the current version
  • --assert-result-no-error / --no-assert-result-no-error: Whether to verify that the Endpoint doesn't return errors. Enabled by default. Use --no-assert-result-no-error if you expect errors from the Endpoint
  • --assert-result-rows-count / --no-assert-result-rows-count: Whether to verify that the correct number of elements are returned in the results. Enabled by default. Use --no-assert-result-rows-count if you expect the number of elements in the Endpoint output to differ from the current version
  • --assert-result-ignore-order / --no-assert-result-ignore-order: Whether to ignore the order of the elements in the results. Disabled by default. Use --assert-result-ignore-order if you expect the Endpoint output to return the same elements in a different order
  • --assert-time-increase-percentage INTEGER: Allowed percentage increase in Endpoint response time. Default value is 25%. Use -1 to disable the assert
  • --assert-bytes-read-increase-percentage INTEGER: Allowed percentage increase in the amount of bytes read by the Endpoint. Default value is 25%. Use -1 to disable the assert
  • --assert-max-time FLOAT: Max time allowed for the Endpoint response time. If the response time is lower than this value, --assert-time-increase-percentage is not taken into account
  • --ff, --failfast: When set, the checker exits as soon as one test fails
  • --wait: Wait for the regression job to finish, showing a progress bar. Disabled by default
  • --skip-regression-tests / --no-skip-regression-tests: Flag to skip execution of regression tests. This is handy for CI environments where regression might be flaky

regression-tests manual PIPE_NAME

Run regression tests using manually defined requests for Branch vs Main Workspace. It creates a regression-tests job. The argument supports regular expressions; '.*' is used if no Pipe name is provided.

  • --assert-result / --no-assert-result: Whether to perform an assertion on the results returned by the Endpoint. Enabled by default. Use --no-assert-result if you expect the Endpoint output to differ from the current version
  • --assert-result-no-error / --no-assert-result-no-error: Whether to verify that the Endpoint doesn't return errors. Enabled by default. Use --no-assert-result-no-error if you expect errors from the Endpoint
  • --assert-result-rows-count / --no-assert-result-rows-count: Whether to verify that the correct number of elements are returned in the results. Enabled by default. Use --no-assert-result-rows-count if you expect the number of elements in the Endpoint output to differ from the current version
  • --assert-result-ignore-order / --no-assert-result-ignore-order: Whether to ignore the order of the elements in the results. Disabled by default. Use --assert-result-ignore-order if you expect the Endpoint output to return the same elements in a different order
  • --assert-time-increase-percentage INTEGER: Allowed percentage increase in Endpoint response time. Default value is 25%. Use -1 to disable the assert
  • --assert-bytes-read-increase-percentage INTEGER: Allowed percentage increase in the amount of bytes read by the Endpoint. Default value is 25%. Use -1 to disable the assert
  • --assert-max-time FLOAT: Max time allowed for the Endpoint response time. If the response time is lower than this value, --assert-time-increase-percentage is not taken into account
  • --ff, --failfast: When set, the checker exits as soon as one test fails
  • --wait: Wait for the regression job to finish, showing a progress bar. Disabled by default
  • --skip-regression-tests / --no-skip-regression-tests: Flag to skip execution of regression tests. This is handy for CI environments where regression might be flaky

rm [BRANCH_NAME_OR_ID]

Remove a Branch from the Workspace (not Main). It can't be recovered.

  • --yes: Do not ask for confirmation

use [BRANCH_NAME_OR_ID]

Switch to another Branch.
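
For example, to switch into a Branch and confirm where you are (the Branch name is hypothetical):

    tb branch ls
    tb branch use my_feature
    tb branch current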

tb check

Check file syntax.

It only allows one option, --debug, which prints the internal representation.
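
For example, assuming the command is given the path to a local datafile (the path is illustrative):

    tb check datasources/events.datasource
    tb check datasources/events.datasource --debug   # also print the internal representation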

tb datasource

Data Sources commands.

analyze OPTIONS URL_OR_FILE

Analyze a URL or a file before creating a new Data Source.
append OPTIONS DATASOURCE_NAME URL

Append data to an existing Data Source from a URL, a local file, or a connector.

connect OPTIONS CONNECTION DATASOURCE_NAME

Create a new Data Source from an existing connection.

  • --kafka-topic TEXT: For Kafka connections: topic
  • --kafka-group TEXT: For Kafka connections: group ID
  • --kafka-auto-offset-reset [latest|earliest]: Kafka auto.offset.reset config. Valid values are: ["latest", "earliest"]
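
As a sketch, connecting an existing Kafka connection to a new Data Source (connection, Data Source, topic, and group names are hypothetical):

    tb datasource connect my_kafka_connection events_ds \
        --kafka-topic events \
        --kafka-group tb-consumer \
        --kafka-auto-offset-reset earliest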

copy OPTIONS DATASOURCE_NAME

Copy a Data Source from Main.

  • --sql TEXT: Freeform SQL query to select what is copied from Main into the Branch Data Source
  • --sql-from-main: SQL query selecting * from the same Data Source in Main
  • --wait: Wait for the copy job to finish, disabled by default

delete OPTIONS DATASOURCE_NAME

Delete rows from a Data Source.

  • --yes: Do not ask for confirmation
  • --wait: Wait for the delete job to finish, disabled by default
  • --dry-run: Run the command without deleting anything
  • --sql-condition: Delete the rows matching the SQL condition
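
For example, previewing a conditional delete before running it for real (the Data Source name and condition are illustrative):

    tb datasource delete events_ds --sql-condition "date < '2023-01-01'" --dry-run
    tb datasource delete events_ds --sql-condition "date < '2023-01-01'" --wait --yes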

generate OPTIONS FILENAMES

Generate a Data Source file based on a sample CSV file from local disk or a URL.

  • --force: Override existing files

ls OPTIONS

List Data Sources.

  • --match TEXT: Retrieve any resources matching the pattern. For example, --match _test
  • --format [json]: Force a type of the output

replace OPTIONS DATASOURCE_NAME URL

Replace the data in a Data Source from a URL, a local file, or a connector.

  • --sql: The SQL to extract from
  • --connector: Connector name
  • --sql-condition: SQL condition for a partial replace, so only the rows matching the condition are replaced

rm OPTIONS DATASOURCE_NAME

Delete a Data Source.

  • --yes: Do not ask for confirmation

share OPTIONS DATASOURCE_NAME WORKSPACE_NAME_OR_ID

Share a Data Source.

  • --user_token TEXT: When passed, we won't prompt asking for it
  • --yes: Do not ask for confirmation

sync OPTIONS DATASOURCE_NAME

Sync from the connector defined in the .datasource file.

truncate OPTIONS DATASOURCE_NAME

Truncate a Data Source.

  • --yes: Do not ask for confirmation
  • --cascade: Truncate the dependent Data Sources attached in cascade to the given Data Source
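
For example (the Data Source name is hypothetical):

    tb datasource truncate events_ds --cascade --yes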

unshare OPTIONS DATASOURCE_NAME WORKSPACE_NAME_OR_ID

Unshare a Data Source.

  • --user_token TEXT: When passed, we won't prompt asking for it
  • --yes: Do not ask for confirmation

tb dependencies

Print all Data Sources dependencies.

Its options:

  • --no-deps: Print only the Data Sources with no Pipes using them
  • --match TEXT: Retrieve any resource matching the pattern
  • --pipe TEXT: Retrieve any resource used by the Pipe
  • --datasource TEXT: Retrieve the resources that depend on this Data Source
  • --check-for-partial-replace: Retrieve the dependent Data Sources that will have their data replaced if a partial replace is executed in the selected Data Source
  • --recursive: Calculate recursive dependencies
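
For example, to check what a partial replace would touch, or to map test resources (names are hypothetical):

    tb dependencies --datasource events_ds --check-for-partial-replace
    tb dependencies --match _test --recursive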

tb deploy

Deploy to Tinybird, pushing the resources that changed from the previous release using Git.

These are the options available for the deploy command:

  • --dry-run: Run the command without creating resources on the Tinybird account or any side effect.
  • -f, --force: Override Pipes when they already exist.
  • --override-datasource: When pushing a Pipe with a materialized node, override the target Data Source if it already exists.
  • --populate: Populate materialized nodes when pushing them.
  • --subset FLOAT: Populate with a subset percent of the data (limited to a maximum of 2M rows); useful to quickly test a materialized node with some data. The subset must be greater than 0 and lower than 0.1. A subset of 0.1 means 10% of the data in the source Data Source is used to populate the materialized view. Use it together with --populate; it has precedence over --sql-condition.
  • --sql-condition TEXT: Populate with a SQL condition to be applied to the trigger Data Source of the Materialized View. For instance, --sql-condition='date == toYYYYMM(now())' populates with all the rows from the trigger Data Source whose date is in the current month. Use it together with --populate. --sql-condition is not taken into account if the --subset param is present. Including in the sql_condition any column present in the Data Source engine_sorting_key makes the populate job process less data.
  • --unlink-on-populate-error: If the populate job fails, the materialized view is unlinked and new data won't be ingested there. The first time a populate job fails, the materialized view is always unlinked.
  • --wait: To be used along with --populate. Waits for populate jobs to finish, showing a progress bar. Disabled by default.
  • --yes: Do not ask for confirmation.
  • --workspace_map TEXT..., --workspace TEXT...: Add a workspace path to the list of external workspaces, usage: --workspace name path/to/folder.
  • --timeout FLOAT: Timeout for the populate job.
  • --user_token TOKEN: The User Token is required for sharing a Data Source that contains the SHARED_WITH entry.
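
A cautious deployment might dry-run first, then populate only the current month (the condition reuses the example above):

    tb deploy --dry-run
    tb deploy --populate --sql-condition='date == toYYYYMM(now())' --wait --yes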

tb diff

Diffs local datafiles against the corresponding remote files in the Workspace.

It works as a regular diff command, useful to know if the remote resources have been changed. Some caveats:

  • Resources in the Workspace might mismatch due to slightly different SQL syntax, for instance a parenthesis mismatch, INTERVAL expressions, or changes in the schema definitions.
  • If you didn't specify an ENGINE_PARTITION_KEY and ENGINE_SORTING_KEY, resources in the Workspace might have default ones.

The recommendation in these cases is to use tb pull to keep your local files in sync.

Remote files are downloaded and stored locally in a .diff_tmp directory; if you work with Git, you can add it to .gitignore.

The options for this command:

  • --fmt / --no-fmt: Format files before doing the diff, default is True so both files match the format
  • --no-color: Don't colorize diff
  • --no-verbose: List only the resources that changed, not the content of the diff
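
For example:

    tb diff --no-verbose   # list only which resources changed
    tb diff --no-color     # plain output, convenient in CI logs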

tb fmt

Formats a .datasource, .pipe or .incl file.

The implementation is based on the ClickHouse dialect of shandy-sqlfmt, adapted to Tinybird datafiles.

These are the options available for the fmt command:

  • --line-length INTEGER: A number indicating the maximum characters per line in the node SQL; lines are split based on the SQL syntax and the number of characters passed as a parameter
  • --dry-run: Don't ask to override the local file
  • --yes: Do not ask for confirmation to overwrite the local file
  • --diff: Formats local file, prints the diff and exits 1 if different, 0 if equal
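
For example, checking a datafile in CI and then formatting it in place (the path is hypothetical):

    tb fmt pipes/top_products.pipe --diff   # exits 1 if the file isn't formatted
    tb fmt pipes/top_products.pipe --yes    # overwrite without prompting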

This command removes comments starting with # from the file, use DESCRIPTION instead.

You can add tb fmt to your Git pre-commit hook to keep your files properly formatted. If the SQL formatting results are not the ones you expect, you can disable formatting just for the blocks that need it.

tb init

Initializes the folder layout.

It comes with these options:

  • --generate-datasources: Generate datasources based on CSV, NDJSON and Parquet files in this folder
  • --folder DIRECTORY: Folder where datafiles will be placed
  • -f, --force: Overrides existing files
  • -ir, --ignore-remote: Ignores remote files not present in the local data project on tb init --git
  • --git: Init workspace with Git releases. Generates CI/CD files for your Git provider
  • --override-commit TEXT: Use this option to manually override the reference commit of your workspace. This is useful if a commit is not recognized in your Git log, such as after a force push (git push -f).
  • --cicd: Generates only CI/CD files for your Git provider
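
For example, to bootstrap a project from existing local files, or to set up Git releases:

    tb init --generate-datasources
    tb init --git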

tb materialize

Analyzes the node_name SQL query to generate the .datasource and .pipe files needed to push a new materialized view.

This command guides you through generating the Materialized View with the name TARGET_DATASOURCE; the only requirement is having a valid Pipe datafile locally. Use tb pull to download resources from your workspace when needed.

It accepts these options:

  • --push-deps: Push dependencies, disabled by default
  • --workspace TEXT...: Add a workspace path to the list of external workspaces, usage: --workspace name path/to/folder
  • --no-versions: When set, resource dependency versions are not used, it pushes the dependencies as-is
  • --verbose: Print more logs
  • --unlink-on-populate-error: If the populate job fails, the Materialized View is unlinked and new data won't be ingested in the Materialized View. The first time a populate job fails, the Materialized View is always unlinked
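
As a sketch, assuming the command takes the path to the local Pipe datafile (the path is hypothetical):

    tb materialize pipes/sales_by_day.pipe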

tb pipe

Pipes commands.

append OPTIONS PIPE_NAME_OR_UID SQL

Append a node to a Pipe.

copy pause OPTIONS PIPE_NAME_OR_UID

Pause a running Copy Pipe.

copy resume OPTIONS PIPE_NAME_OR_UID

Resume a paused Copy Pipe.

copy run OPTIONS PIPE_NAME_OR_UID

Run an on-demand copy job.

  • --wait: Wait for the copy job to finish
  • --yes: Do not ask for confirmation
  • --param TEXT: Key and value of the params you want the Copy Pipe to be called with. For example: tb pipe copy run <my_copy_pipe> --param foo=bar

data OPTIONS PIPE_NAME_OR_UID

Print the data returned by a Pipe.

  • --query TEXT: Run SQL over the Pipe results
  • --format [json|csv]: Return format (CSV, JSON)

generate OPTIONS NAME QUERY

Generate a Pipe file based on a SQL query. Example: tb pipe generate my_pipe 'select * from existing_datasource'

  • --force: Override existing files

ls OPTIONS

List Pipes.

  • --match TEXT: Retrieve any resource matching the pattern. For example, --match _test
  • --format [json|csv]: Force a type of the output

populate OPTIONS PIPE_NAME

Populate the result of a Materialized Node into the target Materialized View.

  • --node TEXT: Name of the materialized node. Required
  • --sql-condition TEXT: Populate with a SQL condition to be applied to the trigger Data Source of the Materialized View. For instance, --sql-condition='date == toYYYYMM(now())' populates with all the rows from the trigger Data Source whose date is in the current month. Including in the sql_condition any column present in the Data Source engine_sorting_key makes the populate job process less data
  • --truncate: Truncate the materialized Data Source before populating it
  • --unlink-on-populate-error: If the populate job fails, the Materialized View is unlinked and new data won't be ingested in the Materialized View. The first time a populate job fails, the Materialized View is always unlinked
  • --wait: Wait for populate jobs to finish, showing a progress bar. Disabled by default
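
For example (Pipe and node names are hypothetical):

    tb pipe populate sales_mv_pipe --node mv_node --truncate --wait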

publish OPTIONS PIPE_NAME_OR_ID NODE_UID

Change the published node of a Pipe.

regression-test OPTIONS FILENAMES

Run regression tests using the last requests.

  • --debug: Print the internal representation; can be combined with any command to get more information
  • --only-response-times: Check only response times
  • --workspace_map TEXT..., --workspace TEXT...: Add a workspace path to the list of external workspaces, usage: --workspace name path/to/folder
  • --no-versions: When set, resource dependency versions are not used, it pushes the dependencies as-is
  • -l, --limit INTEGER RANGE: Number of requests to validate [0<=x<=100]
  • --sample-by-params INTEGER RANGE: When set, the pipe_stats_rt requests are aggregated by extractURLParameterNames(assumeNotNull(url)) and a sample of N requests is taken for each combination [1<=x<=100]
  • -m, --match TEXT: Filter the checker requests by specific parameter. You can pass multiple parameters: -m foo -m bar
  • -ff, --failfast: When set, the checker exits as soon as one test fails
  • --ignore-order: When set, the checker ignores the order of list properties
  • --validate-processed-bytes: When set, the checker validates that the new version doesn't process more than 25% more data than the current version
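
For example, validating a Pipe against a sample of recent requests (the path is hypothetical):

    tb pipe regression-test pipes/top_products.pipe --limit 50 --failfast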

rm OPTIONS PIPE_NAME_OR_ID

Delete a Pipe. PIPE_NAME_OR_ID can be either a Pipe name or ID in the Workspace, or a local path to a .pipe file.

  • --yes: Do not ask for confirmation

set_endpoint OPTIONS PIPE_NAME_OR_ID NODE_UID

Same as 'publish': change the published node of a Pipe.

sink run OPTIONS PIPE_NAME_OR_UID

Run an on-demand sink job.

  • --wait: Wait for the sink job to finish
  • --yes: Do not ask for confirmation
  • --dry-run: Run the command without executing the sink job
  • --param TEXT: Key and value of the params you want the Sink Pipe to be called with. For example: tb pipe sink run <my_sink_pipe> --param foo=bar

stats OPTIONS PIPES

Print Pipe stats for the last 7 days.

  • --format [json]: Force a type of the output. To parse the output, remember to use the tb --no-version-warning pipe stats option

token_read OPTIONS PIPE_NAME

Retrieve a token to read a Pipe.

unpublish OPTIONS PIPE_NAME NODE_UID

Unpublish the endpoint of a Pipe.

tb pull

Retrieve the latest version of your project files from your workspace.

With these options:

  • --folder DIRECTORY: Folder where files will be placed
  • --auto / --no-auto: Saves datafiles automatically into their default directories (/datasources or /pipes). Default is True
  • --match TEXT: Retrieve any resource matching the pattern. For example, --match _test
  • -f, --force: Override existing files
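
For example:

    tb pull --auto --match _api --force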

tb push

Push files to your workspace.

You can use this command with these options:

  • --dry-run: Run the command without creating resources on the Tinybird account or any side effect
  • --check / --no-check: Enable/Disable output checking, enabled by default
  • --push-deps: Push dependencies, disabled by default
  • --only-changes: Push only the resources that have changed compared to the destination workspace
  • --debug: Prints internal representation, can be combined with any command to get more information
  • -f, --force: Override pipes when they already exist
  • --override-datasource: When pushing a pipe with a Materialized node if the target Data Source exists it will try to override it.
  • --populate: Populate materialized nodes when pushing them
  • --subset FLOAT: Populate with a subset percent of the data (limited to a maximum of 2M rows); useful to quickly test a materialized node with some data. The subset must be greater than 0 and lower than 0.1. A subset of 0.1 means 10 percent of the data in the source Data Source is used to populate the materialized view. Use it together with --populate; it has precedence over --sql-condition
  • --sql-condition TEXT: Populate with a SQL condition to be applied to the trigger Data Source of the Materialized View. For instance, --sql-condition='date == toYYYYMM(now())' populates with all the rows from the trigger Data Source whose date is in the current month. Use it together with --populate. --sql-condition is not taken into account if the --subset param is present. Including in the sql_condition any column present in the Data Source engine_sorting_key makes the populate job process less data
  • --unlink-on-populate-error: If the populate job fails, the Materialized View is unlinked and new data won't be ingested in the Materialized View. The first time a populate job fails, the Materialized View is always unlinked
  • --fixtures: Append fixtures to Data Sources
  • --wait: To be used along with --populate command. Waits for populate jobs to finish, showing a progress bar. Disabled by default
  • --yes: Do not ask for confirmation
  • --only-response-times: Check only response times when force-pushing a Pipe
  • --workspace TEXT..., --workspace_map TEXT...: Add a workspace path to the list of external workspaces, usage: --workspace name path/to/folder
  • --no-versions: When set, resource dependency versions are not used, it pushes the dependencies as-is
  • --timeout FLOAT: Timeout you want to use for the populate job
  • -l, --limit INTEGER RANGE: Number of requests to validate [0<=x<=100]
  • --sample-by-params INTEGER RANGE: When set, we will aggregate the pipe_stats_rt requests by extractURLParameterNames(assumeNotNull(url)) and for each combination we will take a sample of N requests [1<=x<=100]
  • -ff, --failfast: When set, the checker exits as soon as one test fails
  • --ignore-order: When set, the checker will ignore the order of list properties
  • --validate-processed-bytes: When set, the checker validates that the new version doesn't process more than 25% more data than the current version
  • --user_token TEXT: The user token is required for sharing a Data Source that contains the SHARED_WITH entry
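
A common sequence is a dry run of the whole project, then a forced push of one datafile with a populate (the path is hypothetical):

    tb push --dry-run --push-deps
    tb push pipes/sales_mv.pipe --force --populate --wait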

tb release

Manage your Workspace releases.

Release commands

  • create: Create a new Release in deploying status
  • generate: Generate a custom deployment for a Release
  • ls: List the Releases for the current Workspace
  • preview: Update the status of a deploying Release to preview
  • promote: Promote a preview Release to live status
  • rm: Remove a preview or failed Release. This action is irreversible
  • rollback: Roll back to a previous Release

All commands except ls require --semver [VERSION], the semver of the target Release.

Remember that only one preview and one rollback are allowed per major version.
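
For example, a promotion flow for a hypothetical version:

    tb release create --semver 1.1.0
    tb release preview --semver 1.1.0
    tb release promote --semver 1.1.0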

tb sql

Run a SQL query over Data Sources and Pipes.

  • --rows_limit INTEGER: Max number of rows retrieved
  • --pipeline TEXT: The name of the Pipe to run the SQL Query
  • --pipe TEXT: The path to the .pipe file to run the SQL Query of a specific NODE
  • --node TEXT: The NODE name
  • --format [json|csv|human]: Output format
  • --stats / --no-stats: Show query stats
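
For example (the Data Source name is hypothetical):

    tb sql "SELECT count() FROM events_ds" --stats
    tb sql "SELECT * FROM events_ds LIMIT 10" --format json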

tb token

Manage your workspace tokens.

  • copy OPTIONS TOKEN_ID: Copy a token
  • ls OPTIONS: List tokens. Accepts --match TEXT to retrieve any token matching the pattern, for example --match _test
  • refresh OPTIONS TOKEN_ID: Refresh a token. Accepts --yes to skip the confirmation prompt
  • rm OPTIONS TOKEN_ID: Remove a token. Accepts --yes to skip the confirmation prompt
  • scopes OPTIONS TOKEN_ID: List token scopes
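
For example (the token ID is a placeholder):

    tb token ls --match _api
    tb token refresh <token_id> --yes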

tb workspace

Manage your workspaces.


clear OPTIONS

Drop all the resources inside a project. This command is dangerous because it removes everything; use it with care.

  • --yes: Do not ask for confirmation
  • --dry-run: Run the command without removing anything

create OPTIONS WORKSPACE_NAME

Create a new Workspace for your Tinybird user.

  • --starter_kit TEXT: Use a Tinybird starter kit as a template
  • --user_token TEXT: When passed, we won't prompt asking for it
  • --fork: When enabled, all the Data Sources in the current Workspace are shared with the newly created one

current OPTIONS

Show the Workspace you're currently authenticated to.

delete OPTIONS WORKSPACE_NAME_OR_ID

Delete a Workspace where you are an admin.

  • --user_token TEXT: When passed, we won't prompt asking for it
  • --yes: Do not ask for confirmation

ls OPTIONS

List all the Workspaces you have access to in the account you're currently authenticated to.

members add OPTIONS MEMBERS_EMAILS

Add members to the current Workspace.

  • --user_token TEXT: When passed, we won't prompt asking for it

members ls OPTIONS

List the members in the current Workspace.

members rm OPTIONS

Remove members from the current Workspace.

  • --user_token TEXT: When passed, we won't prompt asking for it

members set-role OPTIONS [guest|viewer|admin] MEMBERS_EMAILS

Set the role for existing Workspace members.

  • --user_token TEXT: When passed, we won't prompt asking for it
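
For example, creating a Workspace and adding a member (the name, email, and token are placeholders):

    tb workspace create analytics_pro --user_token <user_token>
    tb workspace members add jane@example.com --user_token <user_token>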