CI test scripts

Add a test: annotation at the top of any script to turn it into a CI test. When the tested script or flow is deployed, the test runs automatically.

Writing a test script

Single target

// test: script/u/admin/my_script
import * as wmill from "windmill-client";

export async function main() {
  const result = await wmill.runScript("u/admin/my_script", {});

  if (result !== 42) {
    throw new Error(`Expected 42, got ${JSON.stringify(result)}`);
  }

  return result;
}

Multiple targets

// test:
// script/u/admin/script_a
// script/u/admin/script_b
// flow/u/admin/my_flow
import * as wmill from "windmill-client";

export async function main() {
  // test all items
}

Wildcards

Annotations support glob patterns so a single test script can cover a whole folder tree:

  • * matches one path segment.
  • ** matches any depth.
// test: script/u/admin/*
// test: flow/u/team/**

The first pattern matches every script directly under u/admin/. The second matches every flow at any depth under u/team/. Wildcards can be combined with multi-target annotations on separate lines.
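For instance, a single test script can mix both patterns in one multi-target annotation (the paths here are illustrative):

```
// test:
// script/u/admin/*
// flow/u/team/**
```

Deploying any script directly under u/admin/, or any flow at any depth under u/team/, then triggers this test.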

Branching on the triggering runnable

CI test jobs receive a WM_TESTED_RUNNABLE environment variable containing {kind}/{path} of the runnable that triggered the test (for example, script/u/admin/my_script). This lets a single test script cover multiple targets and branch on which one fired it.

// test: script/u/admin/*
import * as wmill from "windmill-client";

export async function main() {
  const tested = process.env.WM_TESTED_RUNNABLE!; // e.g. "script/u/admin/my_script"
  const path = tested.replace(/^script\//, "");
  return await wmill.runScript(path, {});
}

Python

# test: script/u/admin/my_script
import wmill

def main():
    result = wmill.run_script("u/admin/my_script", {})
    assert result == 42
    return result

In Python, the triggering runnable is exposed as os.environ["WM_TESTED_RUNNABLE"].

The annotation uses the comment syntax of each language (// for TypeScript, # for Python, etc.).

The script creation page also provides ready-made CI test templates for TypeScript and Python.

Where results appear

Test scripts show a yellow CI test badge in the script list and detail page.

On the detail page of the tested script or flow, each test is listed with its status (pass, fail, or running) and a link to the job run. Results auto-refresh while tests are running.

In workspace forks, CI results are also visible:

  • The fork banner shows a summary of passing, failing, and running tests across all changed items.
  • On the comparison page, each changed item displays a CI badge, and a test summary lists all test scripts with their latest results.