This is a unittest framework for Sublime Text. It runs unittest test cases on local machines and via GitHub Actions. It also supports testing syntax_test files for the new sublime-syntax format and sublime-color-scheme files.
By default, UnitTesting runs the tests in `test*.py` files under the directory `tests`. Here are some small examples.

Open the Command Palette using ctrl+shift+p (or the menu item Tools → Command Palette...), type one of the `UnitTesting: ...` commands and hit Enter.

To test any package, run the command UnitTesting: Test Package. An output panel pops up, displaying the progress and results of the test run.
To run only the tests in particular files, enter `<Package name>:<filename>`. `<filename>` should be a Unix shell wildcard matching the file names; `<Package name>:test*.py` is used by default. For example, `MyPackage:test_utils*.py` (with a hypothetical package name) runs only the tests in files matching that pattern.
The command UnitTesting: Test Current Package runs all tests of the package the active view's file is part of. The package is reloaded to pick up any code changes, and then the tests are executed.
The command UnitTesting: Test Current Package with Coverage runs the tests of the current package and generates a coverage report via coverage. The `.coveragerc` file is used to control the coverage configuration. If it is missing, UnitTesting ignores the tests directory.
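As a minimal sketch, a `.coveragerc` reproducing that default behavior could omit the tests directory from measurement (see the coverage.py documentation for the full option set):

```ini
[run]
# exclude the tests themselves from coverage measurement
omit =
    tests/*

[report]
# list line numbers of statements that were not executed
show_missing = true
```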
[!NOTE]
As of UnitTesting 1.8.0 the following commands have been replaced to enable more flexible usage and integration in build systems:

- `unit_testing_current_package` → `{ "command": "unit_testing", "package": "$package_name" }`
- `unit_testing_current_file` → `{ "command": "unit_testing", "package": "$package_name", "pattern": "$file_name" }`
To run tests via a build system, specify the `unit_testing` build system "target":

```json
{
    "target": "unit_testing"
}
```
**Test Current Package build command**

It is recommended to add the following to the .sublime-project file, so that ctrl+b invokes the testing action:

```json
"build_systems":
[
    {
        "name": "Test Current Package",
        "target": "unit_testing",
        "package": "$package_name",
        "failfast": true
    }
]
```
**Test Current File build command**

It is recommended to add the following to the .sublime-project file, so that ctrl+b invokes the testing action:

```json
"build_systems":
[
    {
        "name": "Test Current File",
        "target": "unit_testing",
        "package": "$package_name",
        "pattern": "$file_name",
        "failfast": true
    }
]
```
UnitTesting provides the following GitHub Actions, which can be combined in a workflow to design package tests.
- `SublimeText/UnitTesting/actions/setup`
  Sets up Sublime Text to run tests within. This must always be the first step after checking out the package to test.
- `SublimeText/UnitTesting/actions/run-color-scheme-tests`
  Tests color schemes using ColorSchemeUnit.
- `SublimeText/UnitTesting/actions/run-syntax-tests`
  Tests sublime-syntax definitions using the built-in syntax test functionality of the already running Sublime Text environment. It is an alternative to SublimeText/syntax-test-action or sublimehq's online syntax_test_runner.
- `SublimeText/UnitTesting/actions/run-tests`
  Runs the `unit_testing` command to perform Python unit tests.
[!NOTE]
Actions are released in the v1 branch. Minor changes will be pushed to the same branch unless there are breaking changes.
To integrate color scheme tests via ColorSchemeUnit, add the following snippet to a workflow file (e.g. .github/workflows/color-scheme-tests.yml):

```yaml
name: ci-color-scheme-tests
on: [push, pull_request]

jobs:
  run-color-scheme-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: SublimeText/UnitTesting/actions/setup@v1
      - uses: SublimeText/UnitTesting/actions/run-color-scheme-tests@v1
```
To run only syntax tests, add the following snippet to a workflow file (e.g. .github/workflows/syntax-tests.yml):

```yaml
name: ci-syntax-tests
on: [push, pull_request]

jobs:
  run-syntax-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: SublimeText/UnitTesting/actions/setup@v1
      - uses: SublimeText/UnitTesting/actions/run-syntax-tests@v1
```
[!NOTE]
If you are looking for syntax tests only, you may also want to check out SublimeText/syntax-test-action. Using UnitTesting's action makes most sense when an already set-up Sublime Text test environment is to be re-used.
To run only Python unit tests on all platforms and versions of Sublime Text, add the following snippet to a workflow file (e.g. .github/workflows/unit-tests.yml):

```yaml
name: ci-unit-tests
on: [push, pull_request]

jobs:
  run-tests:
    strategy:
      fail-fast: false
      matrix:
        st-version: [3, 4]
        os: ["ubuntu-latest", "macOS-latest", "windows-latest"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: SublimeText/UnitTesting/actions/setup@v1
        with:
          package-name: Package Name # if differs from repo name
          sublime-text-version: ${{ matrix.st-version }}
      - uses: SublimeText/UnitTesting/actions/run-tests@v1
        with:
          coverage: true
          package-name: Package Name # if differs from repo name
      - uses: codecov/codecov-action@v4
```
To combine all kinds of tests in a single workflow, add the following snippet to a workflow file (e.g. .github/workflows/tests.yml):

```yaml
name: ci-tests
on: [push, pull_request]

jobs:
  run-tests:
    strategy:
      fail-fast: false
      matrix:
        st-version: [3, 4]
        os: ["ubuntu-latest", "macOS-latest", "windows-latest"]
    runs-on: ${{ matrix.os }}
    steps:
      # checkout package to test
      - uses: actions/checkout@v4
      # setup test environment
      - uses: SublimeText/UnitTesting/actions/setup@v1
        with:
          sublime-text-version: ${{ matrix.st-version }}
      # run color scheme tests (only on Linux)
      - if: ${{ matrix.os == 'ubuntu-latest' }}
        uses: SublimeText/UnitTesting/actions/run-color-scheme-tests@v1
      # run syntax tests and check compatibility with new syntax engine (only on Linux)
      - if: ${{ matrix.os == 'ubuntu-latest' }}
        uses: SublimeText/UnitTesting/actions/run-syntax-tests@v1
        with:
          compatibility: true
      # run unit tests with coverage upload
      - uses: SublimeText/UnitTesting/actions/run-tests@v1
        with:
          coverage: true
          extra-packages: |
            A File Icon:SublimeText/AFileIcon
      - uses: codecov/codecov-action@v4
```
Check this for further examples.
UnitTesting is primarily configured by a unittesting.json file in the package's root directory:

```json
{
    "verbosity": 1,
    "coverage": true
}
```
Options provided via the build system configuration override unittesting.json:

```json
{
    "target": "unit_testing",
    "package": "$package_name",
    "verbosity": 2,
    "coverage": true
}
```
Options passed as arguments to the `unit_testing` command override unittesting.json:

```python
window.run_command("unit_testing", {"package": "$package_name", "coverage": False})
```
| name | description | default value |
|---|---|---|
| tests_dir | the name of the directory containing the tests | "tests" |
| pattern | the pattern to discover tests | "test*.py" |
| deferred | whether to use the deferred test runner | true |
| condition_timeout | default timeout in ms for callables invoked via yield | 4000 |
| failfast | stop early if a test fails | false |
| output | name of the test output, instead of showing results in the panel | null |
| verbosity | verbosity level | 2 |
| warnings | the warnings filter controlling treatment of Python warnings | "default" |
| capture_console | capture stdout and stderr in the test output | false |
| reload_package_on_testing | reload the package before testing; reloading increases the coverage rate | true |
| coverage | track test case coverage | false |
| coverage_on_worker_thread | (experimental) | false |
| generate_html_report | generate an HTML report for coverage | false |
| generate_xml_report | generate an XML report for coverage | false |
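For instance, a unittesting.json combining several of these options might look like the following. This is an illustrative sketch; the chosen values are arbitrary, but every key comes from the table above:

```json
{
    "deferred": true,
    "failfast": true,
    "verbosity": 2,
    "warnings": "error",
    "capture_console": true,
    "coverage": true,
    "generate_html_report": false
}
```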
Valid warnings values are:

| Value | Disposition |
|---|---|
| "default" | print the first occurrence of matching warnings for each location (module + line number) where the warning is issued |
| "error" | turn matching warnings into exceptions |
| "ignore" | never print matching warnings |
| "always" | always print matching warnings |
| "module" | print the first occurrence of matching warnings for each module where the warning is issued (regardless of line number) |
| "once" | print only the first occurrence of matching warnings, regardless of location |

See also: https://docs.python.org/3/library/warnings.html#warning-filter
UnitTesting is based on Python's unittest library. Any valid unittest test case is allowed. Module lookup and imports respect Sublime Text's package ecosystem. Global imports look up modules in:

- `${data}/sublime-text/Lib/python38/`
- `${data}/sublime-text/Packages/`
- `${install}/Packages/`

A common ST package's folder structure may look like this:
```
Packages
├── ...
└── Package To Test
    ├── sub_package
    |   ├── __init__.py
    |   └── module.py
    ├── tests
    |   ├── __init__.py
    |   └── test_module.py
    └── plugin.py
```
Package To Test is the root package being tested. It contains tests/ as a sub-package, next to the primary business logic in sub_package. This differs from how normal Python package repositories organize tests, next to the package's source folder. It requires the package name (Package To Test) to be included in all absolute module names.
[!NOTE]
ST packages don't need, and should not contain, a top-level `__init__.py` module to be treated like a normal Python package.
Test cases can use relative imports to access all modules. Example:

Package To Test/tests/test_module.py

```python
from unittest.mock import patch

from unittesting import TestCase

from ..sub_package.module import ClassToTest


class MyTestCase(TestCase):
    def test_something(self):
        obj = ClassToTest()
        self.assertTrue(obj.some_method())

    def test_with_mock(self):
        # need absolute module name including ST package here
        with patch("Package To Test.sub_package.module") as mock:
            ...
```
Tests can be written using deferrable test cases to test the results of asynchronous or long-running Sublime Text commands, which requires yielding control to the Sublime Text runtime and resuming test execution at a later point. It is a kind of cooperative multithreading, such as provided by asyncio, but with the home-grown DeferringTextTestRunner acting as the event loop. The idea was inspired by Plugin UnitTest Harness.

DeferrableTestCase is used to write the test cases. They are executed by the DeferringTextTestRunner, which accepts not only regular test functions but also generators. If the test function is a generator, the runner handles yielded objects as follows:
- If the yielded object is a callable, the runner evaluates it and checks the returned value. If the result is not None, the runner continues the generator; otherwise, it waits until the condition is met, with a default timeout of 4s. The result of the callable can also be retrieved from the yield statement. The yielded object can also be a dictionary of the form

  ```python
  {
      # required condition callable
      "condition": callable,
      # system timestamp when to start condition checks (default: `time.time()`)
      "start_time": timestamp,
      # optional: the interval in ms at which to invoke `condition()` (default: 17)
      "period": milliseconds,
      # optional: timeout in ms to wait for the condition to be met (default: value from unittesting.json or 4000)
      "timeout": milliseconds,
      # optional: message to print if the condition is not met within the timeout
      "timeout_message": "Condition not fulfilled"
  }
  ```

  to specify various overrides such as the poll interval or timeout in ms.

- If the yielded object is an integer, say x, the runner continues the generator after x ms.
- `yield AWAIT_WORKER` yields to a task in the worker thread.
- Otherwise, a bare `yield` yields to a task in the main thread.
Example:

```python
import sublime

from unittesting import DeferrableTestCase


class TestCondition(DeferrableTestCase):
    def test_condition1(self):
        x = []

        def append():
            x.append(1)

        def condition():
            return len(x) == 1

        sublime.set_timeout(append, 100)

        # wait until `condition()` is true
        yield condition

        self.assertEqual(x[0], 1)

    def test_condition2(self):
        x = []

        def append():
            x.append(1)

        def condition():
            return len(x) == 1

        sublime.set_timeout(append, 100)

        # wait until `condition()` is true
        yield {
            "condition": condition,
            "period": 200,
            "timeout": 5000,
            "timeout_message": "Not enough items added to x"
        }

        self.assertEqual(x[0], 1)
```
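The time-based and thread-based yield forms can be sketched similarly. This assumes `AWAIT_WORKER` is importable from unittesting, as the list above suggests, and relies on Sublime Text's task queues running tasks in FIFO order:

```python
import sublime

from unittesting import AWAIT_WORKER, DeferrableTestCase


class TestYieldForms(DeferrableTestCase):
    def test_sleep_and_worker(self):
        x = []
        sublime.set_timeout(lambda: x.append(1), 100)

        # continue the generator after 200 ms; the timeout above fired by then
        yield 200
        self.assertEqual(x, [1])

        # queue a task on the worker thread, then yield to the worker;
        # the queued task runs before the test resumes
        sublime.set_timeout_async(lambda: x.append(2), 0)
        yield AWAIT_WORKER
        self.assertEqual(x, [1, 2])
```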
See also tests/test_defer.py.
Tests for asyncio code are written using the IsolatedAsyncioTestCase class:

```python
import asyncio

from unittesting import IsolatedAsyncioTestCase


async def a_coro():
    return 1 + 1


class MyAsyncTestCase(IsolatedAsyncioTestCase):
    async def test_something(self):
        result = await a_coro()
        await asyncio.sleep(1)
        self.assertEqual(result, 2)
```
UnitTesting provides some helper test case classes, which perform common tasks such as overriding preferences, setting up views, etc. Usage notes and examples are available via docstrings, which language servers such as LSP-pyright display as hover popups.
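For illustration, the following sketch shows the kind of view setup and teardown such helper classes encapsulate, written by hand with plain Sublime Text API calls. The class name is hypothetical and not part of UnitTesting's helpers:

```python
import sublime

from unittesting import DeferrableTestCase


class ViewBackedTestCase(DeferrableTestCase):
    """Hypothetical example: manage a scratch view around each test."""

    def setUp(self):
        self.view = sublime.active_window().new_file()
        self.view.set_scratch(True)  # avoid save prompts on close

    def tearDown(self):
        if self.view.is_valid():
            self.view.close()

    def test_insert(self):
        self.view.run_command("insert", {"characters": "hello"})
        self.assertEqual(self.view.substr(sublime.Region(0, 5)), "hello")
```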
Thanks to guillermooo and philippotto for their early efforts on AppVeyor and Travis CI macOS support (though these services are no longer supported).