ur test run - run one or more test scripts
  # run everything in a given namespace
  cd my_sandbox/TheNamespace
  ur test run --recurse

  # run only selected tests
  cd my_sandbox/TheNamespace
  ur test run My/Module.t Another/Module.t t/foo.t t/bar.t

  # run only tests which load the TheNamespace::DNA module
  cd my_sandbox/TheNamespace
  ur test run --cover TheNamespace/DNA.pm

  # run only tests which cover the changes you have in Subversion
  cd my_sandbox/TheNamespace
  ur test run --cover-svn-changes

  # run 5 tests in parallel as jobs scheduled via LSF
  cd my_sandbox/TheNamespace
  ur test run --lsf --jobs 5
Runs a test harness around automated test cases, like "make test" in a make-oriented software distribution, and similar to running "prove" in bulk.
When run without parameters, it looks for a "t" directory under the current working directory and runs ALL tests found there.
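As a minimal sketch of that default behavior (using the sandbox layout from the synopsis above):

  # with no arguments, every *.t file under ./t is discovered and run
  cd my_sandbox/TheNamespace
  ur test run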
Run all tests in the current directory, and in sub-directories.
Include "long" tests, which are otherwise skipped in test harness execution
Be verbose, meaning that individual test cases will be reported instead of just a per-script summary.
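These flags can be combined. A sketch, using the --recurse and --long spellings that appear elsewhere in this document:

  # run every test under the current directory tree, including "long" tests
  ur test run --recurse --long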
- --cover My/Module.pm
Looks in a special SQLite database, which is updated by the cron job that runs tests, to find all tests that load My/Module.pm at some point before they exit. Only these tests will be run (see the example after the notes below).
* you will still need the --long flag to run long tests.
* if you specify tests on the command-line, only tests in both lists will run
* this can be specified multiple times
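A sketch combining the notes above; the module paths here are placeholders, and the repeated --cover reflects the fact that the option may be given multiple times:

  # run only tests that load either module, including long tests
  ur test run --cover My/Module.pm --cover My/OtherModule.pm --long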
TOOL can be svn, svk, or cvs. The script will run "svn status", "svk status", or "cvs -q up" on a parent directory with "GSC" in its name, gather all of the changes in your perl_modules trunk, and behave as though those changed modules were listed as individual --cover options.
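For instance, the Subversion form shown in the synopsis:

  # run only tests that cover modules you have modified in your svn checkout
  cd my_sandbox/TheNamespace
  ur test run --cover-svn-changes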
Tests are not run locally; instead they are submitted as jobs to the LSF cluster with bsub.
Parameters given to bsub when scheduling jobs. The default is "-q short -R select[type==LINUX64]".
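A minimal sketch of submitting tests to LSF with the default bsub parameters:

  # each test script becomes its own bsub job on the short queue
  ur test run --recurse --lsf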
- --jobs <number>
Run this many tests in parallel. If --lsf is also specified, these parallel tests will be submitted as LSF jobs.
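For example, running tests locally in parallel (the job count here is arbitrary):

  # run up to 4 test scripts at a time on the local machine
  ur test run --recurse --jobs 4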
- automatic remote execution for tests requiring a distinct hardware platform
- logging of profiling and coverage metrics with each test