From the Publisher
About the Author
Ian Langworth (http://langworth.com/) has been writing Perl for years and has been actively involved in the community since 2003. He has contributed a handful of modules to the CPAN, most of which are Kwiki-related. He has spoken at Perl-related conferences such as LISA and YAPC. Ian is also the author of the surprisingly widespread utility Cadubi, which is packaged for many free operating systems.
Ian is currently studying Computer Science and Cognitive Psychology at Northeastern University. Whilst pursuing a degree, he's participating in a volunteer systems administration group and working toward making higher code quality and robustness an easier goal to achieve.
He currently resides in Boston, Massachusetts, where he participates in the local Boston Perl Mongers group and lives precariously close to Fenway Park.
chromatic is the technical editor of the O'Reilly Network, covering open source, Linux, development, and dynamic languages. He is also the author of the Extreme Programming Pocket Guide and Running Weblogs with Slash, as well as the editor of BSD Hacks and Gaming Hacks. He is the original author of Test::Builder, the foundation for most modern testing modules in Perl 5, and has contributed many of the tests for core Perl. He has given tutorials and presentations at several Perl conferences, including OSCON, and often writes for Perl.com, which he also edits. He lives just west of Portland, Oregon, with two cats, a creek in his backyard, and, as you may have guessed, several unfinished projects.
Excerpt. © Reprinted by permission. All rights reserved.
The goal of all testing is to improve the quality of code. Quality isn't just the absence of bugs and features behaving as intended. High-quality code and projects install well, behave well, have good and useful documentation, and demonstrate reliability and care outside of the code itself. If your users can run the tests too, that's a good sign.
It's not always easy to build quality into a system, but if you can test your project, you can improve its quality. Perl has several tools and techniques to distribute tests and test the non-code portions of your projects. The labs in this chapter demonstrate how to use them and what they can do for you.
Testing POD Files
The Plain Old Documentation format, or POD, is the standard for Perl documentation. Every Perl module distribution should contain some form of POD, whether in standalone .pod files or embedded in the modules and programs themselves.
As you edit documentation in a project, you run the risk of making errors. While typos and omissions can be annoying and distracting, formatting errors can render your documentation incorrectly or even make it unusable. Missing an =cut on inline POD may cause bizarre failures by turning working code into documentation. Fortunately, a test suite can check the syntax of all of the POD in your distribution.
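A minimal sketch of the hazard (the package and subroutine names here are hypothetical, invented for illustration):

```perl
package My::Module;    # hypothetical module for illustration

=head2 greet

Returns a greeting for the given name.

=cut

# Without the =cut directive above, Perl would treat everything from
# =head2 onward -- including this subroutine -- as documentation, and
# greet() would silently never be compiled.
sub greet {
    my ($name) = @_;
    return "Hello, $name!";
}

1;
```

A POD-checking test catches malformed directives like a stray `=head2` with no terminating `=cut`, turning this class of silent failure into a visible test failure.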
How do I do that?
Consider a module distribution for a popular racing sport. The directory structure contains a t/ directory for the tests and a lib/ directory for the modules and POD documents. To test all of the POD in a distribution, create an extra test file, t/pod.t, as follows:
use Test::More;

eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;

all_pod_files_ok();
Run the test file with prove:
$ prove -v t/pod.t
ok 1 - lib/Sports/NASCAR/Car.pm
ok 2 - lib/Sports/NASCAR/Driver.pm
ok 3 - lib/Sports/NASCAR/Team.pm
All tests successful.
Files=1, Tests=3, 0 wallclock secs ( 0.45 cusr + 0.03 csys = 0.48 CPU)
What just happened?
Because Test::Pod is a prerequisite only for testing, it's an optional prerequisite for the distribution. The second and third lines of t/pod.t check to see whether the user has Test::Pod installed. If not, the test file skips the POD-checking tests.
One of the test functions exported by Test::Pod is all_pod_files_ok( ). If given no arguments, it finds all Perl-related files in a blib/ or lib/ directory within the current directory. It declares a plan, planning one test per file found. The previous example finds three files, all of which have valid POD.
If Test::Pod finds a file that doesn't contain any POD at all, the test for that file will be a success.
Q: How can I test a specific list of files?
A: Pass all_pod_files_ok( ) an array of filenames of all the files to check. For example, to test the three files that Test::Pod found previously, change t/pod.t to:
use Test::More;

eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;

all_pod_files_ok(
    'lib/Sports/NASCAR/Car.pm',
    'lib/Sports/NASCAR/Driver.pm',
    'lib/Sports/NASCAR/Team.pm',
);
Q: Should I ship POD-checking tests with my distribution?
A: There's no strong consensus in the Perl QA community one way or the other, except that it's valuable for developers to run these tests before releasing a new version of the project. If the POD won't change as part of the build process, asking users to run the tests may have little practical value besides demonstrating that you consider the validity of your documentation to be important. Not everyone agrees with this metric.
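One common compromise, sketched below, is to gate such checks behind an author-only flag so that end users never run them. The `TEST_AUTHOR` environment variable is a widely used but informal convention, not part of this lab:

```perl
use Test::More;

# Skip entirely unless the developer has opted in; end users
# installing the distribution will see one clean skip.
plan( skip_all => 'Author test; set TEST_AUTHOR to enable' )
    unless $ENV{TEST_AUTHOR};

eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;

all_pod_files_ok();
```

With this in place, `TEST_AUTHOR=1 prove t/pod.t` runs the checks before a release, while a plain `prove t/pod.t` skips them.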
Testing Documentation Coverage
When defining an API, every function or method should have some documentation explaining its purpose. That's a good goal, and one worth capturing in tests. Without requiring you to hardcode the name of every documented function, Test::Pod::Coverage can help you to ensure that all the subroutines you expect other people to use have proper POD documentation.
How do I do that?
Assume that you have a module distribution for a popular auto-racing sport. The distributions base directory contains a t/ directory with tests and a lib/ directory with modules. Create a test file, t/pod-coverage.t, that contains the following:
use Test::More;

eval 'use Test::Pod::Coverage 1.04';
plan(
    skip_all => 'Test::Pod::Coverage 1.04 required for testing POD coverage'
) if $@;

all_pod_coverage_ok();
Run the test file with prove to see output similar to:
$ prove -v t/pod-coverage.t
not ok 1 - Pod coverage on Sports::NASCAR::Car
# Failed test (/usr/local/share/perl/5.8.4/Test/Pod/Coverage.pm at line 112)
# Coverage for Sports::NASCAR::Car is 75.0%, with 1 naked subroutine:
ok 2 - Pod coverage on Sports::NASCAR::Driver
ok 3 - Pod coverage on Sports::NASCAR::Team
# Looks like you failed 1 tests of 3.
Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 1
Failed 1/3 tests, 66.67% okay
Failed Test Stat Wstat Total Fail Failed List of Failed
t/pod-coverage.t 1 256 3 1 33.33% 1
Failed 1/1 test scripts, 0.00% okay. 1/3 subtests failed, 66.67% okay.
What just happened?
The test file starts as normal, setting up paths to load the modules to test. The second and third lines of t/pod-coverage.t check to see whether the Test::Pod::Coverage module is available. If it isn't, the tests cannot continue and the test file exits.
Test::Pod::Coverage exports the all_pod_coverage_ok( ) function, which finds all available modules and tests their POD coverage. It looks for a lib/ or blib/ directory in the current directory and plans one test for each module that it finds.
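Fixing a "naked subroutine" failure means documenting the offending method. Here is a sketch of what covered code looks like; the `speed` method is hypothetical and not taken from the lab's output:

```perl
package Sports::NASCAR::Car;
use strict;
use warnings;

# A =head2 section immediately before (or near) the subroutine is the
# usual way to satisfy Test::Pod::Coverage, which pairs public
# subroutine names with POD entries.

=head2 speed

Returns the car's current speed in miles per hour.

=cut

sub speed {
    my ($self) = @_;
    return $self->{speed};
}

1;
```

Once every public subroutine in the module has a matching POD entry, the coverage for that module reaches 100% and the test passes.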