Perl Testing: A Developer's Notebook [Kindle Edition]

Ian Langworth, chromatic


Product Description

From the Publisher

Good software testing can improve your designs, raise your quality, and make you more productive overall. With this series of hands-on labs, you'll learn how Perl's test tools work, how to use them to create basic and complex tests, and how to interpret your results. Perl Testing: A Developer's Notebook is ideal if you want to reduce your software development cycle times.

About the Author

Ian Langworth has been writing Perl for years and has been actively involved in the community since 2003. He has contributed a handful of modules to the CPAN, most of which are Kwiki-related. He has spoken at Perl-related conferences such as LISA and YAPC. Ian is also the author of the surprisingly widespread utility Cadubi, which is packaged for many free operating systems.

Ian is currently studying Computer Science and Cognitive Psychology at Northeastern University. While pursuing his degree, he participates in a volunteer systems administration group and works toward making higher code quality and robustness an easier goal to achieve.

He currently resides in Boston, Massachusetts, where he participates in the local Boston Perl Mongers group and lives precariously close to Fenway Park.

chromatic is the technical editor of the O'Reilly Network, covering open source, Linux, development, and dynamic languages. He is also the author of the Extreme Programming Pocket Guide and Running Weblogs with Slash, as well as the editor of BSD Hacks and Gaming Hacks. He is the original author of Test::Builder, the foundation for most modern testing modules in Perl 5, and has contributed many of the tests for core Perl. He has given tutorials and presentations at several Perl conferences, including OSCON, and often writes for, which he also edits. He lives just west of Portland, Oregon, with two cats, a creek in his backyard, and, as you may have guessed, several unfinished projects.

Excerpt. © Reprinted by permission. All rights reserved.

CHAPTER 4 Distributing Your Tests (and Code)

The goal of all testing is to improve the quality of code. Quality isn’t just the absence of bugs and features behaving as intended. High-quality code and projects install well, behave well, have good and useful documentation, and demonstrate reliability and care outside of the code itself. If your users can run the tests too, that’s a good sign.

It’s not always easy to build quality into a system, but if you can test your project, you can improve its quality. Perl has several tools and techniques to distribute tests and test the non-code portions of your projects. The labs in this chapter demonstrate how to use them and what they can do for you.

Testing POD Files

The Plain Old Documentation format, or POD, is the standard for Perl documentation. Every Perl module distribution should contain some form of POD, whether in standalone .pod files or embedded in the modules and programs themselves.

As you edit documentation in a project, you run the risk of making errors. While typos and omissions can be annoying and distracting, formatting errors can render your documentation incorrectly or even make it unusable. Missing an =cut on inline POD may cause bizarre failures by turning working code into documentation. Fortunately, a test suite can check the syntax of all of the POD in your distribution.
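To see why a missing =cut matters, here is a contrived sketch (the frobnicate() subroutine and both print statements are invented for illustration): once a POD directive opens and no =cut closes it, perl treats everything that follows as documentation.

```perl
#!/usr/bin/perl
# A contrived sketch (not from the book) of what a missing =cut can do.
use strict;
use warnings;

print "before the POD\n";    # this statement runs normally

=head2 frobnicate

Documents a hypothetical frobnicate() subroutine -- but the closing
=cut is missing, so perl treats the rest of the file as documentation.

sub frobnicate { return 42 }    # silently skipped, never compiled
print "after the POD\n";        # never executed
```

The script compiles and exits cleanly, which is what makes this failure mode so bizarre: the code after the POD block simply vanishes without any warning.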

How do I do that?

Consider a module distribution for a popular racing sport. The directory structure contains a t/ directory for the tests and a lib/ directory for the modules and POD documents. To test all of the POD in a distribution, create an extra test file, t/pod.t, as follows:

use Test::More;
eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;
all_pod_files_ok( );

Run the test file with prove:

$ prove -v t/pod.t
ok 1 - lib/Sports/NASCAR/
ok 2 - lib/Sports/NASCAR/
ok 3 - lib/Sports/NASCAR/
All tests successful.
Files=1, Tests=3, 0 wallclock secs ( 0.45 cusr + 0.03 csys = 0.48 CPU)

What just happened?

Because Test::Pod is a prerequisite only for testing, it’s an optional prerequisite for the distribution. The second and third lines of t/pod.t check to see whether the user has Test::Pod installed. If not, the test file skips the POD-checking tests.

One of the test functions exported by Test::Pod is all_pod_files_ok( ). If given no arguments, it finds all Perl-related files in a blib/ or lib/ directory within the current directory. It declares a plan, planning one test per file found. The previous example finds three files, all of which have valid POD.

If Test::Pod finds a file that doesn’t contain any POD at all, the test for that file will be a success.
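Test::Pod also exports pod_file_ok( ), which checks a single file rather than scanning a whole directory. Here is a self-contained sketch (the Example.pm module and its temporary location are invented for illustration) that writes a tiny module and checks only its POD:

```perl
use strict;
use warnings;
use Test::More;
use File::Temp qw(tempdir);

eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;
plan( tests => 1 );

# Write a minimal module with valid POD into a temporary directory.
my $dir = tempdir( CLEANUP => 1 );
open my $fh, '>', "$dir/Example.pm" or die "open: $!";
print $fh <<'END_MODULE';
package Example;

=head1 NAME

Example - a minimal module with valid POD

=cut

1;
END_MODULE
close $fh;

# Check just this one file instead of everything under lib/ or blib/.
pod_file_ok( "$dir/Example.pm", 'POD in Example.pm is valid' );
```

This form is handy when a distribution keeps POD in unusual locations that the directory scan would miss.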

What about...

Q: How can I test a specific list of files?
A: Pass all_pod_files_ok( ) an array of filenames of all the files to check. For example, to test the three files that Test::Pod found previously, change t/pod.t to:

use Test::More;

eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;

all_pod_files_ok(
    'lib/Sports/NASCAR/Car.pm',
    'lib/Sports/NASCAR/Driver.pm',
    'lib/Sports/NASCAR/Team.pm',
);

Q: Should I ship POD-checking tests with my distribution?

A: There’s no strong consensus in the Perl QA community one way or the other, except that it’s valuable for developers to run these tests before releasing a new version of the project. If the POD won’t change as part of the build process, asking users to run the tests may have little practical value besides demonstrating that you consider the validity of your documentation to be important. Not everyone agrees with this metric.
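One widely used compromise from the broader Perl QA community (a convention, not something this book prescribes) is to guard such tests behind an environment variable, so they run for the author but skip silently for users:

```perl
use strict;
use warnings;
use Test::More;

# Run the POD checks only when the author asks for them.
plan( skip_all => 'Author test; set RELEASE_TESTING to run' )
    unless $ENV{RELEASE_TESTING};

eval 'use Test::Pod 1.00';
plan( skip_all => 'Test::Pod 1.00 required for testing POD' ) if $@;

all_pod_files_ok( );
```

For ordinary users the file declares a plan of zero tests and exits immediately, so installation never fails because of a documentation check.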

Testing Documentation Coverage

When defining an API, every function or method should have some documentation explaining its purpose. That’s a good goal—one worth capturing in tests. Without requiring you to hardcode the name of every documented function, Test::Pod::Coverage can help you to ensure that all the subroutines you expect other people to use have proper POD documentation.

How do I do that?

Assume that you have a module distribution for a popular auto-racing sport. The distribution’s base directory contains a t/ directory with tests and a lib/ directory with modules. Create a test file, t/pod-coverage.t, that contains the following:

use Test::More;

eval 'use Test::Pod::Coverage 1.04';
plan( skip_all => 'Test::Pod::Coverage 1.04 required for testing POD coverage' ) if $@;
all_pod_coverage_ok( );

Run the test file with prove to see output similar to:

$ prove -v t/pod-coverage.t
not ok 1 - Pod coverage on Sports::NASCAR::Car
# Failed test (/usr/local/share/perl/5.8.4/Test/Pod/ at line 112)
# Coverage for Sports::NASCAR::Car is 75.0%, with 1 naked subroutine:
# restrictor_plate
ok 2 - Pod coverage on Sports::NASCAR::Driver
ok 3 - Pod coverage on Sports::NASCAR::Team
# Looks like you failed 1 tests of 3.
Test returned status 1 (wstat 256, 0x100)
Failed 1/3 tests, 66.67% okay
Failed Test Stat Wstat Total Fail Failed List of Failed
t/pod-coverage.t 1 256 3 1 33.33% 1
Failed 1/1 test scripts, 0.00% okay. 1/3 subtests failed, 66.67% okay.

What just happened?

The test file starts as normal, setting up paths to load the modules to test. The second and third lines of t/pod-coverage.t check to see whether the Test::Pod::Coverage module is available. If it isn’t, the tests cannot continue and the test file exits.

Test::Pod::Coverage exports the all_pod_coverage_ok( ) function, which finds all available modules and tests their POD coverage. It looks for a lib/ or blib/ directory in the current directory and plans one test for each module that it finds.
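To make the failing test shown earlier pass, document the naked subroutine. A sketch of what Sports::NASCAR::Car might look like afterward (the body of restrictor_plate( ) is invented here; Test::Pod::Coverage only cares that a POD section names the subroutine):

```perl
package Sports::NASCAR::Car;
use strict;
use warnings;

=head2 restrictor_plate

Returns the car's restrictor plate setting. With this POD section in
place, the subroutine is no longer "naked" and the coverage test passes.

=cut

sub restrictor_plate {
    my $self = shift;
    return $self->{restrictor_plate};
}

1;
```

Rerunning prove t/pod-coverage.t after adding the section should report full coverage for the module.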
