Testing Computer Software / Edition 2

ISBN-10:
0471358460
ISBN-13:
9780471358466
Pub. Date:
04/26/1999
Publisher:
Wiley

Overview

This book will teach you how to test computer software under real-world conditions. The authors have all been test managers and software development managers at well-known Silicon Valley software companies. Successful consumer software companies have learned how to produce high-quality products under tight time and budget constraints. The book explains the testing side of that success.

Who this book is for:
* Testers and Test Managers
* Project Managers: Understand the timeline, depth of investigation, and quality of communication to hold testers accountable for.
* Programmers: Gain insight into the sources of errors in your code, understand what tests your work will have to pass, and why testers do the things they do.
* Students: Train for an entry-level position in software development.

What you will learn:
* How to find important bugs quickly
* How to describe software errors clearly
* How to create a testing plan with a minimum of paperwork
* How to design and use a bug-tracking system
* Where testing fits in the product development process
* How to test products that will be translated into other languages
* How to test for compatibility with devices, such as printers
* What laws apply to software quality

Product Details

ISBN-13: 9780471358466
Publisher: Wiley
Publication date: 04/26/1999
Edition description: REV
Pages: 496
Product dimensions: 7.22(w) x 9.20(h) x 1.09(d)

About the Author

CEM KANER consults on technical and software development management issues and teaches about software testing at local universities and at several software companies. He also practices law, usually representing individual developers, small development services companies, and customers. He founded and hosts the Los Altos Workshops on Software Testing. Kaner is the senior author of Bad Software: What to Do When Software Fails (Wiley).

JACK FALK consults on software quality management and software engineering management. Jack is certified in Software Quality Engineering by the American Society for Quality. He is Vice Chair of the Santa Clara Valley Software Quality Association and an active participant in the Los Altos Workshops on Software Testing.

HUNG Q. NGUYEN is Founder, President, and CEO of softGear technology. He has worked in the computer software and hardware industries, holding management positions in engineering, quality assurance, testing, product development, and information technology, as well as making significant contributions as a tester and programmer. He is an ASQ-Certified Quality Engineer, and a senior member and San Francisco Section Certification Chairman of the American Society for Quality.

Read an Excerpt


Chapter 6: THE PROBLEM TRACKING SYSTEM


THE REASON FOR THIS CHAPTER

In Chapter 5, we described how a bug is reported. Here we describe what happens to the Problem Report after you report it. This chapter provides the basic design of a problem tracking database and puts it in perspective. It describes the system in terms of the flow of information (bug reports) through it and the needs of the people who use it. We provide sample forms and reports to illustrate one possible implementation of the system. You could build many other, different, systems that would support the functional goals we lay out for the database.

NOTE

Up to now, the "you" that we've written to has been a novice tester. This chapter marks a shift in position. From this point onward, we're writing to a tester who's ready to lead her own project. We write to you here assuming that you are a project's test team leader, and that you have a significant say in the design of the tracking system. If you aren't there yet, read on anyway. This chapter will put the tracking system in perspective, whatever your experience level.

ALSO NOTE

In our analysis of the issues involved in reporting information about people, we assume that you work in a typically managed software company. In this environment, your group is the primary user of the tracking system and the primary decision maker about what types of summary and statistical reports are circulated. Under these circumstances, some types of reports that you can generate can be taken badly, as overreaching by a low-level department in the company. Others will be counterproductive for other reasons, discussed below.

But the analysis runs differently if you work for a company that follows an executive-driven quality improvement program. In these companies, senior managers play a much more active role in setting quality standards, and they make broader use of quality reporting systems, including bug tracking information. The tracking system is much more of a management tool than the primarily project-level quality control tool that we discuss in this chapter. These companies also pay attention to the problems inherent in statistical monitoring of employee behavior and to the risk of distracting a quality improvement group by forcing it to collect too much data. Deming (1982) discusses the human dynamics of information reporting in these companies and the steps executives must take to make these systems work.

OVERVIEW

The first sections analyze how an effective tracking system is used:

  • We start with a general overview of benefits and organizational risks created by the system.

  • Then we consider the prime objective of the system, its core underlying purpose. As we see it, the prime objective is getting those bugs that should be fixed, fixed.

  • To achieve its objective, the system must be capable of certain tasks. We identify four requirements.

  • Next, we look at the system in practice. Once you submit the report, what happens to it? How does it get resolved? How does the tracking system itself help this process?

  • Finally, we consider the system's users. Many different people in your company use this system, for different reasons. We ask here: what do they get from the system, what other information do they want, and what should you provide? There are traps here for the unwary.

The next sections of the chapter consider the details of the system.

  • We start with a detailed description of key forms and reports that most tracking systems provide.

  • Now you understand problem reporting and the overall tracking system design. We suggest some fine points: ways to structure the system to increase report effectiveness and minimize interpersonal conflicts.

  • The last section in this group passes on a few very specific tips on setting up the online version of the report form.

Problem Reports are a tester's primary work product. The problem tracking system and its procedures will have more impact on the effectiveness of testers' reports than any other system or procedure.

You use a problem tracking system to report bugs, file them, retrieve them, and write summary reports about them. A good system fosters accountability and communication about the bugs. Unless the number of reports is trivial, you need an organized system. Too many software groups still use pen-and-paper tracking procedures or computer-based systems that they consider awkward and primitive. It's not so hard to build a good tracking system, and it's worth it, even for small projects.

This chapter assumes your company is big enough to have a test manager, marketing manager, project manager, technical support staff, etc. It's easier for us to identify roles and bring out some fine points this way. Be aware, though, that we've seen the same interactions in two-person research projects and development partnerships. Each person wears many hats, but as long as one tests the work of the other, they face the same issues. If you work in a small team, even a significant two-person class project in school (such as a full-year, senior-year project), we recommend that you apply as much of this system and the thinking behind it as you can.

This chapter describes a problem tracking system that we've found successful. We include the main data entry form, standard reports, and special implementation notes: enough for you to code your own system using any good database program. Beyond these technical notes, we consider the system objectives, its place in your company, and the effect of the system on the quality of your products.
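As a concrete sketch of what "code your own system using any good database program" might look like, here is a minimal Problem Report store in Python's built-in sqlite3. This is not the book's own design; every table and column name here is our assumption, loosely modeled on the report fields discussed in Chapter 5.

```python
import sqlite3

# Minimal sketch of a problem tracking table. Column names are illustrative,
# not taken from the book's sample forms.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE problem_report (
        report_id    INTEGER PRIMARY KEY,           -- unique Problem Report number
        summary      TEXT NOT NULL,                 -- one-line description of the bug
        severity     INTEGER NOT NULL,              -- e.g., 1 (fatal) .. 4 (minor)
        reported_by  TEXT NOT NULL,                 -- tester who filed the report
        assigned_to  TEXT,                          -- programmer responsible for the fix
        status       TEXT NOT NULL DEFAULT 'open',  -- open / fixed / deferred / closed
        date_filed   TEXT NOT NULL                  -- date the report was entered
    )
""")

# File a report, then retrieve all open reports: the two core operations.
conn.execute(
    "INSERT INTO problem_report (summary, severity, reported_by, date_filed) "
    "VALUES (?, ?, ?, ?)",
    ("Crash when printing landscape pages", 1, "kaner", "1999-04-26"),
)
open_reports = conn.execute(
    "SELECT report_id, summary FROM problem_report WHERE status = 'open'"
).fetchall()
print(open_reports)
```

Any database tool would do; the point is only that the core of such a system is a single table of reports plus a handful of queries over it.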

The key issues in a problem tracking system are political, not technical. The tracking system is an organizational intervention, every bit as much as it is a technical tool. Here are some examples of the system's political power and the organizational issues it raises:

  1. The system introduces project accountability. A good tracking system takes information that has traditionally been privately held by the project manager, a few programmers, and (maybe) the product manager, and makes it public (i.e., available to many people at different levels in the company). Throughout the last third of the project, the system provides an independent reality check on the project's status and schedule. It provides a list of key tasks that must be completed (bugs that must be fixed) before the product is finished. The list reflects the current quality of the product. And anyone can monitor progress against the list over a few weeks for a further check on the pace of project progress.

  2. As the system is used, significant personal and control issues surface. These issues are standard ones between testing, programming, and other groups in the company, but a good tracking system often highlights and focuses them. Especially on a network, a good system captures most of the communication between the testers and the programmers over individual bugs. The result is a revealing record that can highlight abusive, offensive, or time-wasting behavior by individual programmers or testers or by groups.

    Here are some of the common issues:

    • Who is allowed to report problems? Who decides whether a report makes it into the database? Who controls the report's wording, categorization, and severity?

    • Who is allowed to query the database or to see the problem summaries or statistics?

    • Who controls the final presentation of quality-related data and other progress statistics available from the database?

    • Who is allowed to hurt whose feelings? Why?

    • Who is allowed to waste whose time? Do programmers demand excessive documentation and support for each bug? Do testers provide so little information with Problem Reports that the programmers have to spend most of their time recreating and narrowing test cases?

    • How much disagreement over quality issues is tolerable?

    • Who makes the decisions about the product's quality? Is there an appeal process? Who gets to raise the appeal, arguing that a particular bug or design issue should not be set aside? Who makes the final decision?

  3. The system can monitor individual performance. It's easy to crank out personal statistics from the tracking system, such as the average number of bugs reported per day for each tester, or the average number of bugs per programmer per week, or each programmer's average delay before fixing a bug, etc. These numbers look meaningful. Senior managers often love them. They're often handy for highlighting personnel problems or even for building a case to fire someone. However, if the system is used this way, some very good people will find it oppressive, and some not necessarily good people will find ways to manipulate the system to appear more productive.

  4. The system provides ammunition for cross-group wars. Suppose that Project X is further behind schedule than its manager cares to admit. The test group manager, or managers of other projects that compete with Project X for resources, can use tracking system statistics to prove that X will consume much more time, staff, and money than anticipated. To a point, this is healthy accountability. Beyond that point, someone is trying to embarrass X's manager, to aggrandize themselves, or to get the project cancelled unfairly: a skilled corporate politician can use statistics to make a project appear much worse off than it is.

The key benefits of a good bug tracking system are the improvements in communication and accountability that get more bugs fixed. Many of the personnel-related and political uses of the database interfere with these benefits by making people more cautious about what information they put on record, what reports they make or allow others to make, and so on. We'll discuss some of these risks in more detail later. First, though, consider the approach that we believe works well.

THE PRIME OBJECTIVE OF A PROBLEM TRACKING SYSTEM


A problem tracking system exists in the service of getting the bugs that should be fixed, fixed. Anything that doesn't directly support this purpose is a side issue.

Some other objectives, including some management reporting, are fully compatible with the system's prime objective. But each time a new task or objective is proposed for the system, evaluate it against this one. Anything that detracts from the system's prime objective should be excluded.

THE TASKS OF THE SYSTEM

To achieve the system objective, the designer and her management must ensure that:

  1. Anyone who needs to know about a problem should learn of it soon after it's reported.

  2. No error will go unfixed merely because someone forgot about it.

  3. No error will go unfixed on the whim of a single programmer.

  4. A minimum of errors will go unfixed merely because of poor communication.
The minimalism of this task list is not accidental. These are the key tasks of the system. Be cautious about adding further tasks....
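Task 2 in particular lends itself to mechanical support: a periodic query for open reports that nobody has touched recently keeps forgotten bugs visible. Here is a minimal sketch of such a check, assuming a simple record layout of our own invention (the field names are not from the book):

```python
from datetime import date

def stale_open_reports(reports, today, max_idle_days=14):
    """Return open Problem Reports untouched for more than max_idle_days,
    so that no bug goes unfixed merely because someone forgot about it."""
    return [
        r for r in reports
        if r["status"] == "open"
        and (today - r["last_touched"]).days > max_idle_days
    ]

reports = [
    {"id": 101, "status": "open",  "last_touched": date(1999, 3, 1)},
    {"id": 102, "status": "fixed", "last_touched": date(1999, 3, 1)},
    {"id": 103, "status": "open",  "last_touched": date(1999, 4, 20)},
]
stale = stale_open_reports(reports, today=date(1999, 4, 26))
print([r["id"] for r in stale])  # only report 101 is both open and stale
```

Running a report like this on a schedule, and circulating it to the people responsible, is one simple way a tracking system satisfies tasks 1 and 2 without any added process.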

Table of Contents

Preface xiii

Notes on the book’s structure and layout xvii

Acknowledgments xxi

SECTION 1—FUNDAMENTALS

1. An example test series 1

The first cycle of testing 1

The second cycle of testing 11

What will happen in later cycles of testing 16

2. The objectives and limits of testing 17

You can’t test a program completely 17

The tester’s objective: Program verification? 23

So, why test? 25

3. Test types and their place in the software development process 27

Overview of the software development stages 30

Planning stages 32

Testing during the planning stages 33

Design stages 35

Testing during the design stages 39

Glass box code testing is part of the coding stage 41

Regression testing 49

Black box testing 50

Maintenance 57

4. Software errors 59

Quality 59

What is a software error? 60

Categories of software errors 60

5. Reporting and analyzing bugs 65

Write Problem Reports immediately 66

Content of the Problem Report 66

Characteristics of the Problem Report 74

Analysis of a reproducible bug 76

Tactics for analyzing a reproducible bug 79

Making a bug reproducible 82

SECTION 2—SPECIFIC TESTING SKILLS

6. The problem tracking system 87

The prime objective of a problem tracking system 90

The tasks of the system 90

Problem tracking overview 90

The users of the tracking system 97

Mechanics of the database 106

Further thoughts on problem reporting 115

Glossary 121

7. Test case design 123

Characteristics of a good test 124

Equivalence classes and boundary values 125

Visible state transitions 132

Race conditions and other time dependencies 133

Load testing 134

Error guessing 135

Function equivalence testing: automation, sensitivity analysis & random input 135

Regression testing: checking whether a bug fix worked 139

Regression testing: the standard battery of tests 140

Executing the tests 141

8. Testing printers (and other devices) 143

Some general issues in configuration testing 144

Printer testing 146

9. Localization testing 169

Was the base code changed? 170

Work with someone fluent in the language 170

Is the text independent from the code? 171

Translated text expands 171

Character sets 171

Keyboards 172

Text filters 172

Loading, saving, importing, and exporting high and low ASCII 173

Operating system language 173

Hot keys 173

Garbled in translation 173

Error message identifiers 174

Hyphenation rules 174

Spelling rules 174

Sorting rules 174

Uppercase and lowercase conversion 174

Underscoring rules 174

Printers 175

Sizes of paper 175

CPUs and video 175

Rodents 175

Data formats and setup options 175

Rulers and measurements 176

Culture-bound graphics 176

Culture-bound output 176

European product compatibility 176

Memory availability 176

Do GUIs solve the problem? 177

Automated testing 177

10. Testing user manuals 179

Effective documentation 179

The documentation tester’s objectives 180

How testing documentation contributes to software reliability 181

Become the technical editor 182

Working with the manual through its development stages 183

Online help 188

11. Testing tools 189

Fundamental tools 189

Automated acceptance and regression tests 191

Standards 197

Translucent-box testing 200

12. Test planning and test documentation 203

The overall objective of the test plan: product or tool? 204

Detailed objectives of test planning and documentation 205

What types of tests to cover in test planning documents 210

A strategy for developing components of test planning documents 213

Components of test planning documents 217

Documenting test materials 242

A closing thought 253

SECTION 3—MANAGING TESTING PROJECTS AND GROUPS

13. Tying it together 255

Software development tradeoffs 257

Software development models 258

Quality-related costs 264

The development time line 266

Product design 267

Fragments coded: first functionality 274

Almost alpha 275

Alpha 277

Pre-beta 286

Beta 286

User interface (UI) freeze 293

Pre-final 295

Final integrity testing 299

Release 301

Project post-mortems 301

14. Legal consequences of defective software 303

Breach of contract 305

Torts: lawsuits involving fault 317

Whistle blowing 340

15. Managing a testing group 343

Managing a testing group 344

The role of the testing group 345

A test group is not an unmixed blessing 349

An alternative? Independent test agencies 350

Scheduling tips 352

Your staff 359

Appendix: common software errors 363

User interface errors 375

Error handling 396

Boundary-related errors 399

Calculation errors 401

Initial and later states 403

Control flow errors 406

Errors in handling or interpreting data 416

Race conditions 421

Load conditions 423

Hardware 427

Source, version, and ID control 430

Testing errors 432

References 437

Index 451

About the Authors 480
