Recruiting Software Testers, Part 2


January 2000: Career and Training

One of the most difficult functions any manager has is selecting good staff. Decisions made in the hiring process will ultimately make or break the mission of the group and, in the long run, the company.

Last month’s article discussed several fundamental factors to consider when seeking potential software testers. After initially defining staffing needs, a manager must establish requirements for the job, examine the motivations of people wanting to get into software testing, and gather information about–and phone screen–job candidates.

Ultimately, though, staffing decisions usually come down to the results of a rigorous interview process. How does the candidate approach testing, and how deep is her knowledge of the field? Does he have project-management experience? How does she relate to her peers, supervisors and staff? Are his bug reports comprehensive and insightful, or terse and ungrammatical? How well does she perform on tests and puzzles specially designed for candidates? These are the key questions that will separate the qualified from the unqualified.

Testing Philosophy

Once I’ve done my homework on the résumé and ascertained the basics about the candidate’s education and past employment, I delve into his testing knowledge and philosophy. For supervisory or senior positions, I ask the following questions:

  • What is software quality assurance?
  • What is the value of a testing group? How do you justify your work and budget?
  • What is the role of the test group vis-à-vis documentation, tech support, and so forth?
  • How much interaction with users should testers have, and why?
  • How should you learn about problems discovered in the field, and what should you learn from those problems?
  • What are the roles of glass-box and black-box testing tools?
  • What issues come up in test automation, and how do you manage them?
  • What development model should programmers and the test group use?
  • How do you get programmers to build testability support into their code?
  • What is the role of a bug tracking system?

I’m not looking for the one right answer about how testing should be done. I simply want to know if the candidate has thought about these issues in depth, and whether his views are roughly compatible with the company’s.

These questions, for example, are designed for a company that focuses on testing with little regard for process standards. Therefore, the candidate’s answers should assure me that he would be comfortable working in a group that doesn’t follow process standards such as ISO 9000-3 or the Capability Maturity Model.

Technical Breadth

After covering philosophy and knowledge, I evaluate the candidate’s technical breadth. Though the actual questions depend on the particular company and application area, the following questions draw out the many facets of an interviewee’s experience:

  • What are the key challenges of testing?
  • Have you ever completely tested any part of a product? How?
  • Have you done exploratory or specification-driven testing?
  • Should every business test its software the same way?
  • Discuss the economics of automation and the role of metrics in testing.
  • Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
  • When have you had to focus on data integrity?
  • What are some of the typical bugs you encountered in your last assignment?

The answer to "Should every business test its software the same way?" indicates a candidate’s open-mindedness and breadth of exposure to the field. I believe the correct answer is no, and I expect to hear that more rigorous testing and process management should be applied to a life-critical application than to a here-today, new-version-tomorrow web-based application.

A candidate also should believe that different application issues call for different approaches. For example, testing a financial application that is written in COBOL and works with a huge database would require different techniques than those used to test the interactive competence of a word processor. Also, an exceptional candidate should discuss the different paradigms of software testing, or how different people view core issues in the field.

Within the black-box world, for instance, James Bach identifies domain testing, stress testing, flow testing, user testing, regression testing, risk-based testing and claim-based testing as separate techniques (Tripos: A Model to Support Heuristic Software Testing, 1997, available at http://www.stlabs.com/newsletters/testnet/docs/tripover.htm). However, in my course on testing, I identify nine paradigms that aid testers in determining the different criteria that create effective test cases or suites: domain testing, stress testing, risk-based testing, random testing, specification-driven testing, function testing, scenario-driven testing, user testing and security testing. A candidate shouldn’t believe that there is one correct partitioning of paradigms but should recognize that different groups with different approaches to testing can each be right.

When I interview senior candidates, I want to find out their opinions on common testing issues and hear a description and evaluation of the tools they’ve used. I’m not looking for agreement. Rather, I want to determine whether the candidate has a well-developed, sophisticated point of view. The data-oriented questions, for example, are excellent for probing a candidate’s sophistication in database testing and the tools used for it. Of course, the questions need to be tailored to the candidate’s skill set and the class of application: there would be little value in asking a highly skilled tester or test manager of interactive applications, such as games or word processors, about databases and their test tools.

Project Management

As a matter of course, supervisory candidates must be queried on their personnel and project-management philosophy. However, I also do the same for potential mid-level or senior testers. At some point in seniority, a tester becomes largely self-managing: assigned a large area of work and left alone to plan the size, type and sequence of tasks within it. Peter Drucker (The Effective Executive, HarperCollins, 1966) defines any knowledge worker who has to manage his or her own time and resources as an executive, and I’ve found that recognizing this managerial side of my mid-level contributors’ work is a genuine insight.

Here, then, are some questions for supervisors or self-managers:

  • How do you prioritize testing tasks within a project?
  • How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
  • When should you begin test planning?
  • When should you begin testing?
  • Do you know of metrics that help you estimate the size of the testing effort?
  • How do you scope out the size of the testing effort?
  • How many hours a week should a tester work?
  • How should your staff be managed? How should overtime be handled?
  • How do you estimate staff requirements?
  • What do you do (with the project tasks) when the schedule fails?
  • How do you handle conflict with programmers?
  • How do you know when the product is tested well enough?

Staff Relations

This series of questions is, again, primarily for supervisory candidates. When I ask managerial candidates these questions about the role of the test-group manager, the answers are quite enlightening.

  • What characteristics would you seek in a candidate for test-group manager?
  • What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?
  • How do your characteristics compare to the profile of the ideal manager that you just described?
  • How does your preferred work style fit with the ideal test-manager role that you just described? How does the way you work differ from the role you described?
  • Who should you hire in a testing group and why?
  • What is the role of metrics in comparing staff performance in human resources management?
  • How do you estimate staff requirements?
  • What do you do (with the project staff) when the schedule fails?
  • Describe some staff conflicts you’ve handled.

If a candidate’s picture of the ideal manager is dramatically different from my impression of him or from his image of himself, I would need to determine whether this difference is a red flag or a reflection of genuine humility. On the other hand, I would not immediately assume that a candidate whose description exactly matches his perception and presentation of himself is pathologically egotistical. It’s possible he’s just trying to put himself in a good light during the interview. This is fine, as long as he doesn’t exaggerate or lie. Finally, if a candidate’s description of the ideal manager differs fundamentally from the expectations of the company, then I wonder whether this person could fit in with the company culture.

Tests and Puzzles

Some managers use logic or numeric puzzles as informal aptitude tests. I don’t object to this practice, but I don’t believe such tests are as informative as they are thought to be. First, there are huge practice effects with logic and numeric puzzles. I had my daughter work logic puzzles when she was in her early teens, and she became quite good at solving them.

Her success didn’t mean she was getting smarter, however. She was simply better at solving puzzles. In fact, practice effects are long lasting and are more pronounced in speeded, nonverbal and performance tests (Jensen, A.R., Bias in Mental Testing, The Free Press, 1980). Second, speed tests select for mental rabbits, those who demonstrate quick, but not necessarily thorough, thinking. Tortoises sometimes design better products or strategies for testing products.

A Simple Testing Puzzle

An old favorite among commonly used speed tests is G.J. Myers’ self-assessment (The Art of Software Testing, John Wiley & Sons, 1979). The candidate is given an extremely simple program and asked to generate a list of interesting test cases. The specific program involves an abstraction (a triangle). I prefer this puzzle because it tests something testers will actually do: analyze a program and figure out ways to test it. However, there will still be practice effects. Average testers who have worked through Myers before will probably get better results than strong testers who have never seen the puzzle. Additionally, I suspect that cultural differences will also produce different levels of success, even among skilled testers. Someone who deals with abstractions, such as geometric figures or the logical relationships among numbers, has an advantage over someone who tests user interfaces or a product’s compatibility with devices.
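
For readers who haven’t seen it, Myers’ exercise asks the candidate to test a program that reads three numbers and reports whether they form a scalene, isosceles or equilateral triangle. The sketch below is my own minimal reconstruction in Python; the function name and the exact handling of invalid input are assumptions for illustration, not Myers’ original code.

    # A minimal reconstruction of the triangle program from Myers'
    # self-assessment; classify_triangle() and its return values are
    # assumptions for illustration, not the book's original code.
    def classify_triangle(a: int, b: int, c: int) -> str:
        sides = sorted((a, b, c))
        if sides[0] <= 0:
            return "invalid"      # non-positive side length
        if sides[0] + sides[1] <= sides[2]:
            return "invalid"      # violates the triangle inequality
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    # A few of the interesting cases a strong candidate should propose:
    assert classify_triangle(3, 4, 5) == "scalene"
    assert classify_triangle(3, 3, 3) == "equilateral"
    assert classify_triangle(4, 3, 3) == "isosceles"   # order shouldn't matter
    assert classify_triangle(1, 2, 3) == "invalid"     # degenerate: a + b == c
    assert classify_triangle(0, 4, 5) == "invalid"     # zero-length side
    assert classify_triangle(-3, 4, 5) == "invalid"    # negative side

A strong candidate also raises cases that the signature hides, such as non-numeric or missing input.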

Bug Reports

Writing a bug report is one of the most basic and important parts of a tester’s job. Nonetheless, there is a lot of variation in the quality of bug reports, even among those written by testers who have several years’ experience.
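
For illustration only, here is the shape I hope to see in a report; the bug, build number and environment below are hypothetical:

    Sample report (hypothetical):
    Summary:  Crash when opening a zero-byte .doc file
    Steps:    1. Create an empty file named test.doc.
              2. Choose File | Open and select test.doc.
    Result:   The application terminates with an access violation.
    Expected: The file opens as an empty document, or a clear error
              message appears.
    Notes:    Reproducible 5 times out of 5; does not occur with
              zero-byte .txt files. Seen on build 1.2.614 under
              Windows NT 4.0.

A report like this is short, reproducible and explains why the behavior is wrong; a weak report gives only the symptom.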

To test this ability, find a reasonably straightforward bug in part of your software that is fairly easy to understand and have the candidate write a report. If none of your product’s bugs fit the bill, www.bugnet.com can provide one. It’s easy to recognize an excellent bug report; even so, having sample reports from your staff can help you calibrate the quality of the attempt.

There are many other good puzzles and tests in use, and all are based on a common premise: if you can find a way to present a portion of the job that the tester will actually do, you can see how well the tester does it. You have to make the test fair by designing it so that someone who doesn’t know your product can still do well. That’s challenging. But if you can come up with a fair test, the behavior that you elicit will be very informative.

Having comprehensively questioned and tested all your promising candidates, you’ll have ample data with which to make your decision–and choose a winner.

Another Simple Testing Test

Draw a simple Open File dialog box on a whiteboard, explaining "This is an Open File dialog. You can type in the file name (where it says File4 at the bottom), or you can click the Open button to open it." Hand the marker to the candidate and ask her how she would test the dialog. Make it clear that she can have as much time as she wants, can make notes on the whiteboard or on paper, and that many candidates take several minutes to think before they say anything. When the candidate begins presenting her thoughts, listen. Ask questions to clarify, but don’t criticize or challenge her. When the tester pauses, let her be silent. She can answer when she’s ready.

This is a remarkable test in the extent to which answers can vary. One candidate might stay at the surface, pointing out every flaw in the design of the dialog box: There is no cancel button or dialog title, no obvious way to switch between directories, and so on. Another candidate might skip the user-interface issues altogether and try testing the opening of large and small files, corrupt files, files with inappropriate extensions or remote files (specified by paths that she types into the file name box, such as d:\user\remote\File4).
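
If you wanted to capture the second candidate’s functional ideas in runnable form today, the test list might look like the following minimal Python sketch; the file names and the open_file() wrapper are invented for illustration and are not part of the exercise:

    # Hypothetical sketch: exercising an application's file-open path
    # with the kinds of inputs a strong candidate proposes.
    def open_file(path):
        """Stand-in for the application's Open File action."""
        with open(path, "rb") as f:
            return f.read()

    test_files = [
        "empty.txt",              # zero-byte file
        "huge.dat",               # very large file
        "truncated.doc",          # corrupt or truncated file
        "picture.txt",            # content doesn't match the extension
        r"d:\user\remote\File4",  # remote path typed into the name box
    ]

    for path in test_files:
        try:
            open_file(path)
            print("opened", path)
        except OSError as err:
            print("could not open", path, "-", err)

Either line of attack is legitimate; what I watch for is whether the candidate knows that both exist.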

Because of the variation in responses, over time I’ve changed how I use this test. Now, I present the dialog as before, giving the candidate a marker and whatever time she needs. I compliment the analysis after she has finished her comments, regardless of her performance. Then, I show her the types of tests that she missed. I explain that no one ever finds all the tests and that sometimes people miss issues because they are nervous or think they have been rushed. I spend most of the time showing and discussing different types of tests and the kind of bugs they can find. Finally, I erase the whiteboard, draw a Save File dialog that is just as poorly designed, and ask the tester to try again.

Because the differential practice effects are minimized by the initial practice test and coaching, the real test is the second one. The candidate receives feedback and is reassured that she isn’t a dolt. In fact, most testers are substantially less nervous the second time through.

The second test allows me to find out if this candidate will be responsive to my style of training. Did the candidate understand my explanations and do a substantially better job in her next attempt? If the answer is yes, I have a reasonable candidate (as measured by this test). If the candidate’s second analysis wasn’t much better than the first, she is unlikely to be hired. She might be a bright, well-intentioned, interesting person, but if she doesn’t learn when I teach, she needs a different teacher.

Occasionally, I have dispensed with the second test because the candidate did hopelessly poorly during the first test or was extremely defensive or argumentative during my explanation of alternative tests. That usually means I’m finished with the candidate. I’ll spend a little time looking for a polite way to wrap up the interview, but I won’t hire him.

