Competency Based Tester Qualifications - The Next Step for the Testing Profession?

Although the existing certification schemes (ISEB/ISTQB, CSQE etc.) go some way towards providing a measure of a tester's knowledge, their highly detailed syllabuses give no measure of a tester's competence or experience.

Examinations tend to be multiple choice, and 'correct' answers are derived from what is written in the syllabus. As a consequence, questions and answers in these exams can take little account of context or of the candidate's experience. To many people, this implies the schemes and the certificates awarded are practically useless. Because the syllabuses are so highly detailed, courses become a commodity focused on exam passing. Competition in the training marketplace is based on cost and volume rather than the quality of training.

Unless our industry provides the next step in provision of meaningful qualifications to accurately reflect experience and competence, those who recruit and manage testing staff will continue to depend on inadequate knowledge-based qualifications.

Although ISEB/ISTQB qualifications are now a prerequisite for many tester roles, I know from personal experience how easy it is for people who have never tested in their life to swot up and pass the Foundation exam. So how valuable are these qualifications really? Does a Foundation certificate demonstrate only that a candidate can pass an exam by rote? Not good.

This session suggests that there is increasing demand for training and certification schemes that are based on competence rather than memory. What would such schemes look like? We've got some ideas, but we seek your input:

  • A bigger focus on the practical aspects of doing a job, rather than learning theory.
  • Practical, hands-on training on topics including test writing and review, test design, execution, incident logging, phase-end reviews etc.
  • Coursework or course contribution-based assessments by peers and practitioners.

We’ll provide a suggested framework that may help structure the measurement of competencies for different roles within testing and some ideas on how this measurement could take place. But wouldn’t it be great to move this forward to a scheme that really benefits individuals and our industry? Some obvious questions include:

  • What could such training and assessment schemes look like?
  • Will schemes like this be a cash-cow for training providers?
  • How will the content of training and scope of certification be defined and agreed in the industry?
  • How would such a scheme be regulated so that training providers deliver training that improves competence rather than ability to pass exams?
  • How will a tester's competence be assessed?

Work with us in this session to help answer some of these questions. Perhaps we can also identify some of you who would like to get more deeply involved and take this to the next level.

This topic will be discussed at the April 2008 TM Forum.


Comment from PeterFV:

If you set a standard it shouldn't be changed. It's not that the standards are too low but rather that they are measuring the Wrong Things. We've been here before.

If people are now raising the pass level then we (as test managers) are going to be in the unhappy position of having to guess (on the basis of how recently the candidate passed the exam) how good they are.

Let's go back to basics and (like M. Binet) devise some really good tests, administer them, throw out the ones that fail to predict well, and try again. This is what Susan's trying to do.