2012 Test Management Summit


The SIXTH TEST MANAGEMENT SUMMIT
took place on Monday 6th and Tuesday 7th February 2012

Designing for performance, Stevan Zivanovic, BJSS
Designing any testing is hard. The pressures imposed by projects are such that testing is expected to “just do it” and deliver. Is this fair? What can be done? How does this change when we deal with performance testing? This session will be highly interactive, exploring mechanisms for designing test approaches using the experiences of the participants.

Testing is a passion for Stevan. Over the 19 years that he has been actively involved, he has worked in a wide variety of businesses, from safety-critical aircraft systems to financial “start-ups”. He has a pragmatic approach to testing and a proven track record of implementing realistic, workable, real-world changes, from individuals up to large multinational teams. Stevan has presented at EuroSTAR, the UK Test Managers Forum, IQNITE, the Agile Business Conference and at various companies. He enjoys speaking and training people to motivate them in testing and quality.

 

Does testing improve quality? Ben Fry, SQS

“Quality” means different things to different people. To a user its meaning is different to that of a developer or a tester. It’s clear that conversations between senior and exec management focus on quality, not Testing. So, what role does Testing play in an organisation’s attempt to achieve an appropriate level of “quality”? And, what role should Testing play? We will discuss the different perceptions of “quality”, the role Testing plays, the role Testing should play, and ways we can improve the organisation-wide understanding and appreciation of “quality” and Testing.

 

Designing for Regression Testing, Nikhil Nashikkar & Vishnu Chittan, ThinkSoft Global

As economic conditions bite, IT budgets are cut, and the testing spend even more so. While change is inevitable in IT organisations, the focus of CIOs is to reduce the cost of regular, so-called “business as usual” activities. From the test team’s perspective, those changes need to be acknowledged when regression testing. This session will cover the following:

  • What components make a good, cost-effective regression test?

  • How do you measure the success of a regression test?

  • Open discussion to the floor: How fast, smart and effective is the regression testing in your organisation?

     

Managing Remote Teams, Mike Bartley, T&V Solutions

“Susan, we need you to take over the team in X. Please make sure there is no drop in quality.”

“Paul, the merger with Y means we have a team of software testers in Z to add to your existing team. That should solve your resource problems.”

“Jo, the CEO has decided we need to cut costs and outsource the software testing to X. Can you find a supplier and then manage the team please? Oh, and make sure the CEO gets his cost reduction!”

People are asked to manage remote teams for a variety of reasons and then expected to deliver to certain externally imposed targets.

In this session we will discuss the practicalities of managing a remote team and what needs to be put in place to ensure that the various objectives can be met. We will also ask what specific challenges arise when the remote team is performing software testing.

 

Developing team skills, Mike Jarred and Luke Avsejs, IDBS
This session will outline the journey taken to implement a competency-based Professional Development Framework (PDF) for testing within IDBS. Most of the testers within IDBS come from a scientific background and so have huge domain knowledge; the PDF has been used to establish their level of skills and competency in testing, as well as to showcase their achievements to the rest of the organisation.
As well as sharing their implementation experience, Mike and Luke will host an interactive discussion on all aspects of skills development so the group can share knowledge and learn from each other.

Mike Jarred is the Director of Testing at IDBS, a market-leading provider of innovative enterprise data management, analytics and modelling solutions which increase efficiency, reduce costs and improve the productivity of industrial R&D and clinical research. Mike has been in QA & Testing since 1990, implementing and developing testing teams in a variety of domains including Investment Banking, General Insurance, Retail and Private Medical Insurance.

Dr. Luke Avsejs is a Test Team Lead at IDBS with eight years of experience in IT, much of it involving staff development. Currently working on pharmaceutical data management software, Luke has a background in research chemistry and the life sciences.



Crowdsourced Testing – Does it have Legs? Richard Neeve, Independent

Up until six months ago my interest in the burgeoning topic of crowdsourcing, and specifically crowdsourced testing, had been merely academic. But since then I’ve managed a client’s full-blooded embrace of crowdsourced testing, crowdsourced a name and logo for a startup, and worked in a business that has crowdsourcing at its core. Previous TMF discussions on this topic have fallen a little flat, and this one may too, but I think the lack of concrete experience reports has been a problem.
Hopefully I can start to plug that gap by sharing what I’ve learned and trying to respond to the hopes and concerns people may have about using this approach in their own organisations.

Within a week of gaining a first-class degree in computer science in 1998, Richard was recruited as a contract tester by Antony Marcano, before rising through the ranks in a wide range of environments to become the BBC’s Head of Testing in just his ninth year of experience. In 2010/11 he worked with CxO-level stakeholders to successfully deliver a critical troubleshoot for a city firm in the credit derivatives space, despite having no previous financial experience. He has maintained a 100% contract-renewal record and has recently used commercial crowdsourcing to achieve objectives that have included, but also extended beyond, software testing.

 

Agile Techniques: Which ones really work in practice? Paul Gerrard, Gerrard Consulting

After a brief brainstorm to set the agenda, this session will be run as a Goldfish Bowl. Come along and bring YOUR questions and experience to the group. Listen to people's experiences of what does and doesn't work.
We'll explain how a goldfish bowl works when you arrive, but put simply, it is a cross between musical chairs and a panel session where you take a seat on the panel to speak to the group. A civilised 'question time' that usually works very well at the TMF.

Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and, most recently, a publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them. Educated at the universities of Oxford and Imperial College London, he is a Principal of Gerrard Consulting Limited and the host of the UK Test Management Forum.

 

Testing phases, are they still relevant? Derk-Jan de Grood, Valori

Lately there has been a lot of discussion about how testing will be organised in the near future. Some state that the test phase will cease to exist. This session discusses that statement. Testing will be required throughout the development lifecycle; test activities are expected to shift from an independent phase near the end of the lifecycle towards various activities spread throughout it. These activities include reviews as well as end-to-end testing and even testing in production. In order to decide whether a separate test phase has value, we need to consider two components: testing has a technical component and an intelligence component. The first focuses on making the software work; the second focuses on governance and on providing information. We do not test because we can; we organise test activities because they matter. Each activity aims not only at finding bugs, but also at providing trust at an early stage of the development process. Is a lot of testing to be done by non-testers, for example the BA review of requirements, or testing by developers within their Scrum sprint? Moreover, James Whittaker states that users will be involved in testing more and more. Is the tester’s role then more of a coaching or a controlling role? But this does not suit all situations; let’s also consider where an old-school testing phase might be the best solution. So we end up with more options, tools and measures to choose from, and to choose well we require vision and an understanding of the goals and of the test profession.

Derk-Jan de Grood works for Valori as an experienced test manager and advises companies on how to set up their test projects. Derk-Jan has published several successful books and is currently co-authoring a book on the future of testing, which is expected in mid-2012.

Load Testing in the Cloud – Benefits and Challenges, Bruno Duval and Thomas Ripoche, Neotys

Many companies have moved applications to the cloud as a way to reduce capital expenditure while improving IT focus and effectiveness. End users see the cloud as a way to access their documents and applications remotely from anywhere and from any device. IT managers see the cloud as a means of rapidly adapting their infrastructures as needed via virtualization using a pay-as-you-go model. But what about load & performance testing engineers? Can they seize the opportunities afforded by the cloud to better test the performance of web applications?
As with past overhyped trends in IT, it is important to look beyond the talk to find concrete ways to take advantage of this new technology’s flexibility and scalability to save time, reduce costs, and improve the way your organization works.
This presentation describes how the cloud is revolutionising load testing and discusses the advantages it provides in many situations to ensure your web applications perform well in production. We will also look at the drawbacks of only using the cloud and investigate the key capabilities to look for in a load testing solution. Without the right tools in place, simply moving your testing activities to the cloud will likely not deliver the results necessary to justify the move. Understanding how to apply the right tools and practices to make the most of the cloud is fundamental to cloud-based testing and vital to ultimately going live with total confidence.
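As a purely illustrative sketch (not the Neotys tooling discussed in this session), the snippet below shows how a cloud-friendly load test can be expressed as code using the open-source Locust tool; the target URL and endpoints are hypothetical, and in practice the same script would be run by worker processes distributed across cloud regions to generate the required volume.

# Hypothetical Locust script: defines the behaviour of one simulated user.
# Scale-out in the cloud is achieved by running many worker processes,
# for example one per cloud region, all reporting to a master node.
from locust import HttpUser, task, between


class StorefrontUser(HttpUser):
    # Placeholder application under test; replace with the real base URL.
    host = "https://shop.example.com"
    # Each simulated user pauses 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task(3)
    def browse_catalogue(self):
        # Weighted 3x: most traffic is browsing.
        self.client.get("/products")

    @task(1)
    def view_basket(self):
        self.client.get("/basket")

Run from a single laptop, a script like this drives a handful of users; distributed across cloud workers, the same definition can drive many thousands, which is the scalability argument made in the session.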

Bruno Duval has a master’s degree in Network and Distributed Systems and has spent the last 10 years in the load and performance testing field. He has performed critical load tests in all industries and on all technologies. He is a genuine load testing expert and has set up performance testing teams around the globe for several corporate accounts. For the last 5 years he has led the Neotys Professional Services team, with consultants in Europe and America, and actively participated in creating the groundbreaking Neotys Cloud Platform. Recently Bruno has run numerous successful high-volume load tests around the globe using NeoLoad and the Neotys Cloud Platform, generating over 100,000 concurrent users and once again proving Neotys’s strong expertise on large projects.

Thomas Ripoche is VP of Sales EMEA & AsiaPac at Neotys. Throughout his career he has gained vast experience of highly critical web performance testing projects. One of his main responsibilities is to enable the implementation of modern load testing practices using NeoLoad. This new generation of load testing tool, suited to today’s web applications, generates more value for organisations while increasing efficiency and shortening load testing cycles.


Latest Open Source Successes, Paul Rolfe & Vishnu Chittan, ThinkSoft Global

Does the test tool’s capability limit the degree of completeness that can be delivered? While responses to this question will be mixed, from a commercial standpoint one has to consider what open source tools are capable of delivering.

This session will explore some of the recent successes and lessons learnt in practically delivering Test automation & performance testing using open source tools. It will be an interactive session with practical examples of open source tool problems solved to deliver desired results. The audience will also be encouraged to share their experiences.
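As a purely illustrative, hypothetical sketch of the kind of open-source automation the session refers to, the following uses the Python bindings for Selenium WebDriver to drive a browser through a simple check; the site, element locators and expected text are placeholders, not examples from the session.

# Hypothetical Selenium WebDriver check (Python bindings, Selenium 4 style).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local Chrome browser is available
try:
    # Placeholder application under test.
    driver.get("https://shop.example.com/login")

    # Fill in the login form and submit it.
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()

    # Minimal assertion: the landing page should greet the user.
    assert "Welcome" in driver.page_source, "Expected greeting not found"
finally:
    driver.quit()

A check like this costs nothing in licence fees, which is exactly the commercial trade-off the session asks the audience to weigh against the capabilities of commercial tools.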

 

Test Estimation – a pain or painless experience? Lloyd Roden, Lloyd Roden Consultancy

Test estimation is one of the hardest activities to do well in testing; the main reason is that testing is not an independent activity and often has destabilising dependencies. During this session we shall uncover some of the common problems in test estimation and how to overcome them, together with seven ways we can estimate test effort. Learn how to estimate one of the vital missing ingredients when it comes to testing: quality. Learn how there is a direct correlation between the estimate of effort and the estimate of the quality that we supply to management.

Lloyd says … “With more than twenty-eight years in the software industry, I have worked as a Developer, Test Analyst and Test Manager for a variety of different organisations. From 1999 to 2011 I worked as a consultant/partner within Grove Consultants. In 2011 I set up Lloyd Roden Consultancy, an independent training and consultancy company specialising in software testing. My passion is to enthuse, excite and inspire people in the area of software testing and I have enjoyed opportunities to speak at various conferences throughout the world including STAREAST, STARWEST, EuroSTAR, AsiaSTAR, Belgium Testing Days and Better Software as well as Special Interest Groups in software testing in several countries.  I was privileged to be Programme Chair for both the tenth and eleventh EuroSTAR conferences and won the European Testing Excellence award in 2004.”

 

The Ten (false) Commandments of Testing, Morten Hougaard, Pretty Good Testing

‘The Ten (false) Commandments of Testing’ are:

  • You shall NOT start testing without requirements

  • You shall find EVERY bug

  • You shall use BEST PRACTICE

  • You shall be CERTIFIED

  • You shall AUTOMATE everything

  • You shall use THE Tool

  • You shall ENSURE (high) Quality

  • You shall use THE test technique

  • You shall use THE Model (Waterfall, V-Model, Scrum...)

  • You shall stay FULLY INDEPENDENT (from developers)

For many years these (and other) false ‘commandments’ have had a heavy impact on the testing industry and done a great deal of harm. Worse still, these ‘wrong messages’ still seem, at least to a certain degree, to be ‘preached out there’.
Morten invites you to participate in a discussion on what kinds of (false) commandments we (still) meet ‘out there’ and how we can use whatever fits our situation best, becoming pragmatic (and thinking) test professionals instead of merely ‘blind followers’.

 

Navigating the Sea of Siloes for Application Delivery, George Wilson, Original Software

A survey just over 18 months ago revealed discontent in the Application Quality Management (AQM) market, with 84% of users stating that their products did not meet their functional requirements. So why is it, then, that so many companies still rely on a plethora of management products to help with their application delivery? From defect management to bug recording, from requirements management to development reporting, and from agile management to ALM dashboards, the fact is there doesn’t seem to be a solution able to provide a holistic view and complete metrics when it comes to managing and reporting on all aspects of software delivery. This session will explore ways to provide a real-time dashboard of ALM processes so that you can plot your exact position and navigate your voyage successfully.

 

Mobile Testing: a mandatory task or an option? Michael Hentze, Tricentis

In 2011, sales of smartphones increased by 74% compared with the previous year, and in 2012 this trend will continue. The wide range of technologies, operating systems and application types does not only pose a technical challenge for software testing; it also requires basic requirements to be reconsidered. Test managers must make strategic decisions on whether to use simulators or real mobile devices for their tests; they must guarantee security and integrate the testing into existing ALM infrastructures. But what must be tested, and is this testing mandatory or optional?