Quarterly Forums

Future Forum Events

Forum events usually take place on the last Wednesday of April, July and October each year; the January event is the Annual Summit.


Future forum events will take place on the following dates:

  • 5 February 2014 - Quarterly forum at Balls Brothers
  • 29-30 April 2014 - Annual Summit

Details of ALL Previous Forum Events are listed below.


2004 Meetings

January 2004

The Inaugural Meeting of the Test Management Forum
The inaugural meeting took place on Wed 28th January 2004 at the Saint Georges Hotel, London.

Compuware graciously sponsored the event and, with twenty-seven attendees, the meeting was deemed a great success by the delegates.
Two discussions were facilitated by Paul Gerrard:

  • The Role of Testing in IT Governance
  • The selection, assessment and development of professional testers

The forum members agreed to meet again on Wednesday 28th April 2004.

April 2004

The second meeting of the Forum took place on Wednesday 28th April at the Grange White Hall Hotel, London. We are grateful to IBM Business Consulting Services, who sponsored this event.

Programme Test Management, facilitated by Graham Thomas

Supplier Management, facilitated by Susan Windsor

July 2004

The Future of Testing Services

Many of us have worked with, for or through Testing Services companies. Historically, these companies have not provided services but 'bodyshopped' to cover resource shortfalls for client organisations. It is clear that, historically, many clients could not accommodate 'managed services' and perhaps the testing companies found it easier to bodyshop. It is also true that some testing companies started life as contract agencies and still work to that model. But what do clients really want? What do testing companies really want to provide? Is the market maturing, opening the door to a range of sophisticated service providers? Or will the market always be dominated by contract agencies?

Facilitated by Paul Gerrard, Gerrard Consulting

Paul has written an article for the BCS 2005 Annual Review which is broader ranging but touches upon Testing Services; you might find it interesting. See attachments.

Antony Marcano asks a question on a related topic here.

Future methodologies, old challenges and the future role of Test Managers

Agile methods promise a lot: more disciplined developers delivering more reliable, test-first developed code into flexible test environments. But it's clear that the Agile approach is a fig leaf for many development groups. Are they doing the same old stuff but saying they are now 'Agile'? It seems that Agile is here to stay, so how should Test Managers deal with the new methodologies? How should we deal with the disconnect between Agile processes and 'Best Testing Practice'?

Facilitated by Graham Dwyer

October 2004

'Testing Centres of Excellence' - the challenges and rewards

A growing industry trend is to move testing (as well as other disciplines) towards a Centre of Excellence model. Other, more mature industries, such as aerospace, have been working with this model for many years. This session will discuss the pros and cons of the Testing Centre of Excellence. Who has implemented a centralised approach, and what were the rewards and challenges? Who has rejected it, and why? For those on the path to a centre of excellence, what are the hurdles? Where do you start? How do you measure success?

Facilitated by Colin Robb, Mercury (now HP)

Success with Tools

Nowadays, everyone has experience of implementing and using automated test execution tools. Some organisations have great success with tools, others use them with limited or sporadic success whilst a depressing number of companies get nowhere. The risk of investing in 'shelfware' is making us all wary of embarking on ambitious automation programmes. This session aims to explore the critical risks and success factors of test automation. Do we fail to manage expectations? Does it matter which tool we select? Are cultural issues significant? Are automation frameworks the answer? Bring your experience to the forum and air your views.

Paul introduced the topic by showing a subset of a presentation he gave at EuroSTAR 1998, 'CAST: Past Present and Future'.

Facilitated by Paul Gerrard, Gerrard Consulting

2005 Meetings

January 2005

Testing as thought leadership... How do we increase the perceived value of testing at board level?

How often is testing viewed as a necessary evil that incurs excessive costs, rather than an activity that adds real value? There are still many organisations that give testing little consideration until development is well advanced, and a lot of others that plan for testing but often reduce the time allocated to compensate for project overruns in design and development. For an industry that analysts state is approaching maturity, there are still many instances of projects failing to deliver quality products. How do we, as Test Managers, raise the profile and understanding of testing at board level so that the perceived value of testing is not underestimated within the business?

Facilitated by Danie Jamieson, IS Integration

Some references provided by members of the forum are included here:

In the opinion of Don Mills, Louise Tamres's 'Quick Start to Quality – Five Important Test Support Practices' addresses several of the issues we talked about today.

Here's the summary from Sticky Minds: 'As testers, we understand the virtues of clear requirements, effective configuration management, software inspections and reviews, project planning, and project tracking. But how does the test group influence an immature organization to improve in these areas? Sometimes, the test group has to use short cuts, partial implementations, and even a clandestine approach to get things done. Practical strategies used at several software organizations have quickly improved product quality by addressing these five critical development practices.'

The Sticky Minds URL is www.stickyminds.com/se/S8345.asp

Antony Marcano provided the following references, which will also be of interest:

  • Testing, Zen & Positive Forces by Danny Faught
  • Testing & Codependency by Lee Copeland

Antony has also posted some thoughts on his blog here.

Non-Functional Testing

In the push for functionality at speed, non-functional testing is often left to the very end of projects. Given how critical fast, reliable, available and secure systems are to businesses, it is time to review non-functional testing. With the changes to development methodologies, including Agile, non-functional testing can no longer be considered as something that happens at the end of the testing lifecycle. What can be done, and why?

Facilitated by Stevan Zivanovic, Probatur Limited

April 2005

Attachments:
  • InfluentialTestManager.zip (2.29 MB)
  • Psychology.zip (513.23 KB)

The Influential Test Manager

So what does make an influential Test Manager? The Oxford Dictionary defines 'influential' as exerting, or having, great influence. What value does an influential Test Manager have? Does influence guarantee a successful project?

With your help, during this session we will define what we believe is success in a test project and explore three possible solutions to the question 'What influencing skills does a Test Manager need to engineer the success of a project?'. Are we talking about political skills, technical skills, people skills or a mixture of them all?

Facilitated by Geoff Thompson, Experimentus

The Psychology of Testers

What motivates someone to spend their working life checking what someone else has done?

Why, when we 'know' systems are not ready for live release, do we still get 'ignored'?

What's in it for us?

This session is intended to ask what really motivates testers. Maybe if we can understand what our pay-off is in doing the job, we can communicate that better. If we can communicate better, then surely, we can all do our jobs better, create better systems, and end the week happier.

Facilitated by Roy Dalgleish, British Airways

July 2005

Attachments:
  • Testing Metrics (301.5 KB)

Testing Outsourced Applications

The outsourcing trend for software development continues apace. How does this affect the way we test those applications? What are the key issues to bear in mind? Do cultural differences have an impact? Does distance affect the test process? Is communication the only issue?

Facilitated by Sarah Saltzman

Testing Metrics

In the 2004 survey of topics to be discussed at the Forum, metrics came top of the list. If testing is a measurement activity, metrics must lie at the heart of what testers do. We are all familiar with estimation, counts of tests passed, failed, incidents raised, resolved and so on. But are the standard metrics we tend to work with the right ones? Do we count what is easy to count rather than what is valuable? Do our stakeholders take these metrics seriously?

This talk focuses on two questions:

  • Are we counting the right things?
  • Are we presenting our metrics in the right way?

Facilitated by Paul Gerrard, Gerrard Consulting
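
To make the session's questions concrete, here is a minimal sketch of the kind of counting it challenges; the categories and figures below are invented for illustration only, not from the session:

    # Hypothetical test-cycle figures: easy to count, but are they the right things?
    test_results = {"passed": 412, "failed": 37, "blocked": 11}
    incidents = {"raised": 58, "resolved": 41}

    total_run = sum(test_results.values())
    pass_rate = test_results["passed"] / total_run

    # A bare pass rate says nothing about the importance of what failed,
    # which is one reason stakeholders may not take such metrics seriously.
    print(f"Tests run: {total_run}, pass rate: {pass_rate:.1%}")
    print(f"Incidents still open: {incidents['raised'] - incidents['resolved']}")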

October 2005

Acceptance testing: does it deliver business value to the client and supplier?

IT solutions often have many system components allied to numerous business processes. The objectives of acceptance testing are often not clearly understood, leading to misunderstandings and issues between suppliers, clients, operations and the business communities. If this is true, are we wasting valuable time in acceptance testing? If we are, how can we improve the efficiency and effectiveness of the acceptance process?

Facilitated by Sam Clarke, Principal Consultant, nFocus

The Skill Set of a Test Manager

The ISEB/ISTQB certification scheme does not focus on the particular skill set of a Test Manager. Test Management skills are many and variable: they cover the technical aspects of testing of course, but they also cover broad interpersonal skills and many of the planning, organisational and controlling skills of project managers. This session will brainstorm a set of skills a test manager needs. The skills profile will be used as a straw man to encourage the wider testing community to compile a potential 'syllabus' for Test Management Training, but also for skills assessment. Surely the industry needs this more than ever?

Subsequent to the meeting, Paul conducted a survey of members.

The resulting paper is posted here.

Facilitated by Paul Gerrard, Gerrard Consulting

2006 Meetings

January 2006

Quality Street: Software Quality Unwrapped

The newspapers are full of stories of late, over-budget IT projects that don’t meet business requirements. With hundreds of IT standards, tens of thousands of qualified testing professionals, a revamped, customer-focused ISO 9000 and the new CMMI models, how can we ever go wrong on an IT project? How do you avoid the hype and identify the practical stuff that will help test managers to influence the quality of the software under test? This session will discuss requirements quality, test team capability and whether offshoring makes a difference.

Facilitated by Stephen Allott, Electromind

Post-Deployment Monitoring - is it part of Test?

OK, testing is done, the users are happy, and we've had the end-of-project party. Are we finished? The 'traditional' development lifecycle ends with delivery, but the business lifecycle is now in full swing. Many of the tools and techniques used to test the system can now be reused to monitor that same system in production. Technically, post-deployment monitoring fits easily into test. But most organisations have operations and support teams who have their own tools and processes. Should testing carry on? Should we leave it to the 'professionals'? Is this a natural extension of test or a prickly political issue?

Facilitated by Paul Gerrard, Gerrard Consulting

April 2006

Strategic Direction for Functional Test Automation

After many years of buying and trying to use test automation tools for functional testing, we must admit that our industry has totally failed to realise the predicted benefits. In the words of Paul Herzlich, Ovum software testing analyst, 'Functional Test Automation is Broken'. Today, only around 20% of functional testing is automated.

Is our dependence on functional testers reducing? The trend seems to be that developers will increasingly take on more unit testing and business analysts are starting to use a new breed of functional test tools – Test Automation Frameworks.

If we can't lead automation, where is Functional Test Automation leading us?

Facilitated by Susan Windsor, Managing Director WMHL Consulting

Test Process Improvement is a Waste of Time

Everyone wants to get better at testing, don't they? Everyone in IT wants to get better at everything, don't they? The usual approach from specialist consultants is to look at process. If we improve our process, we must improve our performance, right? Better, faster, cheaper - in whatever discipline we choose to focus on. Tools can help, training can help, infrastructure can help, but process comes first. But why is this?

The process improvement bandwagon started with Deming and others in the 1940s and 50s. The mantra then was, 'improve your product quality and cost of production by improving your processes based on statistical data'. This mantra works, but it works for manufacturing processes: very high-volume and mainly automated. Few people use production-line processes to create software, so why do we delude ourselves that process is so important?

Perhaps it's because process is easy. Processes can be defined, scoped, taught, automated, measured, replaced. But what are the real barriers to improvement? People, culture and organisation. This session will discuss the potential for testing improvement approaches that might work.

Facilitated by Paul Gerrard, Gerrard Consulting

July 2006

Is Software Change Driving IT Organisational Change?

A 'successful' system or software product ought to last for many years. It might be an operating system, an application package or an internally developed system or web site. But, if the software is to last, it must evolve through its lifespan to keep in step with changing user needs. Whether you call it software evolution, maintenance or change, the hard fact is that the post-deployment cost of a successful system is several times more than its initial development cost.

So much is common knowledge to developers, maintenance programmers and testers of course. But are we any better now at managing software change than ten or twenty years ago? We are all familiar with the problems of changes made in a hurry, with insufficient analysis and inadequate testing. Promoting better change management, impact analysis and regression testing practices is all well and good. But the cost of failure in production may be driving some companies to look at other solutions. Is yours?

Facilitated by Colin Robb, HP

Aligning Development and Testing Lifecycles

The first objective of a test strategy is to align the testing activities with the development activities. It’s obvious really, but sometimes hard to do. In fact, it seems to be getting much harder recently with the advent of iterative and agile development lifecycles – hasn’t it?

Developers change their development approach in order to be more efficient and effective (and ‘up-to-date’). But testers and their approach haven’t kept pace. While the developers have changed their methods, by adopting an iterative or agile approach for example, the test team will probably be used to a more traditional, structured, V-Model approach.

It’s no surprise that testing and development activities aren’t aligned.

This session will take a look at traditional (structured), iterative (RAD) and agile (incremental) development lifecycles and their associated testing lifecycle counterparts.

Facilitated by Graham Thomas, Independent Consultant

October 2006

Can Running a Test Organisation Like a Business Improve Test Performance and Visibility?

Test and QA managers usually operate within the business with limited influence. They get their mandate from other parts of the business and have to respond after decisions have been made on projects, budgets, resources and requirements.

But could software test and QA offer the business bottom-line financial benefits? Rather than negotiating after the fact and being seen as the bottleneck to delivery, could we take control of our business, make it proactive and transparent? Where could we make a real impact?

Could taking a business-oriented approach help?

Facilitated by Declan Kavanagh, MD, Insight Test Services

Exit, Cry Tears - Dealing with Testing Review Boards

The time that we all dread has come - the testing review meeting. Months have passed since we agreed the exit criteria for this test stage. Back then, we were all optimistic, enthusiastic and willing. But time has passed and some tests haven't. Our business sponsor, once so keen to support the project is now under pressure to kill it. The developers are long gone; only the test manager is left to take the blame.

Is it really as bad as that? Well, not usually. But presenting the test evidence to a group of senior managers to achieve sign-off or acceptance is daunting, to say the least. Is there a best practice in this area? What should go in the phase-end test report? How do you answer the awkward questions bound to arise? How and why do exit criteria always seem to be negotiable? Do you take it personally?

What are the challenges in running a testing review meeting?

Paul's slides can be found in PowerPoint or PDF format.

Facilitated by Paul Gerrard, Principal, Gerrard Consulting

2007 Meetings

April 2007

Developing a Global Testing Framework for SAP

Mark's session introduces a proposed testing framework to support global implementations of SAP at BP. The intent of the framework is to provide a basis for testing that is usable across different programmes while balancing the right level of testing against project risk to assure a successful go-live. This is quite a challenge: Who is the audience - business or IT? What level of detail should it prescribe? How will it be rolled out to the organisation, evolve and be managed?

Facilitated by Mark Smith, FC&A Special Projects Director, BP

The Top-Down View of Estimation

As a Test Manager, you'll be keen to get your 'bottom-up' effort estimates as accurate as possible to support your planning and preparation activities, not to mention getting any budget and staffing allocation agreed! However, there are many other things that can greatly impact your plans. The focus of this session is to look at a checklist of those other factors and share experiences and techniques for taking them into account.

Facilitated by Susan Windsor, Director, WMHL Consulting

July 2007

How can we industrialise Testing?

Testing factories were prominent in the mid-1990s, before Year 2000, but what has happened to them now? Is this practice coming full circle, or is there another way?

Sarah Saltzman

The Joy of Automation

BT One IT established an off-shore Test Automation Centre in November 2005. However, we still struggle to attract and develop 'users' of our regression testing service. Barry and Mo will present how they have reached this point working with their off-shore partners in providing a test automation service, drawing out some of the successful aspects as well as the challenges that persist.

Barry Baker and Mo Shannon of BT

Should Test Managers Get Emotionally Involved?

Yes, you've been there. Is it wrong for us to become emotionally involved or is it human nature?

Paul Gerrard, Principal Gerrard Consulting

Requirement Lifecycle Management, better requirements for testing (without testing doing all the work)

Pushing for testing-friendly requirements in multiple projects by getting involved earlier and on a different level. A discussion on the nature of requirements, with examples and ways of relating requirement activities to testing tasks and roles.

Richard Ammerlaan, Chief Testing Officer of Sogeti UK

October 2007

Test Techniques - Are they of any Practical Use?

Testing techniques are a key part of most certified testing training but testers often find it difficult to apply the techniques in the real world. They tend to rely on domain knowledge and experience when designing tests. Why is this?

  • Do we need more techniques or better examples?
  • Do we need to upskill testers more and make sure they document their test designs?
  • Could we just push a button and generate the tests from a model?
  • Is process improvement enough or do we need better people?

Are YOU using techniques? Which ones and in what context?

Stephen Allott, Electromind

Keys to successfully hiring and retaining your testing team

As an NLP Master Practitioner, Jane will explore techniques for recruiting and retaining QA and testing people.

  • How do you maximise return on investment (ROI)?
  • What are recruitment vehicles?
  • How do you attract the right individuals?
  • Internal and external PR, your brand and the sale!
  • Personal and team motivation.

Jane Muller, Pervue Limited

I'm an Agile Test Manager. Do I really exist?

We know what it means to be a Test Manager. We know what it means to be Agile.

  • But what happens when you're a Test Manager responsible for an agile project?
  • Why don't the agile books ever mention a test manager role?
  • Is it possible that the role doesn’t make sense?

David Evans and Ivan Ericsson lead a lighter-hearted debate on the issue.

Ivan Ericsson and David Evans, SQS

The Top 10 tips for putting in place a software test managed service

Customers and buyers have different perceptions and expectations of managed testing services.

If this is the current or starting point in a commercial relationship, then there is a lot of work to be done to ensure that satisfied customers and suppliers are created.

Declan will introduce ten suggestions for getting the most out of your supplier.

Declan Kavanagh, Insight Test Services

2008 Meetings

July 2008

The Nineteenth Test Management Forum took place on Wednesday 30th July at the conference centre at Balls Brothers, Minster Pavement.

The meeting was sponsored by Amsphere and Gerrard Consulting.

The format for this meeting was a little different this time around, consisting of two presentations and a workshop/discussion.

James Whittaker, Microsoft - Testing the Software of Tomorrow

Software does not have a stellar reputation for quality. However, software continues to play an increasingly important role in society, and the fact that curing cancer and solving global-scale problems like world hunger and climate change requires software is not encouraging. How can software testing evolve to face such challenges? This presentation takes a peek into the future of software development and envisions testing’s role in making software fundamentally more reliable. James Whittaker discusses three specific technological barriers that must be removed to allow testers to help make the difference between a future in which software works and the current state of the world in which it does not.

Here is a video of James giving some interesting background on his activities at Microsoft: http://channel9.msdn.com/posts/briankel/James-Whittaker-on-Software-Test...

Roger Halton, Programme Manager, Amsphere - Management of a High Risk Multi-Vendor Scenario: A Test Strategy Workshop

This workshop will explore the approach required for a Test Strategy in a multi-system, multi-vendor scenario, using a real-life example.

Paul Gerrard, Principal, Gerrard Consulting - The Axioms of Testing

Is it possible to define a set of Test Axioms that provide a framework for all software testing? In this respect, an axiom would be an uncontested principle: something so self-evidently and obviously true that it requires no proof. What would such test axioms look like?

This talk sets out the case for Test Axioms. The work of practitioners and researchers could be on shaky ground without them. Some applications of the axioms that would appear useful are suggested for future development. This is a work in progress.


October 2008

The Twentieth Test Management Forum took place on Wednesday 29th October at the conference centre at Balls Brothers, Minster Pavement.

The meeting was sponsored by:
SOGETI, ELECTROMIND and FACILITA.

We are most grateful to our sponsors.

2.00pm Introductions
2.10pm Parallel sessions:
  • Three Challenges, Gordon McKeown and Alan Gordon
  • Test Management and Contracts in Agile Environments, Paul Gerrard
  • Has outsourcing come of age to help us weather the economic downturn?, Declan Kavanagh
3.45pm Parallel sessions:
  • Back From The Grave: Your Worst Performance Testing Nightmares, Panel Session
  • People Challenges for Test Teams, Steve Allott
  • Future of Testing in Turbulent Times, Susan Windsor

Materials from the sessions can be downloaded using the links at the bottom of this page

Session A - Top Three Ways Not to do Performance Testing

Gordon McKeown (Facilita) and Alan Gordon (SQS) facilitated a discussion of performance testing disasters and their causes, leading to a suggested list of "what not to do".

Session B - Back From The Grave: Your Worst Performance Testing Nightmares

A panel of esteemed performance testers took questions from, and threw questions at, the audience. Performance testers find themselves in some terrible situations. What was the worst hole you ever found yourself in? How did you get out of it? How do we make sure that these situations never, ever happen again?

TEST MANAGEMENT STREAMS

Session C - Test Management and Contracts in Agile Environments, Paul Gerrard, Principal, Gerrard Consulting

Agile approaches to software development are well-established and the values and principles of the Agile Alliance have proved their worth. However great they are though, they provide little guidance on contracts and test management. This session suggests how contracts are normally used, how they can be aligned with Agile and discusses testing's role in the context of contracts.

Session D - People Challenges for Test Teams, Steve Allott, Principal Consultant, ElectroMind

If you have been involved in a recent software testing project you’ll have felt the pressure no doubt, as time to market and increased application complexity threaten to swamp testers and their managers. Sometimes, when worrying about the many technical challenges, we forget that testing is about more than just browsers, time to first byte and benchmarking.

People are involved too, and in this facilitated workshop you’ll be able to share your experiences with the group and explore the people challenges faced by testers.

Session E - Has outsourcing come of age to help us weather the economic downturn?, Declan Kavanagh, Managing Director, Sogeti Ireland

As a result of the economic downturn substantial cost reductions are the order of the day. Is there an appropriate outsourcing model that can deliver the required cost reductions consistent with acceptable levels of risk? Is a blended solution managed by a strong on-shore partner the answer? If so, how do we do "blend suitability" analysis?

Session F - Future of Testing in Turbulent Times, Susan Windsor, WMHL Consulting

We are living through extraordinary times – the financial world is being turned upside down. Everyone is affected. What impact will this have on us? Let’s speculate and try to be prepared.

  1. Are you affected? How? Are you being asked to slow down, speed up, stop, change – WHICH?
  2. Where are the effects being felt? Are Attitudes to Testing, Process and Techniques, Improvement, Tools, Certification and Training, Outsourcing, People, Skills and the Job Market the drivers or the victims?
  3. Are the effects different by industry? How do Finance, Retail, IT Services, Telecoms, Government, Utilities, Transport differ?
  4. Does size matter? Small niche players, large services/integration firms, risk-averse firms, early adopters, etc.
  5. What can we do to best position ourselves to ride it out? Where is the ‘smart money’ going?

We’ll capture the key elements of the discussion and publish on the website for wider discussion. Then have an update slot at the TM Summit in January and beyond.


2009 Meetings

April 2009

The 23rd Test Management Forum took place on Wednesday 29th April at the conference centre at Balls Brothers, Minster Pavement.

The meeting was sponsored by our patrons: HP, SOGETI and SQS UK.

We are most grateful to our generous patrons.

PROGRAMME

Reducing Waste in the Test cycle, Paul Rolfe, HP
An open discussion to share ideas of how to reduce waste in terms of the cost, time and effort involved in the test cycle.
Attendees are encouraged to bring along examples and ideas and the format will be informal.
Agile Development and Test in Practice, John Sweet, BSkyB
At Sky Network Services, agile development methodologies were introduced in January 2006 to build a new broadband provisioning system for the broadband and telephony division of Sky TV. This talk will include a brief introduction to agile methodologies, how the practices and processes were introduced at SNS and how they work today. Although introducing agile methodologies may seem like a long and painful uphill task, this talk will seek to show how, once achieved, the methods work really well to deliver good-quality software to challenging timescales, in a changing environment, at a sustainable pace (i.e. no developer or tester burn-out!).
Designing for Testability - Technical Focus, Adam Knight
Two linked sessions addressing the vital topic of designing systems and applications to facilitate effective testing.
Managing Outsourcing - Case Study, Mike Bartley, Independent
One of the likely consequences of the current economic climate is that companies will look to reduce their fixed costs through outsourcing. This presentation considers how to set up and manage an outsourced (and offshored) software testing capability to give strategic benefits beyond cost savings.
Cost-Based Analysis and ROI of Automation, Aidus McVeigh, Sogeti
Why is it that every time automation is implemented, whether it is using enterprise tools, creating a unit test suite or an API framework, the topic of cost-based analysis and ROI is shunned? We are expected to ‘guarantee’ that automation will become the silver bullet of technology and save the day. But how can we do this when so much (if not all) of the benefits are intangible? This session will look at this ongoing struggle, why it exists, how to overcome it and why ROI really is your friend.
Designing for Testability - Management Focus, Dave Evans, SQS
Session two will deal with the management, “political”, organisational and economic dimensions.

 

Materials from the sessions can be downloaded using the links below.

July 2009

The 23rd Test Management Forum took place on Wednesday 29th July at the conference centre at Balls Brothers, Minster Pavement.

1.30pm Registration/Tea/Coffee
2.00pm Introduction
Sessions ran in parallel in Rooms A, B and C.
2.15pm
Roy Dalgleish, British Airways: Communication Skills for Test Managers
Communication skills form a central part of the toolkit of any tester. High quality testing is useless if the results of testing and their significance are obscured by poor quality communication to stakeholders. So why is it that so many testers say that "We don't get listened to!"? How do we bridge the gap from "data" to "meaningful communication" for our stakeholders? In fact, how would it be if we got listened to every time? This session delivers practical advice on how to identify your audience's preferred communication style, how to work with that style, and how to position yourself in a way most likely to get you heard.
Paul Rolfe/Andreas Golze/Ian Howles, HP: How Testing can make IT Lean
Today’s businesses are challenged by intense competition and growing customer demand for more and better services. This puts IT organizations under pressure to come up with new solutions in shorter time frames, while maintaining and improving the existing service landscape.
To achieve this, IT organizations have to improve communication with the end user and the various stakeholders who facilitate the change and the operation of a growing applications landscape.
One way to help solve these issues is to establish an end-to-end testing lifecycle which acts like a sensor in a closed loop and feeds data into the governance system (the controller) to optimize IT provision.
This discussion explores the different perspectives and highlights potential risks and opportunities.
Jeremy Gidlow, Chris Thompson, Intechnica: Load testing across the Internet
The testing of online applications over the internet is a growing market driven by convenience and a number of commercial benefits.

This session looks at the benefits and issues associated with Internet load testing and discusses the factors which determine the suitability and success of this testing approach.

3.30pm Tea/Coffee
Sogeti: The Value of Training in Today's Market

Gojko Adzic, Neuri Ltd: Specification workshops: Getting the business and developers to listen to testers
  • Involving the whole implementation team (BAs, developers, testers) in producing the specifications leads to more complete specifications and avoids functional gaps and inconsistencies
  • Specification workshops facilitate this process efficiently
  • They also facilitate knowledge transfer and building a shared understanding of "done"
  • Testers get to learn about the future software first hand, but also influence it by participating in the workshops
  • Done frequently, this eliminates much of the waste in testing today and helps testers build in quality from the start
Gordon Cruickshank, CEO eoLogic: Can early validation of Software Architecture improve performance and reliability?

This session explores new techniques for validating software architectures aimed at complementing and increasing the effectiveness of performance testing.
Can innovations in model based testing plus pattern and anti-pattern recognition provide increased focus for the performance testing function? Can they lead to significant cost savings and reduce time-to-market? What is the relationship between static and dynamic testing?

5.15pm Drinks Reception


October 2009

The 24th Test Management Forum took place on Wednesday 28th October at the conference centre at Balls Brothers, Minster Pavement.

The meeting was sponsored by our patrons: HP, SOGETI and SQS UK and as usual, FREE to attend. We are most grateful to our generous patrons.

PROGRAMME

Please note - downloads of the session materials are only available if you register and log in.

Tony Simms, Test Manager at the NSPCC: "Oooo those graphs look pretty!" How to manage performance testing when you are not a technical specialist.
Tony is very much a non-technical test manager and was faced with having to scope, procure and manage performance testing for a new multi-channel voice and web-enabled counselling service for children and young people. This session will briefly set out the issues he faced and the course he took before opening up to the floor for discussion.

Susan Windsor, WMHL Consulting Limited
We “battle” to get our message across. We continue to improve our communications skills. We obtain certification for greater credibility. We strive for earlier exposure to stakeholders and management. TESTING is considered a major area for improvement in many organisations, but why is testing still undervalued?

Management's widely held perception is still, "Testing just slows my project down, costs too much money and we still have faults in production!"

Where did this perception and (and others like it) come from? If we can understand that, maybe we’ll be able to better tailor our messages and demonstrate our value.

Julian Brook, SQS. Code Quality – Who’s responsible?
Code Quality Management, sometimes known as static analysis, differs from traditional testing by assessing (static) source code rather than (dynamic) running software. Historically, this has been the province of developer peer reviews and inspections. Today, there are sophisticated objective measures of code quality available. If the Test Manager’s responsibilities are largely the process of testing running software and the quality assurance processes concerning the creation of that software, where exactly does the issue of code quality lie? Is it helpful or even meaningful to assess the quality of source code independently of assessing the running software? What tools and techniques can we use to measure code quality, and by what criteria? How reliable is code quality measurement as an indicator of overall system quality?
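
As one concrete illustration (our assumption, not a tool or method named in the session), a McCabe-style cyclomatic complexity count is among the simplest objective code quality measures. A rough sketch over Python source, assuming the usual "1 plus the number of decision points" definition:

    import ast
    import textwrap

    def cyclomatic_complexity(source: str) -> int:
        """Crude McCabe-style measure: 1 plus the number of decision points."""
        decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
        tree = ast.parse(textwrap.dedent(source))
        return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

    sample = """
    def classify(x):
        if x < 0:
            return "negative"
        for _ in range(3):
            if 10 < x < 100:
                return "mid"
        return "other"
    """
    print(cyclomatic_complexity(sample))  # 1 + if + for + if = 4

Real static analysis tools go much further, but even a crude count like this yields an objective, repeatable number to discuss.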

Mike Bartley, Test and Verification Solutions: Improving Time-to-Market Through Software Test Automation
Getting test results sooner - how test automation can reduce time to market. A test team should always keep in mind their contribution to the business, and in this session Mike will concentrate on improving time to market through test automation.

Paul Gerrard, Gerrard Consulting: What are you doing? Why are you doing it?
What is testing really about? This session is a workshop that asks you to use critical thinking to reassess the purpose, meaning and consequences of some testing concepts. Three topics will be considered in this session: test design techniques (test models), exit criteria and independence. Are they gospel, flawed, fake or folklore? Bring your own sacred cows for the slaughter.

Biraj Nakarja, Sogeti: Testing Challenges in an Agile Environment
An Agile environment poses many challenges to a testing function compared to the traditional Waterfall model. Biraj Nakarja would like to explore this further, with particular focus on:

  • The characteristics of an Agile tester - does the 'traditional' tester become less influential in Agile?
  • Agile testing using off-shore capabilities - does off-shoring in Agile actually save you money in the long run?
  • Merging Waterfall with Agile on the same project - can this work, or is it a train crash waiting to happen?

Biraj would like to share his experiences, and find out more about yours, to help answer these Agile questions.


2010 Meetings

April 2010

The 25th Test Management Forum took place on Wednesday 28th April at the conference centre at Balls Brothers, Minster Pavement.

The meeting was sponsored by our patrons: SQS UK and Original Software and was as usual FREE to attend.

Downloads of the presenter materials are available at the bottom of the page for registered, logged-in users.

PROGRAMME

Designing a killer agile test management system - David Evans, SQS and Gojko Adzic, Neuri Ltd.
Join David Evans and Gojko Adzic in an interactive session to design your perfect agile test management system. We will run a variant of the Product Box innovation game to work out together what features the community would like to see in such a tool. While we are doing that, we'll discuss new features and limitations of current tools, the ways teams use them and tips and tricks for successful agile testing.
Gojko has written a blog post describing the session.

Finding Quality Assurance Harmony in Agile – A Practical Road Map to Success - George Wilson, Original Software
Agile is a methodology that requires a change in the way QA and development work together. The use of technology and automation is much more difficult, and finding a practical approach to testing is critical for successful Agile projects. George Wilson will explore how testing in Agile is different and give pragmatic advice to ensure application quality within an Agile environment.

Performance management: from black art to process - Peter Holditch, Dynatrace Software
Gartner state that 60% of priority 1 and 2 production issues arise from the application level, not the underlying infrastructure, and as complexity continues to increase this trend is set to continue.

We will be discussing what information needs to be gathered from production, test and even earlier, and what tools need to be used (and in which of the dev, test and prod silos they apply) in order to manage this systematically without distracting developers by making them into part-time ‘ninja operators’. We will also look at the common bugbears of troubleshooting: reproducing issues in test and effectively narrowing down the root cause of complex application issues, with a final reflection on how some of these elements might be brought together into systematic processes. Essentially, how can we attempt to put an end to today’s requirement for ad-hoc non-functional wizardry and foster harmony between developers, testers and operators?

For those interested, a detailed discussion of dynaTrace’s use of our own technology for continuous performance integration can be found here: http://blog.dynatrace.com/2010/01/27/eating-our-own-dog-food-how-dynatra...

There is also an article about “how to get developers to write performance tests” here: http://blog.dynatrace.com/2010/03/11/week-6-how-to-make-developers-write...

And finally, an article about evolving functional testing toward validating implementation architecture and performance considerations: http://blog.dynatrace.com/2009/06/24/do-more-with-functional-testing-tak...

Seven Things That You Might Not Know – (But may find really useful) - Graham Thomas, Independent
This workshop will take you on a magical journey through some very useful but mostly unknown tools for perception and comprehension which will aid you in your daily testing life.

The tools and their techniques are easy and fun to learn, and very powerful to use. They will help you to master testing in the industry’s currently very demanding transition from a structured V-Model history to a leaner, more agile and exploratory approach.

Effective Load Testers. Who are they? How are they created? - Gordon McKeown, Facilita
This session is an exploration of the qualities, experience and career trajectories of successful load testing personnel. Are load testers best if they are “ex something else”? Are ex-developers the best pool of talent? Should load testing be a part-time role for current developers? Can functional testers become load testers? Is “trainee load tester” a realistic role for a fresh graduate? What is career progression for a load testing specialist? Is a good team the sum of disparate parts and are there in fact various types of effective load tester? All this and more will be put under the spotlight as we exchange experiences and views.


A Framework for Testing in Scrum Projects - Paul Gerrard, Gerrard Consulting
Scrum seems to have spread like a virus in organisations wishing to adopt an Agile approach. In fact, many teams seem to think that because they use Scrum they ARE agile. Naive Scrum adopters make a familiar, foolish mistake: "Doing less IS agile". Based on recent project work, this session introduces a framework for testing in Scrum projects that addresses some problems facing Scrum adopters.

If you have a suggestion for a future session topic, or want to facilitate a session for us, please do get in touch.

July 2010

The 26th Test Management Forum took place on Wednesday 28 July 2010 at the conference centre at Balls Brothers, Minster Pavement.

The meeting was sponsored by our patrons: SQS UK, Original Software and Tricentis Technology & Consulting, and was, as usual, FREE to attend.

PROGRAMME

Susan Windsor, Gerrard Consulting: Getting the best results out of Test Support functions

As Test Managers, we put lots of effort into planning, preparing, executing and reporting our testing. Do we put enough effort into the Test Support functions that may be outside our direct control, but upon which our testing effort is totally dependent? What about Test Environments, test data and tools support, for example? We all know when these are not in place when needed; our testing stalls or our results are invalid. Why don’t we spend more time getting our Test Support requirements correct? Do we get the test environments we deserve as a result?

This session explores what all these Test Support functions are and how we can improve the service we get from our “suppliers”.

Richard Roy, Tricentis: Breaking the mould in Automated Testing

Functional test automation fails due to two problems: the maintenance of test scripts/frameworks, and the management of test data. In order to achieve high automation of functional regression tests, both areas, script maintenance and test data management, must be solved. This session will elaborate on how these two seemingly different aspects depend on each other, and how some blue-chip companies have already solved both problems and achieved automation levels beyond 80%.
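
By way of illustration only (this sketch is ours, not Tricentis's approach or product): a data-driven structure keeps test logic and test data apart, so a new regression case is a new data row rather than another script to maintain. All names and figures here are invented:

    # Hypothetical data-driven sketch: the script is written once,
    # and regression cases live in a data table, not in more scripts.

    def apply_discount(price: float, customer_type: str) -> float:
        # Stand-in for the system under test.
        return price * 0.9 if customer_type == "loyal" else price

    test_data = [
        # (price, customer_type, expected)
        (100.0, "loyal", 90.0),
        (100.0, "new", 100.0),
        (0.0, "loyal", 0.0),
    ]

    for price, customer_type, expected in test_data:
        actual = round(apply_discount(price, customer_type), 2)
        assert actual == expected, f"{price}, {customer_type}: got {actual}, expected {expected}"
    print(f"{len(test_data)} data-driven checks passed")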

Jonathan Pearson, Original Software: The Dark Side of Application Quality Management - Ten Black Holes to Avoid for Successful Application Delivery

The quality of application delivery is at the heart of many of the challenges faced in IT projects, and this session will review some of the most common pitfalls and pain points that often beset development projects. With the help of Yoda, Obi Wan and others from the Star Wars cast, you will learn how best to avoid these challenges and deliver your projects on time, on budget and most importantly with quality.

James Wilson, Secerno: Why is testing in an Agile with Scrum environment hard? (And what can we do about it?)

James says, "During the last few UKTMF events I have attended, Agile with Scrum has come up in a number of presentations and, whether people like it or not, as a project management methodology it is here to stay. I would like to cover a number of areas that I believe this methodology makes particularly hard for testing teams, including:

  • Agile with Scrum fixes the time and normally fixes the cost, by virtue of having a fixed team size. When considering the Quality, Cost, Time triangle, the only remaining area that can be changed is quality.
  • Short sprints, where a releasable artefact is required at the end of each sprint, make the following areas of testing difficult, as there is typically only a short period of time (days) between development finishing the code changes and the end of the sprint:
    • Soak testing
    • Stress testing
    • Regression Testing"

Frank Puranik, iTrinegy: SaaS: Should we be worried?

The Cloud, Infrastructure/Software as a Service, remote hosting, data centre consolidation: they all imply applications operating across a variety of networks, some out of our control. Frank Puranik from iTrinegy will discuss the typical issues with external hosting of applications, and the implications for system performance and our performance testing approach.

Michael Blatt, SQS: Performance Testing: A Great Leap Forward?

Has the essence of performance testing changed over time? How was performance testing in 1994, as opposed to 2010? What are the implications for the next decade and what do we need to be doing now to prepare? Michael Blatt from SQS will lead the discussion and make his predictions.

October 2010

The 27th Test Management Forum took place on Wednesday 27 October 2010 at the conference centre at Balls Brothers, Minster Pavement.
The meeting was sponsored by our patrons: SQS UK and Original Software and was, as usual, FREE to attend.

Links to downloadable files can be found at the bottom of this post

Session Abstracts

Ian Parker, Facilita: “The many sledge-hammer approach to load testing”
Sometimes the only, or least-cost, method of load testing is to automate the client user interface and run many client instances, instead of the more usual API or network interface approach. This topic will discuss the pros and cons of this approach, based on experience with real-world projects.
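
A minimal sketch of the idea, assuming a hypothetical client_driver.py script that automates a single client UI session (invented for illustration; this is not Facilita's tooling): the load comes from launching many such instances in parallel and checking how they fared.

    # Hypothetical sketch: generate load by running many UI-driving client
    # instances concurrently, rather than scripting the API or network protocol.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    NUM_CLIENTS = 50

    def run_client(client_id: int) -> int:
        # Each call launches one client instance that plays one user session.
        result = subprocess.run(
            ["python", "client_driver.py", f"--client-id={client_id}"],
            capture_output=True,
        )
        return result.returncode

    with ThreadPoolExecutor(max_workers=NUM_CLIENTS) as pool:
        exit_codes = list(pool.map(run_client, range(NUM_CLIENTS)))

    failures = sum(code != 0 for code in exit_codes)
    print(f"{NUM_CLIENTS - failures}/{NUM_CLIENTS} client sessions completed cleanly")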

Gojko Adzic, Neuri Limited: "Continuous Validation, Living Documentation and other tales from the dark side"
Agile testing practitioners have traditionally been very guilty of using technical terms to confuse both themselves and everyone else who tries to implement these practices. Join Gojko Adzic for an interactive session on what we can do about this. Gojko will present the results of similar recent discussions at the UK agile testing user group meeting, the Agile Alliance Functional Testing Tools group and Agile 2010 conference. Then we will talk about what we can do about the agile testing terminology to help business users and developers get more engaged.

Gojko's Prezi presentation can be found here

Paul Gerrard, Gerrard Consulting: "Re-Thinking Test Design: Requirements, Acceptance and Regression"
Are requirements a "point of departure" or a "continuously available, dynamic vision of the destination?" Is it possible to increase confidence in requirements and the solutions they describe? In this session, Paul sets out a vision for achieving Tested Trusted Requirements, Improved Business Acceptance and Intelligent Regression Testing and illustrates a refined test process with examples from the Testela Business tool.

Jonathan Pearson, Original Software: “Jump Start Your Manual to Automation Strategy”
Since the first automation tools appeared on the horizon over 20 years ago, software test automation has been heralded as one of the saviours of IT development teams. Yet even in today's world of pervasive automation, manual testing is still a vital part of the software testing process: 80% of testing is still carried out manually, due to the failure of automated testing processes. This session will explore what determines the short- and long-term success of test automation, what to automate and what to leave as manual, and how you can jump to automation in a seamless and painless manner.

Jump Start Your Automation Strategy
Here's another interesting paper: Manual Testing is Here to Stay

Nigel Rayner, SQS: "Performance Issues - Reducing the Pinball Effect"
It can be immensely frustrating to watch performance issues being batted around a dozen support teams, all of whom deny culpability. The skills of the performance tester can make a big difference in reducing these costly project delays (and preserving their own sanity). In this session Nigel Rayner will discuss various means of empowering a performance tester with the techniques needed to quickly pinpoint the right resolution team, with enough detail to make the issue stick.

Mike Bartley, TVS: "The testing challenges associated with distributed processing"
Distributed processing is already widespread but is set to increase with more processors on the underlying hardware. This raises significant challenges for the SW tester: the complexity of the code that we test increases dramatically and the metrics that we used to judge completion are no longer sufficient.

In this session Mike will first outline the main differences in distributed software. He will then facilitate a discussion on the implications for SW testing, the main challenges that we will face and how we can potentially overcome them.
By the end we should all have an improved understanding of the testing challenges associated with distributed processing. Mike's presentation can be found here.


2011 Meetings


April 2011 Forum

The 29th Test Management Forum took place on Wednesday 27 April 2011 at the conference centre at Balls Brothers, Minster Pavement.
The meeting was sponsored by SQS UK, Electromind and Tricentis and was, as usual, FREE to attend.

TIMETABLE
13.30 Registration/Tea/Coffee
14.00 Introductions
14.15 Parallel sessions:
  • Sharath Byregowda and Francis Balfe, SQS: Zero formal Test Cases – Extreme Exploration Experiences
  • Mark Crowther, BJSS: Moving from traditional to agile testing, without tears or lactic acid
  • Remco Kwinkelenberg, Tricentis: Automation in Agile Testing
15.30 Tea/Coffee
16.00 Parallel sessions:
  • Gary Wilson, Qualisystems: Automation. Not just for testing software...
  • Paul Gerrard, Gerrard Consulting: Specification by Example: Does it Scale?
  • Gordon McKeown, Facilita: Load Testing. Who? When? Why?
17.15 Drinks Reception


SESSION ABSTRACTS

Sharath Byregowda and Francis Balfe, SQS, Zero formal Test Cases – Extreme Exploration Experiences
Francis and Sharath will talk about how their team, in an agile environment, was able to successfully deliver quality releases for acceptance every sprint without writing or executing a single test case. They will illustrate the process, techniques, tools and test approach which led to successful deliverables, and the plans ahead. They will talk in detail about the feature files, demos, session-based tests, capturing sessions, reporting bugs and observations from the sessions.

Paul Gerrard, Gerrard Consulting, Specification by Example: Does it Scale?
In this session, Paul sets out the key principles of SBE and presents a brief demonstration of SBE in action. SBE can clearly work in an Agile context, but does it scale to larger and more complex environments? What are the barriers to SBE? Can it be used tactically? Can non-Agile developers 'live with the pace'?
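
For readers unfamiliar with SBE, here is a minimal sketch of the core idea: a table of concrete examples doubles as the specification and the test. The business rule and figures are invented for illustration and are not from Paul's demonstration:

    # Hypothetical executable specification using pytest: the example table IS the spec.
    # Invented rule: orders of five or more books qualify for free delivery.
    import pytest

    def qualifies_for_free_delivery(book_count: int) -> bool:
        # Stand-in for the system under test.
        return book_count >= 5

    @pytest.mark.parametrize(
        "book_count, free_delivery",
        [
            (4, False),  # just below the threshold
            (5, True),   # boundary example
            (6, True),
        ],
    )
    def test_free_delivery_examples(book_count, free_delivery):
        assert qualifies_for_free_delivery(book_count) == free_delivery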

Remco Kwinkelenberg, Tricentis, Automation in Agile Testing
As Agile testing races up the agenda, automation of functional testing looks increasingly hard to manage effectively. Teams made up of people, the majority of whom have little experience in test, let alone automation, are going to be confronted by old problems that are only magnified in the new, dynamic environment. However, modern approaches to automation can bridge this gap, providing the kind of clarity and speed that will be required within the tight sprint timescales the Agile approach demands. Tricentis show how this is possible during this session.

Gary Wilson, Qualisystems, Automation. Not just for testing software....
This session argues that while many organisations have yet to perfect their automation practices, software test automation is relatively well understood, mature and well supported by a growing number of testing tools on the market.

By way of contrast, system test automation (whether it be, for example, a network infrastructure, an embedded system or a consumer electronics device) does not seem to attract the same level of interest and investment, yet it offers the same benefits and much more. The difference could be explained by a number of technical issues and challenges facing those looking at system test automation. What are these issues and challenges, and how can they be solved? What does an organisation need when facing the new frontier, "Systems Test Automation", and what lessons can they learn from the software testing world?

Mark Crowther, BJSS, "Moving from traditional to agile testing, without tears or lactic acid"
Many test teams have already made the move to a more agile-centric testing approach. While it can often be clear what a team doing (or not doing) agile testing looks like, how a team looks when making the transition can often be very unclear. A common concern is what a test team does day to day if they 'go agile', and how that varies from what they do in a more traditional testing environment. There can be concerns that going agile is a way to dumb down testing and become more development-centric; teams can worry about what skills they may need or what tools they need to learn about.

In this session Mark will share ideas, techniques and lessons learned from helping teams make the transition from a traditional testing approach to a more agile one. He'll discuss what can be done so the team takes on change in a way that's manageable and doesn't unsettle the business, blending agile test approaches into current practice so they are seamlessly adopted, phasing out traditional practices where it makes sense to do so, and progressively becoming more integrated with other teams.

Gordon McKeown, Facilita, "Load Testing. Who? When? Why?"
This session will examine current load testing practice. Who is conducting it? When does testing take place within the system development and deployment lifecycle? What are organisations trying to achieve? We will try to identify good practice (patterns) and bad practice (anti-patterns) so we can fit approaches that work to the complex and often contradictory requirements of real organisations.



July 2011 Forum

The 30th Test Management Forum took place on Wednesday 27 July 2011 at the conference centre at Balls Brothers, Minster Pavement.
The meeting was sponsored by SQS UK, Thinksoft Global and Electromind and was, as usual, FREE to attend.

Downloads can be found at the bottom of the page.
SESSION ABSTRACTS

Andy Redwood, "Test Psychology – the Tester’s Identity"
This is the next chapter in a series of Test Psychology slides that Andy has been presenting for a number of years. This chapter specifically looks at what makes a tester a tester, examining memory models, thought processes and problem solving as cognitive functions, and finally group and individual identities and power struggles between IT roles, from a testing perspective.

Peter Herriott, SQS, "The Long and the Short of Technical Test Plans"
Technical test plans suffer from a split personality: are they addressed to the test and project managers, or do they target the techies preparing to deliver performance, security and operational acceptance testing? In most instances the result is an unhappy compromise. How do we fix this? Can we come up with an effective solution?
Peter Herriott from SQS will share some of his tips and tricks for ensuring his performance test plans convey the right information to the right people, in the right way.

Steve Allott, Electromind, "Agile Testing: In the Large?"
I will ask the Forum if they agree that “there is no such thing as Agile testing”.
This session is intended to pull together people’s experiences of different approaches to testing and debate how we use our testing techniques, methodologies and frameworks, call them what you will, in order for testers to deliver real value to their stakeholders.
In the 1980s Bill Hetzel wrote a book called The Complete Guide to Software Testing and coined the phrases “integration testing in the small” and “integration testing in the large”. Having worked with Bill on the EuroSTAR and STAR conferences, I thought it appropriate inspiration for the title of this session.
With some real-life examples from projects I have worked on, I shall briefly contrast one of the most popular Agile methodologies, Scrum, with Atern, the new Agile variant of DSDM (Dynamic Systems Development Method). I shall also share with you my experiences of what it feels like to go back from using an Agile approach to a Waterfall or V-Model project.
We'll ask what works on a small project (and what do we mean by small?) and what works on a large project (and what do we mean by large?).

Paul Rolfe, Thinksoft Global, “Does the bit in the middle really matter?”
A somewhat light-hearted but semi-serious facilitation session which asks some questions about the whole process of testing and expects a variety of answers from the audience.
No products and no sales pitches allowed! Audience participation positively encouraged!

Matt Robson, Capita, "Delivering better business outcomes - can the testing function lead the way to align IT and Business?"
Is the testing function, in your experience, viewed as "necessary" rather than "strategic"? And can it also be "exciting"?
This discussion explores how the testing function can be more strategic and thereby increase its kudos within your organisation.
We'll also explore ideas on how to help your organisation achieve this.

Stevan Zivanovic, BJSS, "The Perfect Non-Functional Requirement?"
This will be a collaborative session looking at the core components that make for a good non-functional requirement. Starting from a proposed model based on a performance requirement, we will, as a group, look at the validity of the model, refine it and see if we can make it fit other non-functional areas. The output from this session will be put up on the TMF website. Be prepared to put your thinking caps on!
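
By way of illustration, here is a speculative sketch of what the core components of a performance requirement might look like captured as structured data rather than free text; the field names and example values are invented, not the session's actual model.

```python
# A speculative sketch: the core components of a non-functional (here,
# performance) requirement captured as a structured record rather than
# free text. All field names and values below are illustrative.
from dataclasses import dataclass

@dataclass
class PerformanceRequirement:
    transaction: str   # the business transaction being constrained
    condition: str     # load/context under which the target applies
    metric: str        # what is measured, e.g. response time
    target: float      # the required value
    percentile: int    # how the target is evaluated, e.g. 95th percentile
    unit: str          # units for the target
    rationale: str     # why the target matters to the business

req = PerformanceRequirement(
    transaction="submit order",
    condition="500 concurrent users, peak trading hour",
    metric="response time",
    target=2.0,
    percentile=95,
    unit="seconds",
    rationale="checkout abandonment rises sharply beyond 2 seconds",
)
```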
Stevan's slides can be found here

October 2011 Forum

The 31st Test Management Forum took place on Wednesday 26 October 2011 at the conference centre at Ball's Brothers, Minster Pavement.
The meeting was sponsored by SQS UK, Thinksoft Global and Dynatrace and was, as usual, FREE to attend.

SESSION ABSTRACTS

Paul Gerrard, Gerrard Consulting, "Re-Thinking the Role of Testing in Agile and Structured Projects"
This special session will discuss the serious undercurrents that are forcing a re-think of where testing fits in systems projects. Agile is here to stay, but what is testing's role in it and can it be scaled? Developers are getting used to ‘test-driven’ - are they going to take over testing? There seems to be a methodological shift from staged to iterative and now towards continuous ‘Specification by Example’ approaches. Testing is being squeezed from the ‘low-value’ norm towards upstream, exploratory, user-experience and tool-based execution.

This session discusses the use of workflows, stories and examples to capture, replay and validate the user experience; the revival of test modelling as a discipline; the use of test automation for all functional testing; the auto-generation of tests with behaviour-driven development approaches; the use of automation for (anti-)regression testing; and the emergence of specification by example as a viable future.

Paul's Slides can be found here.

Vishnu V.S & Paul Rolfe, "Innovations in Test Automation"
Though a recession is a time when the economy contracts, it is also a time when savvy organizations and their CIOs can push for innovation in IT, transform IT processes and achieve much-needed cost savings.

The session will be an open discussion on how to achieve cost reduction using innovative methods in test automation. Critical points for discussion include:
  • How to overcome testing cost challenges by bringing innovation to Test Automation?
  • Does the automation tool’s capability limit the degree of innovation that can be delivered?
  • Is innovation possible using open source automation tools as well?
  • Why innovate more? What are the benefits, beyond cost, of innovating more on test automation?

Ian Thompson, Dynatrace and Intechnica, "The Emergence of Performance in the Test Cycle"
Poor performance online means poor performance on the bottom line. If your organization relies on its website and the Internet for business, or you are launching a service that thousands of individuals could be using at any one time, then you need to know how your application performs under load before launch day.
With service-orientated architecture, virtualized and cloud deployment, the way that applications are developed and load tested must fundamentally change. Current load testing approaches are bust: they often take too long and come too late in the lifecycle. This presentation highlights state-of-the-art ways to transform load testing strategy, with application performance management and troubleshooting diagnostics at the heart of the test phase. Release delays can be reduced by pinpointing the cause of performance issues more quickly and moving to a collaborative, agile, evidence-led problem-solving dynamic.
The session will draw on lessons learned, customer feedback and project results at ASOS.com, Travelport, T-Systems and Thomson Reuters around best practices for proactive quality engineering.

Paul Littlebury, JaffaMonkey, "The Agile Pretence"
"We are working Agile!" is a common cry on projects, but what does that statement actually mean? The application of Agile methodologies largely ignores the human element - the best methodologies will not work unless the team works cohesively. The major issue with Agile arises when it is followed like a user guide rather than as a philosophy to be applied according to company culture and the human resources available. Does that mean we are approaching Agile all wrong by ignoring the human element? And how can QA help bridge these gaps when they exist?
Using real-world examples, Paul will illustrate how damaging a poor application of Agile and/or SCRUM can be - and how to make it more successful, whatever the constraints - company culture, human resources or confused stakeholders.
This session will be based on his experiences in the field, and the constant surprises thrown to QA in the modern web development world.

Paul's slides can be found here | Paul has posted a blog about the session here.

Giulio Saggese, SQS, "Performance by design: the AGILE way"
We generally spend more time dealing with software performance issues when they hit production than we do at the design stage. Often the problem is that those who build the software say “Well, you didn’t tell me that it had to perform that way”, to which the business replies “It is obvious that it had to perform that way”. Sounds familiar? Would it not be nice to develop applications that perform by design?

Giulio will share his experience in the performance testing space of how this can be addressed within an AGILE framework.
Giulio's slides are here



Susan Windsor, Gerrard Consulting, "Using Acceptance Expertise in Contract Negotiation – can we help avoid disaster?"
We’ve all known projects that take years, spend huge amounts of money and don’t deliver the anticipated value. In today’s economic climate, what is being done to avoid these mistakes?
Let's discuss the history and understand why this has happened. Susan will share her views on some of the key reasons, with some real-life examples.

Then she’ll suggest how testers, with their experience, can actually make a difference to the success of a project by being involved in contract negotiation.

2012 Meetings


2013 Meetings

2013 Quarterly Forums


April 2013

The 38th Test Management Forum took place on Wednesday 24 April 2013 at the conference centre at Ball's Brothers, Minster Pavement.

The meeting was, as usual, FREE to attend.

PROGRAMME

13.30 Tea/Coffee
14.00 Introductions
14.15 Paul Gerrard, Gerrard Consulting: Goldfish Bowl: Ad-Hoc Discussion - bring your HOT Topics
      Michal Janczur, Deutsche Bank: BA and QA Communities of Practice at Deutsche Bank - Working to improve Quality of Requirements
      Susan Windsor, Gerrard Consulting: Delivering Business Value not just Software!
15.30 Tea/Coffee
16.00 Paul Gerrard, Gerrard Consulting: How Techy Should a Non-Techy Tester Be?
      Graham Thomas, Independent Consultant: Software Testing Secrets We Dare Not Tell
      Susan Windsor, Gerrard Consulting: Transitioning to Outsourcing? What’s the impact on your role?
17.15 Drinks Reception

Session Abstracts

Graham Thomas, Independent Consultant: Software Testing Secrets We Dare Not Tell
I firmly believe that “There are some fundamentals of software testing that we really don’t understand or know the answers to yet.” Here are some simple questions. They are very easy to ask. Unfortunately they are very difficult, if not impossible, to answer.

  1. What is the purpose of Software Testing?
  2. Just how effective is the way we test – and how do we know? (Trad, V-Model, Structured Testing, agile or any other form of testing for that matter?)
  3. If checking isn't software testing, then why is it that ‘checking’ is what our stakeholders are paying us to do?
  4. If software testing is so difficult, demanding and challenging, then why is it that we keep on assigning the least skilled or experienced to perform it?
  5. Why do software testers spend so much of their time running tests that do not find bugs?

These questions are important because they drive at the very heart of what we are doing in the software testing industry today, and understanding the answers will surely shape the future direction of our industry. This session has been designed to be a highly interactive discussion which many people may find challenges their basic understanding. I will act as facilitator, give an introduction to each question, then actively moderate the debate and, if needed, take on the role of arbiter. Come along, expect to be involved, and if you have a view then please share it. Help to drive forward the discussion – and potentially the software testing industry.

Susan Windsor, Gerrard Consulting, Delivering Business Value not just Software!

When a business initiates a project, they have a vision of what value will be delivered. Frequently projects have to compete for budget and so the value of potential benefits is critical in securing budget. A great deal of time and effort goes into identifying these potential benefits and defining measures to track their realisation. But, how often is that information lost once the software element of the project kicks off?

There is a lot of discussion in our industry about delivering business value rather than just software, but what does this really mean? How can the role of Business Analyst and Test Analyst work together in support of that goal? What techniques really work in terms of delivering value? We’ll explore all these questions and then I’ll share some ideas on how we can demonstrate that delivered software components relate back to the original project benefits. We’ll then open up a discussion to share ideas and experiences on how we can collaborate together to improve the value we deliver to our business stakeholders.

Michal Janczur, Deutsche Bank: BA and QA Communities of Practice at Deutsche Bank - Working to improve Quality of Requirements

  • DB is an organisation of over 90,000 people with thousands in technology delivery
  • DB have promoted Communities of Practice within Technology groups
  • Collaboration across COPs should result in improved practices across the organisation
  • Quality and timely requirements are essential to quality software delivery
  • What are the priorities and opportunities across the BA and QA professions?

Paul Gerrard, Gerrard Consulting, How Techy Should a Non-Techy Tester Be?
There seems to be an increasing demand for testers who have technical skills. The range of skills being requested varies widely in technology, depth of knowledge and capability. Some are the traditional technically oriented testing skills like automation tools and performance testing, but other roles sound like 'developers who test'. Recent roles I've seen include:

  • Performance testing (proprietary and e.g. JMeter, The Grinder etc.)
  • GUI test automation and frameworks (proprietary and e.g. Selenium, Watir, RobotFramework etc.)
  • Database manipulation, query and other SQL-based products (e.g. Oracle, DB2, MySQL etc.)
  • Scripting languages for text processing, job control, build and release processes (e.g. Python, Ruby, Perl, PHP)
  • Programming languages proper e.g. Java, C#, C++ and JavaScript
  • Developer/unit testing frameworks e.g. xUnit
  • Behaviour-Driven Development tools e.g. Fitnesse, Cucumber, RSpec, JBehave etc.
  • Oh, and did I mention open source?

In this session, we'll explore the range of technologies testers increasingly need and how non-techies and teams can acquire transferable technical skills to enhance their capabilities (and CVs).

Susan Windsor, Gerrard Consulting, Transitioning to Outsourcing? What’s the impact on your role?
When considering outsourcing or off-shoring activity, it’s normally development and/or testing activity that will transition over to the supplier. Whatever the relationship, there is always an impact on the role of Business Analysis.

One aspect to consider is where the ownership of requirements sits within the new operating model. Where is the analysis activity actually performed? If the supplier has domain knowledge, is it reasonable for them to undertake analysis and guide development and test activity? If all the analysis is still performed in-house, how can the supplier obtain the level of support required to fully understand requirements? Is there a hybrid solution where ultimate responsibility lies in-house but much of the analysis is performed by the supplier?

During this session, we’ll explore these different models and share experiences of supporting the transition to an external supplier. In addition to these questions, how else is the role of the BA affected by outsourcing? Is there a role to perform as part of the supplier selection? Is it necessary to more fully define your method? Do you need to change your method to align with your new supplier? What about additional skills you may need to support your new role, particularly with respect to management skills?

At the start of this session, we’ll review the types of models and additional skills; prioritise those we’d like to discuss as a group and then open up for discussion on our views; knowledge and experiences.



July 2013

The 39th Test Management Forum took place on Wednesday 31 July 2013 at the conference centre at Ball's Brothers, Minster Pavement.

The meeting was, as usual, FREE to attend.

Timetable

13.30 Tea/Coffee
14.00 Introductions
14.15 Graham Thomas, Independent Consultant, and Phil Isles, HSBC: Flowcharting workshop using the Raspberry Pi
      Paul Gerrard, Gerrard Consulting: Big Data: What is it and why all the fuss?
      Bill Matthews, Target Testing: Security Testing for Non-Techies
15.30 Tea/Coffee
16.00 Stevan Zivanovic, Breaking Through the Barriers: Agile Technical Testing - Reality or Myth
      Adam Knight, Rainstor: Experiences in Testing a Big Data Product
      Gordon McKeown, TestPlant: The ideal load testing tool – what would it look like?
17.15 Drinks Reception

Session Abstracts

Graham Thomas, Independent Consultant, and Phil Isles, HSBC, Flowcharting workshop using the Raspberry Pi
By now many of you will have heard about the Raspberry Pi, the $35 British computer that is helping schoolchildren to learn how to write computer programs. To date over 1 million have been produced. A real success story.

Some of you may also know that over the last 18 months I (Graham) have been actively trying to reconvert the world to using flowcharts.

Well now Phil and I have brought these two themes together in the form of a highly interactive flowcharting workshop presented using the Raspberry Pi.

This session should be informative, fun and productive. Informative in that you will find out how powerful a $35 computer can really be. Fun because we will use the Penguins logic puzzle game on the Raspberry Pi as the basis for the flowcharting exercise. And productive because you will learn, or relearn, how powerful, quick and easy it is to generate flowcharts to aid your daily work.

To play an active part in this workshop you will need something to draw flowcharts with, be that notepad and pencil, computer, tablet or phone.

Paul Gerrard, Gerrard Consulting, Big Data: What is it and why all the fuss?

Big Data seems to be the latest buzzword trending. The term has been around for a while but now the largest corporations are promoting Big Data products and services very strongly, so something big is on the horizon. Right now it still looks like a load of hype but, scratching beneath the surface, it seems to me that it has the potential to affect every person in society and there's no getting away from it. What is all the fuss about?

Big Data isn't really just about 'big'. Depending on who you ask, the mnemonics "V3" or "V4" summarise it well. Volume is the quantity - and it's big. Velocity is the rate of arrival/capture of data - and that's big too. Variety is the sheer variety of data and formats to be used. Veracity is the accuracy, truth or value of that data. Volume and velocity are driving the technical aspects: relational is out, NoSQL (not only SQL) is in, and the relational data skills out there are not enough. Variety and veracity are the real challenges: device instrumentation, social feeds, government, location, financial, voice, image and video - all the data captured by any (and I mean ANY) device that we use or encounter, or that monitors us and the gadgets we use, is being stored, because some day it might be useful to a data analyst working for a start-up, a corporate or our government.

If you don't know anything about Big Data, this session will provide a basic introduction to what's happening out there, right now. Adam's session will follow, and take a more practical look at a real Big Data product.

Adam Knight, Rainstor, “Experiences in Testing a Big Data Product”
I am currently working on testing a “Big Data” storage product for a small agile company based in the UK. In this talk I’ll share my experiences of testing a product that is targeted at the big data problem. I’ll examine what big data means to us as a company and to our customers, and how it has emerged as a market in recent years. I'll examine the technologies that characterise big data, for example the increasing popularity of Hadoop and its associated tools, and will highlight some of the issues faced when testing a product that integrates with these.

As a group we'll discuss the implications of big data for testers, with reference to experience gained from working in my specific context. I'll look at some of the approaches that we have had to take to ensure that scalability and performance targets are tested when the data sizes involved in live implementations exceed what can easily be recreated in the test lab. I will present some approaches that we have adopted to test the product quickly and effectively within the constraints of agile sprints. By discussing practical experiences you will gain valuable insights into the testing issues that are emerging in the fast-growing Big Data market.

Bill Matthews, Target Testing, "Security Testing for Non-Techies"
Cyberspace is becoming an increasingly hostile environment in which to do business, and in recent years many large players have admitted to being “hacked” in some way or another. However, this is only the tip of the iceberg: less than 50% of detected breaches are reported to the authorities, and fewer still are made public. Increasingly, many breaches are the result of software vulnerabilities. In line with this trend there is a growing need for test teams to undertake some form of security testing. So if it’s not already part of your test team’s remit to conduct security testing, it’s likely that it will be in the coming years.

Security Testing is a complex topic and much of the literature focuses on the techniques used but this is only one side of the coin. In security testing, how you think about security is just as important as what you know about security.

In this interactive session I will present the approach, based on Threat Modelling, that I frequently use to think about and communicate ideas about security testing. A focus on threats has several advantages:

  • Stakeholders understand the concepts of threats and risks.
  • We can better frame the testing we are doing and the problems we find.
  • It focuses attention on specific areas of value and attack vectors rather than blanket coverage approaches.
  • It encourages creativity and avoids the tunnel thinking that can stem from a techniques-first approach.
  • It encourages us to select techniques that fit the test, rather than select tests that fit the techniques.

We’ll cover the basics of Threat Modelling and interactively create a Threat Tree for a portion of a system (no expertise required – honest) and discuss how we might use this to communicate security testing upwards, to stakeholders, and downwards to testers.
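
For a flavour of what such a threat tree might look like, here is a minimal, hypothetical sketch in Python; the threats shown are invented, and each leaf, read back as a path from the root, becomes a candidate security test idea.

```python
# A hypothetical threat tree of the kind the session builds as a group:
# a root threat decomposed into attack paths, walked to list test ideas.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    children: list["Threat"] = field(default_factory=list)

def leaf_threats(node, path=()):
    """Yield each leaf threat with the chain of parent threats leading to it."""
    path = path + (node.name,)
    if not node.children:
        yield " -> ".join(path)
    for child in node.children:
        yield from leaf_threats(child, path)

tree = Threat("Attacker reads another user's data", [
    Threat("Bypass authentication", [
        Threat("Brute-force weak passwords"),
        Threat("Session token is predictable"),
    ]),
    Threat("Bypass authorisation", [
        Threat("Tamper with record ID in URL"),
    ]),
])

for idea in leaf_threats(tree):
    print(idea)  # each leaf becomes a candidate security test
```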

We will wrap up the session with a discussion covering your thoughts and experiences of security testing and how we can prepare our teams for this type of work going forward.


Gordon McKeown, TestPlant, “The ideal load testing tool – what would it look like?”
What would the ideal load testing tool look like? Would it be a combination of the best features of existing tools minus their annoyances? If so what features should be included and what annoyances should be removed? Is there something missing from all current tools? What do you want (need) a load testing tool to deliver? What will you want (need) in the future?

Here is your chance not only to get your frustrations off your chest but also to contribute to a constructive discussion rooted in reality and what is possible! We’ll examine automation approaches, scripting versus scriptless, scalability, test resource management, Cloud delivery, SaaS versus installable tools, licensing and anything else that emerges during the session.

Stevan Zivanovic, Breaking Through the Barriers, "Agile Technical Testing - Reality or Myth"
The session will look at trying to define what we mean by technical testing, then look at the often quoted reasons why some aspects seem to be difficult for Agile teams and finally work through some possible solutions. The aim of the session is to investigate the experiences of the attendees and look at how these could be applied.

October 2013

The 40th Test Management Forum took place on Wednesday 30 October 2013 at the conference centre at Ball's Brothers, Minster Pavement.

The meeting was sponsored by TESTPLANT and was, as usual, FREE to attend.

Session Summaries

Brindusa Axon, Unreasonable Minds, "Deriving scope from goals using stories"
User stories are a universally adopted technique in agile teams. They represent customer wants and needs, and are units of valuable change to the system under development.

The main problem with stories is that we use them to fit something we're used to working with: requirements. What we often ignore is that they are not a simple requirements fix; rather, they are negotiable expressions of intent, meant to satisfy a particular need. This negotiability enables us to look for better options, pivot, and even discard features that are not truly needed.

In this workshop we're going to try our hand at using stories to derive scope from business goals. This interactive session will not only give you the opportunity to experience story writing, but also to explore the plethora of options for achieving a particular business goal.


Gordon McKeown, TestPlant, “Is GUI automation now an essential part of performance testing?”
GUI automation is increasingly being used for performance testing. There are several compelling reasons for this:

  1. New Web technologies such as HTML5 (WebSocket API) and asynchronous AJAX requests are difficult to handle using the traditional HTTP “replay” approach.
  2. The rise of Mobile clients and innovative device types.
  3. The realisation that client-side behaviour may vary depending on server behaviour under load.
  4. Increasing application complexity.

It has also been suggested that the need for tester programming and technical skills can be reduced and that functional testing scripts can be re-used for load testing.

To keep the discussion concrete, we will compare three different test scripts for the same Web application transaction, using image-based GUI automation, Selenium and HTTP “replay”.
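
For a flavour of two of those levels, here is a hedged sketch of the same (invented) login transaction scripted at the HTTP protocol level with requests and at the GUI level with Selenium WebDriver; the URL, field names and locators are hypothetical, and image-based GUI automation is omitted as it depends on tool-specific image assets.

```python
# An illustrative comparison of protocol-level "replay" versus GUI-level
# automation for one hypothetical login transaction. URLs, field names
# and locators are invented for the sketch.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE = "https://shop.example.com"

def login_http_replay(user, password):
    # Protocol level: cheap to run at volume, but blind to client-side
    # work (JavaScript, rendering, WebSocket traffic).
    session = requests.Session()
    resp = session.post(f"{BASE}/login", data={"user": user, "pass": password})
    resp.raise_for_status()
    return session

def login_gui(user, password):
    # GUI level: exercises the real client, so it captures end-user
    # response times, but each virtual user needs a browser instance.
    driver = webdriver.Chrome()
    driver.get(f"{BASE}/login")
    driver.find_element(By.NAME, "user").send_keys(user)
    driver.find_element(By.NAME, "pass").send_keys(password)
    driver.find_element(By.ID, "submit").click()
    return driver
```

The trade-off the session explores falls directly out of the sketch: the protocol script scales cheaply but models only network traffic, while the GUI script measures what the user actually experiences at a far higher cost per virtual user.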

The session is part briefing, part the pooling of experience and part the airing of views on an important topic for testing professionals.

Richard Neeve, “Software Developers In Test: The Supply-Side Crisis Facing Agile Adopters”
In 2001 I attended a SIGIST conference where a speaker stood before the audience, many of them weary veterans of previous methodology wars, and prophesied that the agile philosophy of which he was a devoted advocate would, unquestionably in his view, take hold and spread into the mainstream in a sustained way. Many scoffed at the notion, but he was right. Whatever you think of agile - and it has its detractors - the undeniable market reality is that many (perhaps even most) working environments are either genuinely agile, think they are, are actively moving towards being so, are seriously planning to be so soon, or at least genuinely aspire to be so in the future.

For a while we testing folk seemed collectively quite unsure about whether or how we fitted into the agile paradigm, but that time is long past and the conflation of development and test remits is one aspect of the consensus outcome, for good or for ill.

And yet, despite the writing having been on the wall for so long, and notwithstanding deliberately provocative talks at the TMF about the limited future of vanilla manual testers, we (the testing community) have been sleepwalking into being caught almost completely flat-footed in attempting to supply what is now a broad-based and rapidly growing demand for SDITs (Software Developers In Test). We simply are not geared up to supply these people in the volume that can satisfy this demand and we are all chasing a very small pool of candidates. I believe this is one of the top challenges facing the testing discipline today; arguably the biggest.

This session will offer a reality check on where we are, identify the dynamics that led us to this position and seek ideas on what we can do to scale SDIT supply. Be warned, I don’t have the answers on that last point so if ever there was a TMF session that relied on active participants, it’s this one.

Richard's slides can be found here

Mike Bartley, “Requirements Management – turning compliance to business advantage”
Requirements-based testing aims to design a necessary and sufficient set of test cases, derived from the product requirements, to ensure that the design and code fully meet those requirements. It is important from both a commercial and a standards perspective.

A number of standards, such as ISO 26262, mandate that certain safety-related requirements have a demonstrable audit trail to implementation and sign-off. A requirements-based testing approach helps with this.

From a commercial perspective it helps to ensure that a product meets all of its requirements. However, it can also help to ensure that every testing activity is associated with a product requirement. This helps to eliminate over-testing, which adds cost and can delay market entry.
In this talk we first explain requirements-based testing and the legal obligations around it for some industries. Then, rather than seeing it as a cost, the talk explains how it can be turned to commercial advantage.
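
A toy sketch of the audit-trail idea, with invented requirement and test names: once every test is traced to a requirement, untested requirements (the compliance gap) and unjustified tests (the over-testing cost) fall out of the same data.

```python
# A toy traceability check. Requirement and test names are hypothetical.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

tests = {
    "test_login_lockout": {"REQ-001"},
    "test_audit_logged":  {"REQ-002"},
    "test_legacy_export": set(),   # traces to nothing: candidate over-testing
}

covered = set().union(*tests.values())
untested = requirements - covered                        # gaps in the audit trail
unjustified = [t for t, reqs in tests.items() if not reqs]

print("Untested requirements:", sorted(untested))        # ['REQ-003']
print("Tests with no requirement:", unjustified)         # ['test_legacy_export']
```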

Mike's Slides can be found here.

Davidson Devadoss and Chris Comey, Testing Solutions Group, “Provision of Assurance on a Data Centre move in less than 90 minutes!”
One of the leading global law firms decided to move their large Asian datacentre from one service provider to another. This would involve physically moving hundreds of servers and a vast amount of storage, backup systems and so on to a new location over a weekend. As the firm operates 24/7/365, other datacentres in the UK provided service during the move. It was imperative that the new datacentre was able to provide service to all of their Asian fee-earners by the following Monday.

A Test Manager was engaged to develop a migration test plan and coordinate all of the testing activities to ensure that the client would not:

  • Lose business
  • Lose revenue
  • Lose productivity
  • Breach regulations.

By using an innovative, practical and collaborative approach to testing, TSG made it possible to successfully complete all of the planning, preparation and testing from the UK, confirming service availability within 90 minutes of handover from the hosting team - giving stakeholders confidence, within an astonishing time frame, that the data centre move had indeed been successful.

Paul Gerrard, Gerrard Consulting, “Introducing Test Analytics”
Paul defines Test Analytics as: “The capture, integration and analysis of test and production monitoring data to inform business and software development decision-making”.

Production monitoring data can be integrated with test (execution) monitoring data. Can this aggregated data *all* be regarded as ‘test monitoring’ data? It takes only a small leap of imagination to treat the logging of system use in a production environment the same as system tests in a test environment. The source is different but the nature of the captured data can be very similar – if we choose it to be so.

Paul will describe how data captured throughout a test and assurance process could be merged and integrated with definition data (requirements and design information) and production monitoring data and analysed in interesting and useful ways.
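
As a rough sketch of what that integration might look like in practice (assuming pandas, with invented file and column names), test execution logs and production monitoring logs can be reduced to a common shape and analysed together:

```python
# A speculative sketch of Test Analytics as defined above: test execution
# logs and production usage logs combined and compared. The CSV files and
# column names are illustrative, not a real schema.
import pandas as pd

test_runs = pd.read_csv("test_executions.csv")    # feature, outcome, timestamp
prod_logs = pd.read_csv("production_usage.csv")   # feature, outcome, timestamp

test_runs["source"] = "test"
prod_logs["source"] = "production"

combined = pd.concat([test_runs, prod_logs], ignore_index=True)

# Compare failure rates per feature across the two environments: features
# failing in production but green in test suggest gaps in the test model.
failure_rates = (
    combined.assign(failed=combined["outcome"].eq("fail"))
            .groupby(["feature", "source"])["failed"]
            .mean()
            .unstack("source")
)
print(failure_rates)
```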

Clearly, it is easier to imagine Test Analytics fitting into a Continuous Delivery regime, but the discipline could and should be appropriate to some degree for all Agile and Structured approaches – at least in principle.

2014 Meetings

2014 Quarterly Forums


February 2014

The 41st Test Management Forum took place on Wednesday 5 February 2014 at the conference centre at Ball's Brothers, Minster Pavement.

For the first time, we charged a fee of £20 plus VAT to attend. We had a fantastic response and, despite a troublesome tube strike, over 70 people attended. Thanks for your continued and future support.

Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15 Craig Hodgson: Test Automation – driving testing efficiencies throughout the delivery process
      Susan Windsor: Step Up Your Career!
      Joanna Newman: Continual Improvement in Testing
15:30 Tea/Coffee
16:00 Peter Adrian: Corporate Strength Agile
      Richard Neeve: Addressing The Apparent Under-Supply Of SDETs: Take Two
      Mike Jarred and Matt Cardle: Test Execution Prediction Modelling and Reporting
17:15 Drinks Reception

Session Abstracts

Joanna Newman, Ericsson, "Continual Improvement in Testing – Using Test Exceptions and other data to improve your coverage, grow your team and improve the quality of your releases"

Test groups generate enormous amounts of data from test tools and as outputs from manual testing. We know that buried within it are indicators and trends that could help us improve the quality of our releases, reduce costs, or a combination of the two.

Following on from the July 2013 session on Big Data, this session will look at different techniques one can use to analyse test exceptions and other types of data for improvements that can contribute to a continual improvement program.

In this session we’ll discuss:

  • The differences between continual and continuous improvement programs
  • What types of data can lead to improvements
  • How long to spend on retrospective analysis
  • How to adopt a culture of review without slowing the team velocity
  • “What Now?” - what to do with the output of the analysis, how to determine and prioritize the various initiatives, and how to monitor them for success

This will be an interactive session so I hope to hear of your examples of using test exception and other data to improve your test processes.

Susan Windsor, Gerrard Consulting, "Step Up Your Career!"
When you think about developing your career, do you think of technical skills or do some of the softer interpersonal skills come onto your radar? As IT professionals, it’s all too easy to prioritise our technical skills. However, with the emphasis today on successful collaborative working, good communications skills are fast becoming essential to maximising your contribution and achieving the recognition and reward you deserve. But where do you start?

This session explores the different aspects of interpersonal skills that really can make a difference to career progression. I’ll outline a number of topics that relate to my own career and then we can discuss and share experiences on the most relevant ones to the group. The topics I’ll introduce are:

  • Recognising communications styles, for yourself and your team mates
  • Understanding cultural and skill differences
  • Adopting a coaching leadership style
  • Re-inventing your personal goals
  • Identifying effective team attributes.

I’ll share some techniques to assist your development in each of these areas and include some hand-outs for you to take away and practice back at base.

With the pressure on testers in the current market, specialisation is critical. If your role includes leadership tasks or team working, you can benefit from improving your interpersonal skills. Softer skills are not a “nice to have” any more; they are becoming essential.

Why not come along and identify your personal development action plan!

Richard Neeve, Independent, "Addressing The Apparent Under-Supply Of SDETs: Take Two"

At last October's TMF I ran a session about the apparent under-supply of Software Development Engineers In Test (SDETs) and offered an analysis of its causal factors, but there were two problems:

  1. I got a bit carried away with my analysis, which meant that although I suggested some potential solutions, we didn't have enough time to discuss them in any meaningful depth.
  2. Despite making a promise to the contrary, my facilitation was too generous in allowing the discussion of more nebulous questions like "What is agile anyway?" and "Is the SDET concept just a marketing thing?".

This second talk will be kick-started with a brutally cut-down version of my previous slide deck and will have a relentless focus on what the solution(s) might be. The full-fat slide deck from my last talk and my notes from the discussion can be found here.

Mike Jarred, Director of Software Delivery, and Matt Cardle, Test Delivery Manager, IDBS, "Test Execution Prediction Modelling and Reporting"

Test departments have been criticised for providing information too late in the SDLC to influence outcomes. This contributes to senior stakeholders viewing testing as not adding value to project teams. Accurate estimation, monitoring and control of test execution can provide early warning signals that testing may not deliver to time and scope targets. The same data can also be used to model options that project teams can use to take corrective action and ensure successful delivery.

Matt Cardle and Mike Jarred work at IDBS and have been evolving the use of predictive modelling in their test projects, and know that it provides valuable information to stakeholders. They are keen to progress this work further by discussing with other practitioners their experiences, and their views on how to bring greater benefit to IT projects through improved modelling.

Based on their experience this session will commence with the following:

  • Rationale of why prediction modelling adds value to projects and organisations
  • The data required to enable modelling to take place
  • The metrics provided by the data, and how the data and metrics are validated
  • An example of a prediction model for testing (a minimal sketch follows below)
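
The sketch below is a deliberately simple illustration of such a model, not IDBS's actual approach: it extrapolates the observed execution rate to forecast a completion date. The numbers are invented, and a real model would also weight re-runs, defect arrival and scope change.

```python
# A toy test execution prediction model: extrapolate observed velocity
# to a forecast completion date. All figures are hypothetical.
from datetime import date, timedelta

total_tests = 1200
executed_so_far = 480
days_elapsed = 12
start = date(2014, 1, 6)

daily_rate = executed_so_far / days_elapsed                  # observed velocity
remaining_days = (total_tests - executed_so_far) / daily_rate
forecast_end = start + timedelta(days=days_elapsed + round(remaining_days))

print(f"Observed rate: {daily_rate:.1f} tests/day")          # 40.0 tests/day
print(f"Forecast completion: {forecast_end}")                # 18 more days
```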

This session will then explore the following:

  • Understand what other techniques for predictive modelling are currently being used
    • Understand what other inputs / metrics could be used when
      modelling outcomes
    • Challenges and constraints of using predictive modelling

    This discussion is aimed at senior practitioners interested in
    having visibility, control and predictability in non trivial test
    execution projects, where scope and time are fixed.

Craig Hodgson, Centre4testing, "Test Automation – driving testing efficiencies throughout the delivery process"
The role of test automation has shifted. Historically, adopters focused on latter-stage system regression testing, largely against the GUI, replacing the manual regression testing burden and facilitating greater test coverage. Today, there is a drive to automate much earlier - whether at unit test level, in integration testing, or at build and deployment stages in CI environments.

Let’s consider the benefits and challenges of earlier adoption of automation, including:

  • Facilitating more efficient testing from the first line of code
  • If we automate earlier and reduce risk, do we even need to automate system regression testing?
  • Who should implement and own test automation?
  • What’s in our toolbox – is a ‘one tool solution’ really achievable?
  • How do we measure success?

In this session, we encourage participants to share their own experiences and consider how they might leverage test automation in ways they may not practice today.


Peter Adrian, Sogeti, "Corporate Strength Agile"

In the corporate environment we are faced with the challenge of delivering programme-level objectives using highly flexible Agile teams, without undermining that flexibility.

This is very similar to the problem faced by the military in the modern, highly mobile combat environment. They need to allow troops at the front the flexibility to respond effectively to tactical situations whilst maintaining the overall operational objectives.

This session looks at how the U.S. Marine Corps have addressed these issues and how their approach could be employed in Agile development programmes. The session will consist of a short presentation, with the majority of the time being an open discussion of the ideas raised.


July 2014

The 43rd Test Management Forum took place on Wednesday 30 July 2014 at the conference centre at Ball's Brothers, Minster Pavement.

Programme/Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15 Mike Bartley, Test and Verification Solutions: Testing the Internet of Things (Mike's Slides)
      Joanna Newman, Ericsson: Approaching Technically Complex Test Projects: Techniques and Lessons from the Coalface
      Tony Bruce, Associate from Equal Experts: Remaining Relevant
15:30 Tea/Coffee
16:00 Paul Gerrard, Gerrard Consulting: Future Organisation of the TMF
      Thorsten Heinze, EC Interactive: What can we Learn from Testing in the Games Industry? (Thorsten's LinkedIn profile)
      Declan O’Riordan, Testing IT: Security Testing – Where are we now? (Declan's Prezi)
17:15 Drinks Reception

Abstracts

Mike Bartley, Test and Verification Solutions, Testing the Internet of Things
The Internet of Things (IoT) refers to uniquely identifiable objects and their virtual representations in an Internet-like structure. It is a fairly loose definition, but it potentially represents the next generation of computing devices: analysts estimate an installed base of approximately 212 billion interconnected things by 2020.

So how do we test such an amorphous object? In this session we will discuss the types of software involved, the attributes we want from such software (such as correctness, performance, safety, reliability, availability, resilience and security) and how we can test for those attributes.

Joanna Newman, Ericsson, Approaching Technically Complex Test Projects: Techniques and Lessons from the Coalface

Interested in improving your Test Management capabilities? In this session, I'll review approaches to dealing with technically complex test projects:

• how to determine a 'pass' when everything is still in beta
• sampling techniques for big data projects
• determining robust and realistic test cases for scalable functions
• managing performance targets and deliverables when the answer really is 'how long is a piece of string'

This will be an interactive session so please bring your stories too!

Tony Bruce, Associate from Equal Experts, Remaining Relevant

The times they are a-changing. Constantly.

What can we - what should we - be doing to ensure we remain relevant?

Is it a case of screaming to be heard to prove our value?

Forgetting about job titles and utilising our skills where needed?

Remembering and using all the buzzwords you can?

Up-skilling? What would you focus on?

This will be an interactive session within which you can share your thoughts.

Paul Gerrard, Gerrard Consulting, Future Organisation of the TMF
This is a special session for those who want to be involved in the organisation of the Forum in future. About fifteen people offered to help with the Forum, and on Wednesday 10th, eight of us had an informal chat about what could be done to improve the organisation (which is barely existent at the moment), to broaden the net to find good topics and session facilitators, and so on. But I also want to suggest that the Forum could be used as a vehicle to promote other testing-related initiatives that align with our principles (such as they are) and advance our profession.

I suspect we won't be creating a working party or formal committee. For the time being, we'll call the group of volunteers "Friends of the TMF".

Thorsten Heinze, EC Interactive, What can we Learn from Testing in the Games Industry?
The games industry is a constantly changing and evolving beast, and you need to keep up with the newest developments at any given moment. With its multitude of game types, genres, hardware and business models, this industry is one of the most complex and tightly knit communities.

Games have changed from simple Jump'n'Runs to complex virtual worlds where players are free to explore and interact, which has also changed development requirements radically.

Several project management styles are merged and blended to fit the needs of the projects and teams that work on these titles, so you don't have a clear agile or waterfall approach: you face anything in between, with a huge focus on flexibility. In the beginning, games were all developed in a waterfall environment with clear, separate phases, while today's projects often start with a vertical slice to please investors and publishers, with content and more gameplay added on top of that.

Additionally, Free 2 Play has started to grow, and this involves a whole different strategy in itself: development creates only a base game, and the interaction of the players and the resulting data decides which direction the development takes, as content is developed according to player preferences - the business model relies on players buying extra content rather than the whole product, as in the traditional model. What did all this mean for QA?

In the beginning, all games were linear and very easy to test - single-player or, at most, local multiplayer. Today's games offer a whole different set of challenges. QA is faced with:

• Open-world gaming, where the player decides in which order to take on parts of the game, creating a multitude of possible buggy areas which do not exist in linear games
• Procedural games, where the game is different every time, making the product testable not completely but only to a certain degree, which requires different testing strategies
• Localisation that has become more complex: it is not only language, but full internationalisation that is required to adhere to cultural sensitivities and legal regulations

The session will roughly follow this direction, with an elaboration on what this fluid environment can offer other industries as a learning base.

Declan O’Riordan, Testing IT, Security Testing – Where are we now?
Before anyone can seek help, they need to recognise they have a problem. The evidence that many organisations and individuals have a problem with computer security is everywhere, but do any of us have a security problem too? If we have, can we help each other cure our security testing problems? The TMF are a self-selecting group at the most passionate and professional end of the testing industry spectrum, but is it possible some of us have an aversion to personally dealing with security testing? Are we projecting too many of our security testing responsibilities onto others, and if so, to whom?

Let’s undertake a security health-check and talk openly about our capabilities and goals. Since the subject is rapidly changing, a journey to complete security probably has no end point, but it does have a direction of travel. Before we can move in that direction we should ask ourselves: “Security Testing - Where are we now?”

October 2014

The 44th Test Management Forum took place on Wednesday 29 October 2014 at the conference centre at Ball's Brothers, Minster Pavement.

Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15 Chris Ambler, Chris Ambler Consulting: Good, Fast, Cheap - The Eternal Conundrum
      Paul Gerrard, Gerrard Consulting: The New Model of Testing – identifying the core skills of future testers (Download slides)
      Ernesto Abad: How strong is your team? The 4 pillars of good testers (Download slides)
15:30 Tea/Coffee
16:00 Gordon Appleby, TrustIV: Performance Testing in the Project Timeline – Gotchas
      Sam Clarke, nFocus: Where do test professionals fit into Application Lifecycle Management in Agile and Scrum?
      Mohinder Khosla: Sketching as a tool for visual thinking and Notetaking
17:15 Drinks Reception

Programme

Sam Clarke, nFocus: Where do test professionals fit into Application Lifecycle Management in Agile and Scrum?

The transition from traditional software development methodologies and processes to Agile and Scrum has raised the issue of the role of testing in small teams. The Agile Manifesto and Scrum framework make no direct mention of Testing or Quality Assurance. The term “working product increment” is the key to this omission, as “working” implies tested (it is rare to develop working products without some element of testing). In addition, sprint retrospectives and continual improvement should cover some aspects of Quality Management, such as process improvement.

But we all know that theory and practice don’t always match - hence the rise of the Testing Profession - and this can be just as true with Agile and Scrum as it was with waterfall methodologies.

Shift left (test early - prevention is better than cure) has widened testing into validation of requirements and test models (for example, early performance modelling). DevOps demands continuous testing in the production environment. Application Lifecycle Management is now high on the agenda.

So what’s the problem? It seems we have an identity crisis: some of us feel threatened, some of us see great opportunities to use our skills in small, dynamic teams, whether in Quality Management or business and technical testing.

Any development team needs a good mix of skills and specialists. We need to understand our strengths, weaknesses and opportunities so we can show our contribution to the team is valuable and necessary, and that we are not blockers to the process.

This will be an interactive session and ALL test professionals are welcome; we have to break the Test Manager/Tester model and come up with something more relevant to this way of working. It is our collective knowledge and experience that will make the difference between success and failure.

Mohinder Khosla: Sketching as a tool for visual thinking and Notetaking

We visualise information in a number of ways: through mindmaps, graphs, charts, imagery and sketches. Visual language is the bedrock of visual thinking, a way of integrating information, communicating and sharing knowledge and insight. Developing these skills therefore improves our thinking, our communication and our collaborative capacity. Sketching, in turn, is considered one of the best ways to give life to our ideas: it aids visualisation and helps develop and validate new concepts.

The workshop will open with a short introduction on how we process information, the left-right brain metaphor, the importance of visualisation in our work, and the 3 basic steps of visual sketchnoting.

This is a hands-on sketching workshop, so don’t forget to bring your favourite markers and sketchnotes, although we will have some supplies handy in case you forget. Exercises are included so you can start building your own library of images to take home for visual notetaking; text and pictures are used as illustrative examples. You may be asked to think of a work or personal situation to sketch when we introduce analog drawing, so come prepared.

A list of tips will be given as a guide, along with selected imagery for your sketchnoting.

Gordon Appleby & Richard Bishop, TrustIV: Performance Testing in the Project Timeline – Gotchas

Performance testing can be one of those unapproached subjects in a programme plan - to some because it's a black art, to others because it can take more effort and cost than the perceived benefits justify.

When internet bank IF.com went live in 1999, it delayed the heavily marketed launch of the 'internet only' bank by three months and opened as a call-centre bank without an online presence. Why? Because the web site could only scale to support 2 users. The loss of credibility the bank faced because of this delay was huge - but imagine the impact if they hadn't performance tested the site.

Performance testing is a specialised topic, for sure, but with careful planning, and knowing where the potholes are in the road to delivery, the value can hugely outweigh the cost to the business.

Gordon and Richard are holding an open debate on the approaches and pitfalls experienced in delivering performance testing, and offer a wealth of experience in avoiding delivery failure. Let's see how your approach can be optimised to perform in this surgery.

Ernesto Abad: How strong is your team? The 4 pillars of good testers

This session has two parts. The first is about discovering whether you have a strong team, a weak one, or something in the middle. I want to discuss what actually makes a strong team.
For me, a strong team:

• consistently delivers features and releases with quality and on time
• gets very little negative feedback from the field on delivered products
• can take on any testing task (functional, performance, usability, automated and manual)
• is held in high regard by other project stakeholders, who feel confident it can deliver
• feels confident itself - there is a can-do attitude
• has positive, motivated and hard-working people, relying on strengths and mitigating weaknesses
• can work with other teams, cross-functionally and cross-company.

So how strong is your team? What can you do if your team is not strong enough? One approach is to work on each of your individuals. How strong are they? Do they even know what makes a good tester?

In my opinion the 4 pillars of good testers are:

• Finding loads of good bugs
• Test plans: plan, write, execute
• Equipment/infrastructure
• Automation

I will expand on those 4 areas and also cover other points that I think are important.
Summary: we will talk about what makes a strong team and what some of the traits of good testers are.

Chris Ambler, Chris Ambler Consulting: Good, Fast, Cheap - The Eternal Conundrum

I would like to discuss the challenges behind achieving the perfect solution to the above conundrum. It's said that you can have any two out of three: cheap and good won't be fast, good and fast won't be cheap, and cheap and fast won't be good. How can we deliver all three together to a customer's satisfaction? It would be good to discuss people's experiences and thoughts on this and come up with some answers.

                Paul Gerrard, Gerrard Consulting: The New Model of Testing – identifying the core skills of future testers

                The 'New Model for Testing' that Paul published in July suggests that the core skills of testers are somewhat different to those being promoted by many of the leaders in the testing field and, of course, the certification schemes. In the paper, only a superficial list of skills that might be relevant to the new model were identified. These included both the obvious skills of critical thinking, interviewing and modelling, but also some perhaps unexpected skills: the Socratic method, computer forensics, predicate logic and proof.

                The model identifies technical skills, but as all practitioners know, it's the interpersonal skills that a tester has that are, perhaps, the most critical.

In this session, Paul will present a refined list of both technical and interpersonal skills and map these to the capabilities that testers in the new world must have. The goal of the session is to discuss and refine this list of skills and mappings to capabilities, to create perhaps a new 'Tester Skills Manifesto'.


                2015 Meetings


                Test Management Forum - 28 January 2015

The 45th Test Management Forum took place on Wednesday 28 January 2015 at the conference centre at Ball's Brothers, Minster Pavement.

                Generously sponsored by Quotium and Grid Tools.

                Timetable

13:00 Tea/Coffee
13:45 Introductions
14:00

                Llyr Wyn Jones, Grid Tools, "A Critique of Testing (Design)"

                Niels Malotaux, Project and Management Coach, "How to Move towards Zero Defects"

                Adam Brown, Quotium, "Agility in Security Testing"

15:15 Tea/Coffee
15:45

Stephen Janaway, The Net-a-Porter Group, "How to Focus On Testing When There Are No Test Managers"

                Mark Gilby, Sopra, "Non Functional Testing in an Agile world"

Joanna Newman, Ericsson, "Tapping millennials: Techniques to attract, retain and motivate millennials"

17:00 Drinks Reception

                Programme

Joanna Newman, Ericsson, Tapping millennials: Techniques to attract, retain and motivate millennials

As Test Managers, you probably already have some millennials* on your team with cutting-edge skills. Or perhaps you will be recruiting soon, and want to understand what millennials look for in a role to ensure you're attractive to them. Regardless, the millennials are coming, and they bring with them skills in high demand coupled with very clear requirements of the workplace (not least the opportunity to jump to new roles and opportunities much faster - which in some ways makes them even more suitable for test roles). Come to this session to hear the latest research on millennials and tips to integrate them into your existing team seamlessly.

                This session will also have a substantial Q and A for us to share our experiences and learn from one another.

*Millennials are the renamed "generation Y", sometimes called the "digital generation".

Llyr Wyn Jones, Senior Programmer, Grid Tools, "A Critique of Testing (Design)"

There are many tools and techniques out there for designing the perfect test cases, but up to now there have been few objective ways to compare them. Part of the problem is the absence of a generic framework where objective analysis is possible. Using a mathematical framework based on information theory, this presentation will cover as many test case design techniques as possible. One key criterion is the idea of measuring the amount of application knowledge that can be encoded into the test case design: it can be shown that the quality of testing is directly proportional to the amount of information encoded. This talk will give a brief overview of the framework (with no intimidating mathematics) and will go on to discuss how each of the techniques under review fares according to the criteria. Unsurprisingly, formal models fare very well under this treatment, and further benefits of such modelling will also be outlined.
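To make the 'information encoded' idea concrete, here is a minimal sketch (my illustration, not taken from the talk): one simple information-theoretic measure is the Shannon entropy of the partition a test design induces over the input domain, so a design that distinguishes more behaviourally distinct classes of input encodes more information. The inputs and class names below are hypothetical.

    import math
    from collections import Counter

    def partition_entropy(labels):
        # Shannon entropy (in bits) of the partition given by per-input labels.
        counts = Counter(labels)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Hypothetical domain of 8 inputs. Design A distinguishes 2 classes of
    # input; design B distinguishes 4, so it encodes more information.
    design_a = ["valid"] * 6 + ["invalid"] * 2
    design_b = ["empty", "empty", "short", "short", "long", "long", "bad", "bad"]
    print(partition_entropy(design_a))  # ~0.81 bits
    print(partition_entropy(design_b))  # 2.0 bits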

                Stephen Janaway, Net-a-Porter Group, How to Focus On Testing When There Are No Test Managers

About 3 months ago my employer decided to remove the Test Management role and transition to a purely product management based structure. This has meant many changes to testing, touching everything from test management to test planning and test execution. In this session we will look at what changes were made, why and how the changes happened, and discuss whether this new view of test management is a better fit for the future in general.

I believe test management needs to adapt more quickly to the new world of cross-functional, agile teams and continuous delivery. I hope through this workshop and structured discussion that the audience will be able to compare the changes that I've experienced to their own situation, and decide whether there's a need for change in their own roles and the Test Manager role in general.

                Key Points:

                • The role of Test Management in the agile world today.
                • How Test Managers can adapt.
                • Ways in which the whole team benefits from a new approach to the Test Management role.
• How to form a strong testing community and foster bottom-up learning.
• Sharing of experiences of having gone through the move away from Test Manager.
• How you can prepare for the future.

                Niels Malotaux, Project and Management Coach, How to move towards Zero Defects

How many defects would you like to find? How many issues would you accept users experiencing? If you don't think the answer should be "Zero!" you'd better come and discuss, otherwise your team may gradually be put out of business by those who have learned to achieve Zero Defects.

Of course it's the development team that should prevent defects and make sure the users don't experience any problems. Some testers fear that if there are no defects, they may not be needed anymore. Don't worry. Better to discuss what testers can do to help the developers achieve this goal. Testing becomes even more challenging and interesting once no issues are found.

In this session I'd like to discuss with you what Zero Defects actually means: that it is not 'turning a switch' upon which we suddenly stop making mistakes, but an attitude, one which improves the quality of deliveries dramatically, almost overnight. We'll also discuss what 'Root Cause Analysis' really means, as there seems to be a misunderstanding here, even among testers, and how a simple mantra, "No questions, no issues", proved a good technique for turning the Zero Defects concept into practice.

                Adam Brown, Quotium, Agility in Security Testing

                In the run up to Christmas we saw some interesting details about the affairs of a large media company, from internal emails to wage discrepancies and casual racism. As a member of the public, this might be interesting, but not being directly affected, it probably wouldn't keep you awake at night. But if Santa brought your kids a certain console and you tried to connect it to certain online games - you would think differently. At that point it becomes personal.

                Security is of course risk based; what's really at risk is data. All organisations have critical data and all that data has risks associated with confidentiality, integrity and availability (CIA). Testers have a lot of expertise when it comes to risk that I believe could be very valuable when calculating and managing application security risk.

For every build and every release of every application that risk is present and must be managed. If we test an application at one point in time we can say we know the state of that application at that time, but what can we say about the next release? As we did with testing, a process should be built around application security, and judging by the leaks we see in the press, that process isn't always there.

                In the talk I'd like to get us started with some real world examples from the press & some of the top 10 Application Security risks today, and then discuss:

• How can we calculate the cost of data risk, and what is the cost of security testing in an agile project? (A back-of-envelope sketch follows this list.)
• What can we do to reduce risk from each release of an application? Is it important to test each build?
• What methods, technologies and services exist to help evaluate the security of an agile application?
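On the first question, a back-of-envelope sketch (my illustration, not Adam's method): one common way to put a number on data risk is annualised loss expectancy, ALE = single loss expectancy x annual rate of occurrence, which can then be weighed against the cost of security testing. All figures below are hypothetical.

    def annualised_loss_expectancy(single_loss, annual_rate):
        # ALE = cost of one incident x expected incidents per year.
        return single_loss * annual_rate

    # Hypothetical figures for a customer-data breach on one application.
    ale = annualised_loss_expectancy(single_loss=500_000, annual_rate=0.1)
    security_testing_cost = 15_000  # hypothetical annual cost
    print(f"ALE £{ale:,.0f} vs testing cost £{security_testing_cost:,}")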



                Test Management Forum - 29 July 2015

The 47th Test Management Forum took place on Wednesday 29 July 2015 at the conference centre at Ball's Brothers, Minster Pavement.

                Session A

                Mike Bartley, Test and Verification Solutions, 'Constrained Random verification for Software'

                 

                Session B

                James Thomas, Linguamatics, 'You're Having a Laugh'

                Session C

Mark Winteringham, MW Test Consultancy, 'What's so great about WebDriver?'

                 

                Session D

                James Walker, Grid-Tools, 'Is size everything? An investigation of testability in regards to software complexity'

                Session E

                Chris Ambler, Edge Testing Solutions, '1984 in 2015'

                Session F

                Paul Gerrard, Gerrard Consulting, 'State of the Automation'

                 

                Programme

                James Thomas, Linguamatics, 'You're Having a Laugh'

                As a test manager I don't test as much as I'd like to so I try to find ways to stay loose and ready for those occasions where I get the chance.

                In this talk I'll describe one activity based on joking that I think can fit the bill. How? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or just making a surprising connection.  Jokes can make you think and then laugh. But they don't always work. Does that sound familiar?

It started with the weekly caption competition at Linguamatics, where I noticed parallels between my approach to it and testing. For instance, I might take each of the key entities in the picture and "factor" them - generate a list of features, related concepts, synonyms and so on. In testing I might then look for overlapping factors for potentially interesting test ideas; in the quest for a caption I might use the same approach to find an ambiguity and hence a joke. Doing this, I've found analogies between joking and concepts from testing such as oracles, heuristics, factoring, stopping strategies, bug advocacy and the possibility that a bug, today, in this context might not be one tomorrow or in another.

I'm interested to find out from the audience what things they "just do" that they feel help them. James' slides can be found here.

                Mark Winteringham, Surevine, 'What's so great about WebDriver?'

'As testers we seem to be doomed to make the same mistakes again and again when it comes to test automation, because we have limited ideas in our approach to solving problems with automation. But what if these limitations aren't the fault of the tester but of the testing community and industry? What if we aren't doing enough to promote different tools and ideas? In this discussion I will explore this issue in detail before opening up a discussion about how we can improve as a community and industry in promoting different tools and approaches.'

Mark's slides are here: http://www.mwtestconsultancy.co.uk/presentations/webdriver/#/

                Chris Ambler, Edge Testing Solutions, '1984 in 2015'

Sometimes things come together in my mind and end up as big questions. The whole Big Data concept has made me think about George Orwell's 1984 and how powerful information really is in our technology-driven world. We are already seeing how data can be used to manipulate habits and drive people's thinking. The Internet of Things is leading us towards a 'big brother world' where gathering information is becoming easier and more apparent, with storage methods and technology improving every day. I'm not trying to paint a bleak picture of the future, more attempting to provoke thinking about things we need to get control of to ensure our future technologies work for us and not against us. This talk discusses these issues and how we need to deal with them as testers.

                Mike Bartley, Test and Verification Solutions, 'Constrained Random verification for Software'

                Based on this paper: Constrained Random verification for Software

Slides are here, and a paper on the topic is here: http://www.testandverification.com/wp-content/uploads/2015/Whitepaper%20-%20Coverage%20based%20verification%20for%20Software%20testing.pdf?f2fd1d

                James Walker, Grid-Tools, 'Is size everything? An investigation of testability in regards to software complexity'

Testability is formally defined as the capability of a software product to be validated: in other words, the level of effort required to test a system against its requirements. A high level of testability is desirable, because it improves quality and helps detect defects with the minimum of resources. However, this is often unrealistic to achieve given the ever-increasing number of inter-dependencies which arise in software through maintenance and enhancement; this is referred to as software complexity. Improper design can lead to an increase in software complexity, and therefore lower testability. Many methods have been proposed for measuring complexity; measuring testability, however, is still often regarded as an unsolved problem in the testing domain. In this session we explore methods for increasing the testability of a system (to localise faults) with the minimum of resources by exploiting good system design.
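To give one concrete example of a complexity measure that bears directly on testability, here is a minimal sketch (my illustration, assuming the code is modelled as a control-flow graph): McCabe's cyclomatic complexity, M = E - N + 2P, counts the linearly independent paths through the graph and so gives a lower bound on the test cases needed for path coverage.

    def cyclomatic_complexity(edges, nodes, components=1):
        # McCabe: M = E - N + 2P, the number of linearly independent paths.
        return len(edges) - len(nodes) + 2 * components

    # Hypothetical control-flow graph: one if/else followed by one loop.
    nodes = ["entry", "if", "then", "else", "loop", "exit"]
    edges = [("entry", "if"), ("if", "then"), ("if", "else"),
             ("then", "loop"), ("else", "loop"),
             ("loop", "loop"), ("loop", "exit")]
    print(cyclomatic_complexity(edges, nodes))  # 7 - 6 + 2 = 3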

                Paul Gerrard, Gerrard Consulting, 'State of the Automation'

                Approaches such as Behaviour-Driven, Acceptance Test and Test-Driven Development are becoming increasingly popular. DevOps, Test Analytics and production experimentation are emerging and influencing more and more software businesses. The Internet of Things (soon to be 'Everything') is on the horizon but will soon be upon us.

The disciplines of DevOps and the dependency on automation (of build, test, deployment and monitoring) are succeeding where structured/Waterfall approaches failed and where the 'softer' methods of Agile are inappropriate.

                "Automation is the future!" But what exactly is possible and impossible with automation, right here, right now? Where are the DevOps and Continuous Delivery crazes leading us? Where do testers fit? How do testers work in high-paced, automation-dominated environments?

                Let's look into the near future and discuss how we survive or thrive.

                Test Management Forum - 28 October 2015

The 48th Test Management Forum took place on Wednesday 28 October 2015 at the conference centre at Ball's Brothers, Minster Pavement.

                Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15

                Session A

                Paul Wilford, Exploratory learning - throwing away the slideware

                Session B

                Graham Thomas, A hierarchy of Software Testing Measures and Metrics – Discuss?

                Graham's Slides

                Session C

                Gordon Baisley, Let’s Do Everything All At Once!

                Gordon's Slides

                 

15:30 Tea/Coffee
16:00

                Session D

                Emily Fielding, You’ve crowdsourced your hotel and your taxi, what about your testing?

                Emily's Slides

                Session E

                Margaret Edney, Keep Your Head When All Around Are Losing Theirs

                Margaret's Slides

                Session F

                Paul Gerrard, What's all the fuss about DevOps?

                Paul's Slides

17:15 Drinks Reception

                Programme

Margaret Edney, Thomson Reuters, Keep Your Head When All Around Are Losing Theirs

                As a Test Manager it so often seems that whenever things go wrong it’s always the Test team’s fault. Why is it that the Test Manager is expected to explain why the release is late and has bugs?

We missed a test. We put the bugs in, didn't we? We're told we should automate all our tests. I wish I had a magic wand that would automate all our tests – instantly.

Then there's the Project Plan. The PM has scheduled testing of account changes, starting on Tuesday. Unfortunately, the page wasn't ready, but something else was ready early. A tester started work on that and upset the Project Manager's plan.

Everybody (and his dog) has suggestions for how the project should be tested. And, to finish the day, two people from the test team got seconded to another project for 6 months.

My first question is – what can I control? If I have no control over it, I don't waste time trying.

                • I can push for the team to be involved in the design meetings. We have the concept of smiley faces – designers, product owners, developers, users and testers. If there’s a face missing from the meeting then there’s a potential problem.
                • I can control our test automation, though I do have to push for enough time to build the framework.
                • I can control the quality gates.
                • I will control the order of testing and what we test.
                • I can’t control losing people.
                 

                Gordon Baisley, Independent, Let’s Do Everything All At Once!

                As the UK’s biggest government department the Department for Work and Pensions administers the State Pension and a range of working age, disability and ill health benefits to over 22 million citizens. 

Triggered by the cross-government 'Digital by Default' policy, the model for IT provision at DWP is going through wholesale change: from a heavily managed service, using major suppliers only, very waterfall, with a supplier-aligned organisation and commercial tools only, towards in-house or small-supplier resourcing, an agile focus, a business-unit-aligned organisation and open source tools. And this is happening all at once and at large scale. In Testing, for example, there are around 800 people, including suppliers.

                We’d like to talk about how we're tackling this change in the DWP Test community. We hope we’ll share some ideas that will be useful to people making one or more similar changes in direction. Equally, we hope to hear recommendations from others who have been successful by doing things we've not considered.

                Graham Thomas, Badgerscroft, A hierarchy of Software Testing Measures and Metrics – Discuss?

                This session has been inspired by following the journey of Solar Impulse 2 (SI2), an aeroplane powered only by solar power, as it circumnavigates the globe.

Last June SI2 flew from Japan to Hawaii. It took six days, using only solar power: a remarkable feat which smashed all solar-powered flight records.

Even more amazing was that, as the plane flew, it transmitted data which was published live, in real time, on the internet at solarimpulse.com, giving a real insight into the flight.

This got me thinking. Firstly, wouldn't it be amazing if our software testing projects could broadcast live, real-time metrics and measures just like Solar Impulse? Secondly, looking at the information being sent back, the mix of flight control data, route information and the overall journey around the world, there was a clear alignment with Maslow's hierarchy of needs, from safety through to self-actualisation. And I wondered whether there was an opportunity for us to revisit our testing measures and metrics, look at whether they also align with Maslow, and see what we could learn from that.

                Emily Fielding, Testbirds, You’ve crowdsourced your hotel and your taxi, what about your testing?

In the same way that Uber crowdsources its drivers, crowdtesting is a relatively new form of software testing that leverages a filterable online community of many thousands to cover core device and demographic requirements for QA and UX insights. This allows for a greater diversity of devices in pre- or post-release testing, as well as the opportunity for end users themselves to contribute feedback to the software development roadmap. Many crowdtesting companies position themselves as a complement to in-house testing by QA professionals, but what does the TMF think? A valuable tool, a possible threat, too difficult to implement? Emily from Testbirds Ltd will lead a discussion around the pros and cons of crowdtesting as a methodology and explore where in the SDLC it might fit, along with its limitations and best practices.

                And what of the crowdtesters themselves? Crowdsourcing as a form of employment revolves around the ability to provide a variety of solutions in the form of services, ideas or content by soliciting contributions from a group of people, particularly using the internet. However, due to a lack of regulations in this young industry, there are risks of exploitation, poor treatment and a lack of accountability, as has been seen in high-profile Uber lawsuits. What measures are needed to protect crowd sourced testers, while still delivering value to the clients that indirectly engage them?

                 

                Paul Wilford, Hiscox, Exploratory learning - throwing away the slideware

Much Agile training seems to be focussed on learning about specific methodologies, all of which need a degree of rigidity about them in order to maintain an identity. This presents a rather interesting situation where Agile teams 'learn' certain things that could be considered contradictory to some of the fundamental Agile principles.
                So what is the solution? In this session we will discuss our experiences of running Agile training courses which have only concepts and principles driving them. Not a single slide in sight.

                 

                Paul Gerrard, Gerrard Consulting, What's all the fuss about DevOps?

                DevOps is hot news nowadays. Should testers care, and if so, why?

Some years ago, Continuous Integration (and testing) emerged and became all the rage as developers began to recognise that testing with tools gave them a more comprehensive safety net for a Test-Driven or Behaviour-Driven Development approach. Continuous Delivery (founded on CI) was heralded as the only way to provide the flexibility and speed of delivery to satisfy innovative businesses. CD provides the flexibility for businesses to continuously innovate through experimentation and feedback in the 'digital world'.

DevOps is becoming the preferred approach to build, deploy and release into test and production environments in the cloud. Recent innovations such as 'infrastructure as code' attempt to eliminate the human factor in these processes completely. Does process naturally migrate to code, or is DevOps a power play? Developers have the tools to implement most of the testing; are they now taking over Operations?

The question arises yet again (after Agile, CD, CI and DevOps): where do testers fit?

                2016 Meetings


                Test Management Forum - 27 January 2016

The 49th Test Management Forum will take place on Wednesday 27 January 2016 at the conference centre at Ball's Brothers, Minster Pavement.

                Sponsored by CA Technologies


                Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15

                Session A

Rod Armstrong, EasyJet, Asia Shahzad, Hotels.com, "Brave New World – a Workshop on Fostering Change"

                Rod and Asia's Slides are here

                Session B

                Rob Lambert, New Voice Media, "Our Industry Needs Better Managers. Period"

                Rob's Slides are here

                Session C

                David Parker, Metaswitch Networks, "Early lessons in testing the Internet of Things"

15:30 Tea/Coffee
16:00

                Session D

                William Sault, CA Technologies, "Requirements from Data – Auto-Finding the Logic in the Data"

                Will has written a blog post on the discussion here.

                Session E

                Paul Gerrard, Gerrard Consulting, "Digital, Testing and Automation"

                Session F

Richard Bishop, TrustIV, "Network Virtualisation in Testing"

Richard's slides are here, and here's a link to a video of him presenting at the North West Test Gathering earlier this month.

17:15 Drinks Reception

                Abstracts

                Rob Lambert, New Voice Media, "Our Industry Needs Better Managers. Period"

If we, as a community of testers, don't pull our thumbs out and our socks up, we will be left behind as "development" managers come predominantly from a development background (which may not be a bad thing, but who's going to advocate at an exec level for good testing?)

                The idea of a "Test Manager" and a "Development Manager" running side by side is out-dated and managers of the future (and now) will manage many disciplines/skills - but where does this leave "Test Managers" who mostly only know how to manager "Testing" and not "Testers".

                It will be an interactive discussion which will cover these topics:

• Is the world of Test Management really that bad? (I believe it is)
• Is the world of Test Managers really being disrupted by multi-discipline Dev Managers? (I believe it is)
• Why do our conferences not focus on creating good managers who can manage people, instead of focusing on the process of managing "testing"? Testing and Testers are two different things.
• What topics could we suggest to conferences?
• What can we collectively do within the testing community to help nurture the next generation of Development Managers who care about good testing?
• Why are so many people suffering in companies with bad management and not revolting on the streets and starting riots? Or at least protesting slightly louder than they appear to be? (life is short and all that)

It's clear from my attendance at conferences over the last year that people are being strangled in their careers and growth by bad management... but how can we solve this problem?

                Paul Gerrard, Gerrard Consulting, "Digital, Testing and Automation"

Digital is the buzz-phrase of the moment. Almost all the people that I know working on projects of some size say their project is part of a 'Digital Transformation'. Transformations in business and IT of any flavour are ambitious and not undertaken lightly. Digital must be important then. But what is it?

It could be labelled: "Software development at the pace of marketing".

                Marketers see IT as the vehicle for doing rapid, continuous, relentless experiments in parallel with rapid, continuous, relentless delivery of new features to keep up with the Digital competition.

The barrier to achieving success in the software delivery process is likely to be the inability of testers to align testing, and automated testing in particular, to the development processes. Our track record in test automation is not good enough.

                Paul will describe what the 'Digital Revolution' is and how it affects testing. There are some serious issues to consider, not least of which is that testing may be seen to be the bottleneck. How will we avoid that?

                William Sault, CA Technologies, "Requirements from Data – Auto-Finding the Logic in the Data"

Applications are growing ever more complex, and are producing more live system data than ever before. This is only getting worse, while testers sometimes find that documentation is scarce and knowledge of these complex systems is confined to subject matter experts. Imagine you are facing a massively complicated system, where solid documentation and subject matter expertise are unavailable. One available source of information is the data that goes in and the data that comes out. This data can be found in numerous forms, such as live traffic, web logs and user interactions. How do you move from this to sufficiently testing a system?

This session will consider how, starting with complex live data, the knowledge of a system needed for rigorous testing can be derived. It will consider how data visualization can be used to model complex data, before applying rule-learning algorithms to reverse-engineer the constraints which led to the data flow. Not only does this help resolve the challenge of "technical debt", but it can be used to construct an accurate picture of the logic behind live systems, which might then be used to drive Model-Based Testing. Test cases and automated tests can be systematically derived from this model, in effect moving from unwieldy data to the smallest set of test cases needed for rigorous testing.
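As an illustration of the rule-learning step, here is a minimal sketch (my own, under my own assumptions, not CA's actual tooling): a decision tree fitted to logged inputs and observed outcomes yields root-to-leaf paths that read as candidate business rules, which can then be verified with subject matter experts and folded into a model for testing. The log fields and values below are hypothetical.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical log extract: two input fields and the observed outcome.
    log = pd.DataFrame({
        "amount":   [10, 250, 999, 1200, 40, 3000],
        "verified": [1, 1, 0, 1, 0, 0],
        "outcome":  ["ok", "ok", "reject", "review", "reject", "reject"],
    })
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(log[["amount", "verified"]], log["outcome"])

    # Each root-to-leaf path prints as a candidate rule to verify.
    print(export_text(tree, feature_names=["amount", "verified"]))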

                David Parker, Metaswitch Networks, "Early lessons in testing the Internet of Things"

Over the last decade, the introduction of Voice over IP technology has resulted in millions of phones in homes and businesses being replaced by smart IP-enabled devices connected via the internet. This early wave of the "Internet of Things" has presented a number of specific challenges for testing, many of which will be applicable as the Internet of Things explodes to new levels of scale and complexity in the coming decade.

                In this session, I will share some of our experiences testing Voice over IP solutions and IP networking software over the past decade. I’ll pull out the themes which I think will be particularly relevant for those testing IoT devices and services in the future. Here’s a taster of some areas which we may cover.

                • Testing without a GUI (or with a very limited one)
                • Machine-machine interfaces - Embedded software
                • Complex interoperability requirements
                • Testing at massive scale (millions of connected devices)
                • Bulk failure or load spike scenarios (e.g. citywide power failure and recovery)
                • Complex distributed systems
                • Tooling/automation. These can be specialist and therefore expensive - building your own can be more cost effective.
                • Software/firmware upgrade of remote devices
                • Security concerns for non web application software
                • Impact of underlying network performance
                • Managing configuration on remote devices
                • Analytics and diagnostics

                Rod Armstrong, EasyJet, Asia Shahzad, Hotels.com, "Brave New World - workshop"

In our interactive session Brave New World we will introduce a thinking tool that will allow you and your organisation to not only accept change, but to actively embrace it and follow a flow that takes that change through to a conclusion. We will illustrate the concepts with some examples of successes, and also with some significant failures within organisations that did not successfully apply this kind of model.

We anticipate that this tool will be relevant to everyone, no matter what their specific level or role within an organisation, and when applied it can lead to you looking at all situations, both professional and personal, with a new perspective.

Richard Bishop, TrustIV, "Network Virtualisation in Testing"

Network virtualisation helps you to emulate real-world environments, adding realism to your functional and performance tests. In this session, Richard will introduce the concept of network virtualisation in testing and include a short video demonstration of HPE Network Virtualization (formerly Shunra) in use for both performance and functional tests.

After the demonstration and an overview of the potential "use cases" for this technology, Richard will host a Q&A session and discussion with the attendees on the impact of the network in production and test environments, both from a customer-facing perspective and when considering the performance of third-party links.

                The presentation and subsequent discussion will draw on real world examples of mobile network performance based on the use of network performance monitoring tools within the UK as well as anecdotal information from other sources.


                Test Management Forum 27 July 2016

The 51st Test Management Forum took place on Wednesday 27 July 2016 at the conference centre at Ball's Brothers, Minster Pavement.

                Sponsored by CA Technologies

                Timetable

13:30 Tea/Coffee
14:00 Introductions
14:15

                Session A

                Stevan Zivanovic, "New Technology, New Approaches"

                Stevan's Summary of the session is here.

                Stevan's slides are here.

                Session B

                Paul Gerrard, "Will Robots Replace Testers?"

                Paul's slides are here

                Session C

                Andy Redwood, "Quick wins in Testing that actually work"

15:30 Tea/Coffee
16:00

                Session D

Huw Price, CA Technologies, "Manual Testing is Dead: A Real World Story of moving from 5% Functional Test Automation to 95%"

                Session E

                Paul Gerrard, "Surveying - the New Testing?"

Paul's slides are here

                Session F

                Declan O'Riordan, "Blended Security Testing"

17:15 Drinks Reception

                Abstracts

                Paul Gerrard, "Will Robots Replace Testers?"

                A recent study from the University of Oxford makes for interesting reading:

                • Over the next two decades, 47% of jobs in the US may be under threat.
                • 702 occupations are ranked in order of their probability of computerisation. Telemarketers are deemed most likely (99%), recreational therapists least likely at 0.28%. Computer programmers appear to be 48% likely to be replaced.

If programmers have a 50/50 chance of being replaced by robots, we should think seriously about how the same might happen to testers.

                Machine Learning in testing is an intriguing prospect but not imminent. However, the next generation of testing tools will look a lot different from the ones we use today.

                For the past thirty years or so we have placed emphasis on test automation and checking. In the New Model for Testing, I call this 'Applying'. We have paid much less attention to the other nine - yes, nine - test activities. As a consequence, we have simple robots to run tests, but nothing much to help us to create good tests for those robots to run. 

The tools we use in testing today are limited by the approaches and processes we employ. Traditional testing is document-centric and aims to reuse plans as records of tester activity. That approach and many of our tools are stuck in the past. Bureaucratic test management tools have been one automation pillar (or millstone). The other pillar, test automation tools, derives from an obsession with the mechanical, purely technical execution activity and is bounded by an assertion that many vendors still promote: that testing is just bashing keys or touchscreens, which tools can do just as well.

                In this session we'll discuss the capabilities of the tools we need in the future.

                Andy Redwood, "Quick wins in Testing that actually work"

                As a senior test manager I don’t just have a responsibility to prove or disprove the changes or attempt to provide evidence of quality, although these are important. I also have to maximise services from the available budget and look for savings and efficiencies.

In reality I'm trying to achieve all of these objectives at the same time, and I have had some wins on quite niche, small pieces of work and also when creating economies of scale across life-cycle changes to policies or working practices.

                I will share some of my experiences large and small and explain the enablers and the constraints and provide some indication (within the confidentiality agreements) of the value of these initiatives.

                Stevan Zivanovic, "New Technology, New Approaches"

                From a recent discussion with a C Level executive: "I want to Digitally Transform, using micro-services, in the cloud and I need a big data solution".

This all appears to be a big shift from the more traditional application stack that we as testers know, understand and have delivered against. So what do technology changes such as Digital, micro-services and big data actually mean for the tester?

                This session will look to explore what some of these terms mean and the strategic approaches to testing, whilst encouraging participation from the group.

                Declan O'Riordan, "Blended Security Testing"

                ‘Blended Testing’ sounds attractive and harmonious, like ‘intelligent testing’ or ‘world peace’, but what does blended testing really mean? Which ingredients are being combined, and how? Is it possible to blend what have traditionally been categorised as ‘functional’ and ‘non-functional’ tests? The very separation of security, performance, usability, and functional testing into different spheres indicates their test approaches are naturally separate.

                Until recently, anyone attempting to simultaneously blend functional tests with non-functional tests would only be able to exploit small overlaps in the coverage, while the majority of tests would still require a bespoke approach. The acceleration of deliveries driven by agile and DevOps frameworks has led to large areas being left out of scope, particularly in security testing. The lack of skilled staff and abundance of inaccurate tools has created a security testing bottleneck which either stifles the delivery pipeline, or security testing is bypassed as huge risks are simply ignored.

                Fortunately, a new breed of sensor technology has recently emerged that provides amazing levels of security and performance attribute details at tremendous speed. Best of all, these security vulnerability detection tools don’t need specific security tests or skills to achieve 92%+ accuracy. Any test data prompts the sensors into action. Is blended testing becoming a reality? Let’s discuss!

                Huw Price, "Manual Testing is Dead: A Real World Story of moving from 5% Functional Test Automation to 95%"

It is a mistake to think that automating test execution will deliver a "silver bullet" for testing woes, eliminating testing bottlenecks while detecting the greatest number of defects possible. Even with a good automation engine, the manual effort of deriving test scripts often remains, and the defects created by ambiguous requirements will persist. The lack of a systematic method in any manual approach further means that test coverage will remain low, and defects will continue to be detected late.

                In this talk, Huw Price will provide a real world example of how Model-Based Testing has been used to make testing more rigorous, fully automated, and more reactive to change.

The session will set out how fully automated testing started with the BAs modelling system requirements as unambiguous, "active" flowchart models. These were taken by testers and developers, verified, and iteratively refined to match the end-user's desired functionality more closely. At the same time, every test case needed to cover 100% of a system's functionality was derived from the mathematically precise model, and optimized to reduce execution time.

The right data and expected results were also derived automatically from the model, and the automated tests were updated automatically when the requirements changed. This approach therefore automated the bulk of the testing effort, so that rigorously tested software could be delivered even as user requirements constantly change.
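To illustrate the derivation step, here is a minimal sketch (my illustration, assuming the flowchart model is represented as an acyclic directed graph): every root-to-sink path through the model becomes one test case, so the complete set of paths is the complete set of cases. The login flow below is hypothetical.

    def all_paths(graph, node, path=None):
        # Depth-first enumeration of root-to-sink paths; each complete
        # path through the model is one test case.
        path = (path or []) + [node]
        children = graph.get(node, [])
        if not children:
            yield path
            return
        for nxt in children:
            yield from all_paths(graph, nxt, path)

    flow = {  # hypothetical login flowchart as an acyclic directed graph
        "start": ["credentials"],
        "credentials": ["valid", "invalid"],
        "valid": ["home"],
        "invalid": ["retry prompt"],
    }
    for case in all_paths(flow, "start"):
        print(" -> ".join(case))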

                Paul Gerrard, "Surveying - the New Testing?"

The pressure to modernise our approaches, to speed up testing and to reduce the cost of, and dependency on, less-skilled labour means we need some new ideas. I have suggested a refined approach using a Surveying metaphor. This metaphor enables us to think differently about how we use tools to support knowledge acquisition.

The Survey metaphor requires new collaborative tools that can capture information as it is gathered, with little distraction or friction, but that can also prompt the user to ask questions and to document their thoughts, concerns, observations and ideas for tests. In this vision, automated tools get a new role: supporting tester thinking, but not replacing it.

                Your pair in the exploration and testing of systems might soon be a robot. Like a human partner, they will capture the knowledge you impart. Over time they will learn how to support and challenge you and help you to navigate through your exploration or Surveying activity. Eventually, your partner will suggest ideas that rival your own. But that is still some way off.

                Paul will outline the survey metaphor and, technology willing, demonstrate how a bot might be used as your pair.

                Test Management Forum 26 October 2016

The 52nd Test Management Forum took place on Wednesday 26 October 2016 at the conference centre at Ball's Brothers, Minster Court.

                Sponsored by Cognizant

                Timetable

                Session A

                Thomas Crabtree, Ten10, "Is Mobile test automation pointless?"

                 

                Session B

                Ian Howles, Cognizant, "SMAC Your Quality, How Social, Mobile, Analytics and Cloud technologies are reshaping QA and Change"

                Ian's slides are here

                Session C

                Susan Windsor, WMHL Consulting, "Is there value in Test Assurance when Testing is Outsourced?"

                Session D

                Mike Bartley, TVS, “IoT Device Testing: can we provide assurance in the new wild west?”

                Session E

Scott Tolley, Synopsys, "Security Test Optimisation Through Prioritisation"

                Session F

                Paul Gerrard, "Does Test Management Exist? Should TMF Move with the Times?"

Paul's slides are here

                Programme

                Ian Howles, Cognizant, "SMAC Your Quality, How Social, Mobile, Analytics and Cloud technologies are reshaping QA and Change"

                This session will be a discussion on our futures as testers and how we deal with the massive changes ahead.

                The world we live in today is changing faster than ever, the trends for change are evolving on a daily basis, starting with Social, Mobile, Analytics and Cloud, and evolving into the Internet of Things (IoT). 

We are living in an increasingly connected, always-on world. Estimates suggest that in the next four to five years there will be over 50 billion connected devices and sensors. The services we consume based upon this hyper-connectivity are morphing to become more and more customer-centric.

In this session we will look at the effects of the 'Digital' change and pose the following questions for debate:

• What does the immediate future hold?
• Where next?
• How, if at all, can we assure quality in the future?
• How do we as individuals deal with the changes?

                And finally...

As we move to autonomy and AI, how do we assess the ethics of the products that will be making decisions on our behalf in the future?

                Mike Bartley, TVS, “IoT Device Testing: can we provide assurance in the new wild west?”

The Internet of Things (IoT, aka M2M) refers to an expanding network of interconnected internet-enabled devices. In the future everything will be connected; current thinking is that only 2% of items in the world are connected, so there is still 98% to go! Estimates suggest that by 2020 there will be in the region of 50bn IoT devices, all talking with one another on a constant basis.

If you are a manufacturer, solution builder or service provider, how will you ensure that your solution works? Will it stay connected to the different access points? IoT is going to drive the importance of interoperability between different markets/sectors and technologies. Customers today vote with their feet and are not loyal to poorly performing products. How will you test your product under varying network conditions? How will your device work in the wild?

In this talk we will discuss what is required of an "IoT device testing lab" – e.g. conformance to all the standard protocols, connection with the standard network providers, some basic security testing.

We will also discuss whether we should consider an "IoT Kitemark" to provide some level of assurance.

Scott Tolley, Synopsys, "Security Test Optimisation Through Prioritisation"

Optimising testing through prioritisation: there are rarely enough hours in the day, or enough QA engineers on the payroll, to run every kind of test, on every part of the system, after every tiny change delivered by the developers. Words and phrases such as "agile methods", "DevOps" and "continuous anything" generally mean shorter QA cycles. And that means making more choices: not just about what to test, but also where to start. Scott Tolley shares some personal experience (and opinions) on technical approaches to improving test optimisation through prioritisation, both internally at Synopsys, his employer, and with clients in the field.
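As one concrete flavour of prioritisation, here is a minimal sketch (my own illustration, not necessarily Scott's approach): rank tests by a simple risk score, for example recent failure rate weighted with the churn of the code each test covers, and run the riskiest first when the cycle is short. The test names, figures and weights below are all hypothetical.

    # Hypothetical tests with a recent failure rate and the churn of the
    # code they cover (both normalised to the range 0..1).
    tests = [
        {"name": "checkout", "fail_rate": 0.20, "churn": 0.7},
        {"name": "login",    "fail_rate": 0.05, "churn": 0.9},
        {"name": "search",   "fail_rate": 0.01, "churn": 0.1},
    ]

    def risk_score(test):
        # The weights are arbitrary; tune them against your own history.
        return 0.6 * test["fail_rate"] + 0.4 * test["churn"]

    # Run the riskiest tests first when the QA cycle is short.
    for test in sorted(tests, key=risk_score, reverse=True):
        print(f'{test["name"]}: {risk_score(test):.2f}')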

                Susan Windsor, WMHL Consulting, "Is there value in Test Assurance when Testing is Outsourced?"

                As many organisations have chosen to outsource their testing (and an increasing number of suppliers offer this service), I want to explore the role of testing within the client organisation.  Does it exist and if so, what is its role?

                Test assurance is increasingly an option for client organisations seeking to improve the management and delivery of their supplier’s outsourced service.  However, is this just a trend or can it really add value? 

To introduce this session, I'll look at both sides of this argument and propose pros and cons for each. Then we'll open up the discussion in true TMF fashion for debate.

                If your organisation buys or delivers outsourced testing, this session will really benefit from your experience.  

                Thomas Crabtree, Ten10, "Is mobile test automation pointless?"

                Automating your testing is seen as a de facto practice, especially with the rise of Agile and more recently DevOps. So the natural solution to the challenge of mobile testing is to automate your mobile tests. Right? Well, no, not always! If your only solution is a hammer, then every problem is a nail. Automation encourages sticky checking, which can have real shortcomings in the rapidly changing landscape of new devices, browsers and operating systems. This workshop will debate the pros and cons of automated mobile testing and look at scenarios where other possible test approaches, such as exploratory testing, may prove a better solution.

                Paul Gerrard, "Does Test Management Exist? Should TMF Move with the Times?"

For the past 10 or more years, I've been having conversations with test managers who have said things like, "We've been agiled - I don't seem to have a job anymore". In the first few years of Agile, there didn't seem to be a role for testers at all. Over time, it became clear how testers fitted into Agile teams: they settled in as 'generic tester', 'technical tester' or 'programmer support tester', or they took an assurance role to manage risk, advise on testing and be the general conscience of an Agile team. So test practitioners are, over time, finding their feet.

                But the Test Manager's journey is much less certain. There's perhaps no need to manage a team at all - the team is dispersed to different teams and smaller projects. There is no need for a single focused project test plan; there are no phases to manage; there is no need for day to day management of people, progress or reporting. Now the Digital revolution is taking off, it's even worse. Teams are shifting left, adopting DevOps, Continuous Delivery, analytics, SMAC and even shift-right.

Where does Test Management fit? Does Test Management as a discipline even exist anymore?

                This session explores the role of test management and managers and how the TMF might support this role in the future.

                2017 Meetings


                January 2017

The 53rd Test Management Forum took place on Wednesday 25 January 2017 at the conference centre at Ball's Brothers, Minster Court.

                Programme

13:30

                Teas/Coffees

14:00

                Introductions

14:15

                Stevan Zivanovich, Service Delivery Director, Infuse Consulting Limited, "Quality Engineering - same old thing rebranded?"

                Ben Havill, Senior QA Analyst, SITA, "Usability testing, a manual human test"

                Ben's slides

                Danny Quilton, Owner, Co founder, Capacitas, "The challenges of delivering performance for Agile and Continuous Delivery"

15:30

                Teas/Coffees

16:00

                Steve Jefferies, Head of QA - everyLIFE Technologies, "Bringing quality and agility to startups"

                Mehmood Hasan, Enterprise Agile Coach, "Creating a culture of continuous learning"

                Daryl Searle, Ovate Limited, "Agile delivery - Why does testing get left behind?"

                Daryl's slides

17:15

                Drinks Reception

                 

                Abstracts and Bios

                Stevan Zivanovich, Service Delivery Director, Infuse Consulting Limited, "Quality Engineering - same old thing rebranded?"

                The move to Agile and DevOps in many organisations is significantly changing the testing activities. This session will be an open debate to explore what Quality Engineering is and if it is different to either Quality Assurance or Testing. The session will start with trying to define what each of these is, compare and contrast each and debate where testing should go.

                Ben Havill, Senior QA Analyst, SITA, "Usability testing, a manual human test"

As a tester I have always been fascinated by the psychology of testing, so when given an opportunity to watch how people use software and explain what they are thinking, I jumped at it.

In this talk I'll give an overview of usability testing and key aspects of the User Centred Design methodology. I'll give some examples of how a small change can make a big difference to the end user's experience, and discuss some of the challenges we face in incorporating this into our culture. In a world where automation is key to delivery, usability testing is a very hands-on, human-to-human interaction; how can we use the human element to complement our test suite and validate the big data being produced from user trends?
                 
I'm keen to hear the audience's experience of usability. Do we need it? I mean, we have the requirements - just delivering that is enough, right?

Ben got into testing from a science background, having an MSc in Chemoinformatics. He has always enjoyed hypothesising and observing, but less so the write-up. He has worked in retail and the pharmaceutical domain, and is currently working in the air transport industry; Ben still gets excited when going to airports. He is also a keen cyclist and cake eater, which help to balance each other out.

                Danny Quilton, Owner, Co founder, Capacitas, "The challenges of delivering performance for Agile and Continuous Delivery"

IT departments are under enormous pressure to deliver business change faster, resulting in the adoption of Agile and Continuous Delivery methods. At the same time, many businesses have stringent requirements regarding the performance and stability of key IT services. Indeed, in ecommerce, speed is seen as a market differentiator.

Traditional performance testing approaches and skillsets are no longer relevant in this problem domain. A common complaint is that traditional performance testing in these delivery methods slows down the rate of delivery.

Given these challenges, this session will explore the following questions:
                • What are the best methods for managing performance risk in the Agile/CD context?
                • How can performance risk be quantified?
• How can performance testing be integrated into automation frameworks? (See the sketch after this list.)
                • What tools are needed to support automated testing?
                • What skills does the Agile performance engineer need?
                • How can the performance engineer span both development and operations?
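On the automation-framework question above, a minimal sketch (my illustration, not from the session): one lightweight pattern is to express a performance budget as an ordinary automated check, so the same pipeline that runs the functional tests also enforces the budget. The budget, function and timing below are hypothetical.

    import time

    RESPONSE_BUDGET_SECONDS = 0.5  # hypothetical service-level target

    def place_order():
        time.sleep(0.1)  # stand-in for the real call under test

    def test_place_order_meets_budget():
        # A plain assertion, so any test runner (and CI) can enforce it.
        start = time.perf_counter()
        place_order()
        elapsed = time.perf_counter() - start
        assert elapsed < RESPONSE_BUDGET_SECONDS, f"{elapsed:.3f}s over budget"

    test_place_order_meets_budget()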

I am the co-founder of Capacitas, a consultancy which specialises in reducing performance risk and cost in business-critical IT systems. I have 20 years' experience in the field of performance engineering and capacity management, working with clients such as easyJet, BSkyB, RS Components and major investment banks. I am interested in the challenges of scaling IT systems for peak and delivering software fast without breaking things!

                Steve Jefferies, Head of QA - everyLIFE Technologies, "Bringing quality and agility to startups"

Startups move fast. Driven by customer need and investor demands, traditionally functionality was key, with quality a distant second. Today, though, it is a blend of quality and agility which defines a company's ability to rise above other startups and grow into a successful long-term business.

This session will look at the key challenges faced when bringing a quality assurance function into a startup company: specifically, how quality can be increased without impacting the agility of the company and its ability to ship products to customers. These challenges are broad and, when combined with the concentrated nature of a startup environment, require innovative solutions.

We will explore these challenges whilst also looking at the role of a test manager and where this role fits in a modern aspirational organisation. Building on experience gained, we will look at the benefits of diversifying the test manager role, where this pivotal role now sits, and the issues that blurring this role can bring.

                Having recently moved into the role of Head of QA at a rapidly growing startup, I have found myself thrown into uncharted waters. With a background primarily as a test automation architect, I now lead the QA function at one of the fastest-growing healthcare startups in the UK. My role today combines test manager, engineer, analyst and architect, and is both incredibly challenging and rewarding.

                Mehmood Hasan, Enterprise Agile Coach, "Creating a culture of continuous learning"

                The session focuses on the importance of creating and fostering a culture of learning, and the role Test Management can play in this endeavour. It is widely recognised that a culture of learning is key to organisational competitiveness in the current landscape. Unlike technology and tools, culture cannot be bought or acquired; it must be developed and nurtured, which takes time and effort.

                But how do you change an ingrained culture, or “reset” it? This is a hard challenge, and it becomes even harder depending on the wider operating context of the organisation. In the naturally innovative environment of start-ups or relatively small technology organisations, for example, there is an innate tendency to experiment and fail fast. At the other end of the spectrum are organisations such as regulatory firms, which are risk-averse due to the nature of their role in society; this makes creating a culture of learning and innovation that much harder. In this talk, the presenter will share his experiences of cultural change within organisations and facilitate a discussion on the role of Test Management in this context.
                Mehmood Hasan is an Enterprise Agile Coach with expertise in coaching, mentoring and training product development teams and the wider organisation, supported by extensive experience with the full software development life cycle in a variety of roles. Mehmood is a firm believer that agile and lean approaches deliver a win-win-win situation: more effective delivery of value for organisations, better products for customers and a more enjoyable way of working for team members.

                Daryl Searle, Ovate Limited, "Agile delivery - Why does testing get left behind?"

                In this session, we take a look at the expectations of testing within an agile delivery. We've heard all about the need for organisations to change mindsets and commit to supporting a new promised land of faster time to market, less documentation and increased cooperation, but does that really include the tester? I mean, really include?

                Does this new world allow the test team to avoid highly constrained exit gates? Can the test team avoid the creation of detailed strategy and plan documents? And does a project team really get to 'Done' at the end of a sprint?

                The session will highlight some real-life scenarios where testing has been the forgotten son of the agile transformation, but will also provide some solace with examples of where change has been allowed to occur, and where the role of the tester has become more of a 'puppet master' and an engine room for setting business expectations.

                Presented by Daryl Searle, Ovate Limited. Daryl has spent the majority of his 17-year career working in and heading up testing teams, as well as UX and development, both for end users and within a consultancy environment. Daryl has been involved in various forms of agile delivery since 2008, and has coached many transformation activities, covering test teams as well as organisational change.


                Mike Jarred, Programme Chair
                Please get in touch using one of the following methods: the TMF Contact form or http://www.linkedin.com/in/mikejarred