James Thomas' blog
"Do you have generic strategies for the prioritisation of tasks?" When Iuliana Silvăşan asked the question, I realised that I probably did, but I'd never tried to enumerate them. So we went to a whiteboard for 20 minutes, and we talked and sketched, and then afterwards I wrote brief notes, and then we reviewed them, and now here they are...
We think these factors are reasonably broadly applicable to the prioritisation of tasks:
Yes, they are generic and, yes, they will overlap. That's life.
The last three perhaps merit a little additional explanation. Time is a compound factor and covers things like resource availability, dependency planning, and scheduling problems, which could be split out if that helps you. Goals cover things like experience you want to get, skills you want to practise, or people you want to work with. This might not be a primary factor, but could help you to choose between otherwise similar priorities. Commitments are things already on the schedule with some level of expectation that they'll be delivered. That thing you promised to Bill last week is a commitment.
We think this method is a handle that can be cranked to generate task priorities:
- Put each of the factors as columns in a table.
- If you know some are not relevant, don't use them.
- If you have context-specific factors, add them.
- Put the tasks to be prioritised as rows.
- Use data where possible, and gut feel where not, to score each of the factors for each of the tasks.
- Unless there's a good reason not to, prefer simple numerical scoring (e.g. 1, 2, 3 for small, medium, large).
- Try to have a consistent scoring scheme, e.g. low score for something more desirable/easier/better to do sooner.
- Don't agonise over the scores.
- When you're done, add a final column which combines the scores (e.g. simple addition as a starting point).
- Sort your table by the scores.
- Your scores are your prioritisation.
- The prioritisation you have created probably doesn't fit your intuition.
- If so, wonder why.
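As a minimal sketch of cranking that handle (the task and factor names here are illustrative assumptions, not from the original):

```python
# A sketch of the scoring table: each task gets a simple 1/2/3 score
# (small/medium/large) per factor, with lower meaning more
# desirable/easier/better to do sooner.
tasks = {
    "fix flaky test":  {"urgency": 1, "effort": 1, "time": 2},
    "write report":    {"urgency": 2, "effort": 2, "time": 1},
    "refactor module": {"urgency": 3, "effort": 3, "time": 3},
}

# Combine the scores by simple addition, then sort ascending:
# the lowest total is the highest priority.
prioritised = sorted(tasks.items(), key=lambda kv: sum(kv[1].values()))
```

A spreadsheet does the same job, of course; the point is only that the combination and the sort are mechanical once the scores exist.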
We think these are some possible reasons why:
- You weren't right in your scoring. The table can help you to see this. Simply review the numbers. Do any look wrong now you have them all?
- You weren't consistent in your scoring. The table can help you to see this too. Sort by each factor in turn.
- You need to weight factors in the overall score. Perhaps the downside of a delay is really big so the urgency factor needs to dominate the overall score.
- You have factors that correlate. This is essentially also a weighting issue, and you can always remove a column if you think it is adding no particular value to the analysis.
- You have missed an important factor. The order you have feels wrong. What factor should be here but isn't?
- Your intuition is wrong. Perhaps you have uncovered a bias? Well done!
Once you've got an idea why your intuition and the prioritisation you have don't match, update the table and rescore.
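To illustrate the weighting idea (the tasks, factors, and weights here are illustrative assumptions), multiplying each factor's score by a weight before adding lets one factor dominate the total:

```python
# Illustrative weighted scoring: urgency is weighted heavily because
# the downside of a delay is assumed to be large.
weights = {"urgency": 3, "effort": 1, "time": 1}

tasks = {
    "fix flaky test": {"urgency": 1, "effort": 3, "time": 2},
    "write report":   {"urgency": 3, "effort": 1, "time": 1},
}

def weighted_total(scores):
    # Multiply each factor's score by its weight, then sum.
    return sum(weights[f] * s for f, s in scores.items())

prioritised = sorted(tasks.items(), key=lambda kv: weighted_total(kv[1]))
```

With these numbers, simple addition would put "write report" first (5 vs 6), but once urgency dominates the order flips, which is the kind of mismatch with intuition the rescoring step is for.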
We think a couple more factors are relevant, but in a different way to the others:
Politics says that there may be reasons outside of reason which determine the work that gets done, who does it, and when. If you suspect that, then perhaps you should do something else ahead of prioritising these tasks.
Categories: Software Testing
George Dinwiddie recently delivered a webinar, Distilling the Essence, on the topic of crafting good examples of acceptance criteria. From the blurb:
When creating our scenarios, we want to fully describe the desired functionality, but not over-describe it ... Which details should we leave visible, and which should we hide? ... [We will see how we can] distil our scenarios down to their essence and remove distracting incidental details.
I loved it and, naturally, wondered whether I could distil the essence of it. Here goes:
- Not just examples, but essential examples.
- An essential example promotes shared understanding, testability, and clarity of intent.
- Remove incidental details; they obscure the important.
- Highlight essential details; they have the most communicative value.
- Essential details don't change with user interface, workflow, or implementation technology.
- To help: name scenarios, abstract specifics, note explicit rules, conditions, and boundaries.
- Bottom-up is OK; you can iterate from the specific to the essence.
- Don't extract too much; totally generic is probably worse than too specific.
If that seems short, the webinar itself is admirably only about 15 minutes long, and that's mostly George giving worked examples of the approach.