Alan Richardson's Blog


Use your malevolent powers for good

Mon, 17/07/2017 - 23:22
TLDR; I can fool myself into comfortable complacency about code when programming. I can use testing to banish this false glamor.

“Why might we be villainous? First, because we can be… that’s a big deal…”


Thus spoke Jordan Peterson in this Maps of Meaning Lecture:

youtu.be/I8Xc2_FtpHI?t=1h54m16s

One of the reasons for adopting a testing role is to make sure we use this capability in a positive way on Software Development projects.

The act of writing code can fool us into thinking that we have explored the functionality of the system. We spend time with the code, we see it running, either by debugging or through Unit Tests, and we become so familiar and comfortable with it that we might then believe we have explored the code, its surrounding environment (The System), and its interactions within that system. We view it as explored territory.

I fooled myself recently when developing an HTTP based REST application. I relied extensively on coding, reviews and Unit Testing, and even my automated integration tests misled me into an overly comfortable sense of confidence, because of assumptions that I had encoded into my test code.
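
As a hypothetical illustration (not my actual project code) of how a test can encode the same assumption as the code it checks:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // production logic inlined for the sketch: 10% off,
    // silently assuming integer division is the right rounding rule
    static int discountedPrice(int price) {
        return price - (price / 10);
    }

    @Test
    void tenPercentOff() {
        // the expected value was calculated with the same integer
        // division assumption that the code uses, so this test can
        // pass forever without telling us if the rounding is right
        assertEquals(90, discountedPrice(100));

        // the exploratory question the test never asks:
        // is discountedPrice(5) == 5 really the business rule?
    }
}

The test passes, the code looks covered, and the shared assumption stays invisible.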

You can see me explain the above in this YouTube video

Very often our exploration has not been detailed enough because we have been busy building the code: creating the foundations, clearing the area around it so that it can be used in The System. We have generally explored it locally, and over a short period of time, given the length of time this code will function in The System.

Our familiarity can fool us into believing we have explored it.

This is a risk.

We can mitigate some concerns by conducting the type of exploration that we believe users will perform, or that the functionality has been coded to handle. This might well mean following paths that we have already walked, i.e. we performed that test during code creation (even if no Unit Test exists to objectively demonstrate that).

Following paths that we’ve walked before can count as exploring territory if we are observing the territory in more depth or traversing it in a different order. It might not offer up as much information as the unexplored territory, but it might still be useful. But that is still a comfortable form of exploration.

This still leaves risk.

Risk we might try to mitigate with a dash of malevolence, but it has to be aimed and focused, otherwise people will brush it off or disregard it as unnecessary.

We don’t always need to harness malevolence; sometimes simple exploration will do the job (as my video above demonstrates). But malevolence is an easy way for me to conceptualise pushing the system hard, exploring its edges, and observing it in ways that support in-depth examination rather than superficial observation.

The more that we learn to aim our malevolence effectively, to create an objective model of explored territory, the better our testing will become.

“We can Aim our Malevolence, and we’re really good at it” - Jordan Peterson, at 1:55:40

Categories: Agile, Software Testing

Using Browser Dev tools to investigate and bypass GUI error reporting bugs

Thu, 13/07/2017 - 12:18
TLDR; Learning to use browser dev tools can help you investigate defects that have no visible output on the Web GUI, and they can help you bypass problems in the real world.

One common class of bug that I find a lot in web applications is errors that do not get reported to the user.

The user knows that something has gone wrong because the front end hasn’t responded the way they wanted, but they have no information that helps them understand:
  • was it them?
  • was it the network?
  • has there been a validation error?
  • etc.
I write this now because I noticed a problem in Instagram this morning that fits this pathology, and I used my Technical Web Testing experience to investigate it.
  • With Instagram it is possible to add comments on a post, and the comments can contain hashtags.
  • Instagram has a limit on how many hashtags a post can have, both in the post itself and in the comments.
I think the limit is 30 tags in the post and 30 in the comments, leading to 60 in total.

But if you try to add a comment which contains hashtags, and the hashtags exceed the total number allowed, then:
  • Instagram sits there looking at you,
  • the comment is not accepted,
  • you, the user, wonder why
To find out why, we have to open up the Network tab in the browser developer tools, where we see that the API interaction, an add/web/comments POST request, receives a 400 response:

{"message": "Too many tags.", "status": "fail"}

How many tags are too many? I can’t tell from the error message. But at least I know from the dev tools what Instagram reported as the problem.

How many other users would know this?
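
Knowing the request shape from the Network tab also means you can replay it outside the browser. A minimal sketch using Java 11’s built-in HttpClient; the endpoint path, form body and cookie are assumptions you would copy from your own dev tools session:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReplayAddComment {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // MEDIA_ID, the form body and the auth headers are placeholders -
        // copy the real values from the request shown in the dev tools
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.instagram.com/web/comments/MEDIA_ID/add/"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Cookie", "sessionid=YOUR_SESSION_ID")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "comment_text=%23attitude%20%23motivationalquotes"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // a 400 with {"message": "Too many tags.", "status": "fail"}
        // confirms the error the GUI never showed us
        System.out.println(response.statusCode() + " " + response.body());
    }
}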
Note: this next section was written in real time as I investigated the bug, hence the change in writing tense.

I pursued this a little further.

My post instagram.com/p/BWe0wa0gaiI has 29 comment hashtags.

Let’s push the limits:

I’m going to try the following data set {“#attitude”, “#motivationalquotes”, “#inspiredquote”, “#motivationalquote”, “#wordsofwisdom” }.

Why yes, my middle names are “Captain Self Help Guru Motivational Life Coach” why do you ask?
  • add them all at the same time through the web GUI
    • ‘#attitude #motivationalquotes #inspiredquote #motivationalquote #wordsofwisdom’
      • 400, too many tags
    • ‘#attitude #motivationalquotes #inspiredquote #motivationalquote’
      • 400, too many tags
    • ‘#attitude #motivationalquotes #inspiredquote’
      • 400, too many tags
    • ‘#attitude #motivationalquotes’
      • 200
{"id": "17888232877003952", ...blah blah blah...
"text": "#attitude #motivationalquotes",
"created_time": 1499939442, "status": "ok"}

I added the “…blah blah blah…” just in case you were worried about Instagram’s sanity

And lo, 31 tags.

According to help.instagram.com/161863397286564:

“You can’t include more than 30 hashtags in a single comment”

So that would be an off-by-one error as well as a GUI reporting bug.
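
The manual narrowing above is essentially a binary search over the number of tags. If I wanted to pin the boundary down objectively, I could script it; a minimal sketch, where isAccepted is a hypothetical stand-in for posting a comment with that many tags and checking for a 200:

import java.util.function.IntPredicate;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class TagBoundaryFinder {

    // build a comment containing n distinct hashtags, e.g. "#tag1 #tag2 ..."
    static String commentWithTags(int n) {
        return IntStream.rangeClosed(1, n)
                .mapToObj(i -> "#tag" + i)
                .collect(Collectors.joining(" "));
    }

    // binary search for the largest accepted tag count in [lo, hi];
    // in real use, isAccepted would post commentWithTags(n) via the API
    // and return true on a 200 response
    static int largestAccepted(int lo, int hi, IntPredicate isAccepted) {
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (isAccepted.test(mid)) {
                lo = mid;      // mid tags were accepted, limit is mid or higher
            } else {
                hi = mid - 1;  // mid tags were rejected, limit is below mid
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        // hypothetical server behaviour: rejects once the count exceeds 30
        IntPredicate fakeServer = n -> n <= 30;
        System.out.println(largestAccepted(0, 60, fakeServer)); // prints 30
    }
}

A real scripted check would also have to account for the tags already present in the post’s other comments, since the server counts across all of them.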

Now I’m off to try the mobile app.

At least I see an error, although “Couldn’t post. Tap to retry.” doesn’t seem accurate.

But, clearly, I “tap to retry”

Interestingly, I have to click the red error message. But I have fingers rather than a super-accurate stylus, so I first:
  • click the post entry which selects the entry
  • click the hashtag, which takes me to a hashtag view
  • then finally manage to click the red text, which repeats the error cycle
I didn’t hook up a proxy to the phone so I don’t know if the same message is coming back from the server to the phone.

But I’d count that as a usability bug since the error is misleading.

If I was a normal user, I’d contact support at this point, or give up.

Because I test things, I write a blog post and then contact support, sending them a link to this post.

I’m beginning to think that boundaries might be a pathology in Instagram, though. On the mobile app, when you create a long description, there comes a point at which you can continue to type but your letters are not visible in the editor. I assume they are there, because my auto-complete keeps matching the words, and when I delete letters I have to press delete a lot while auto-complete suggests I am deleting, even though none of it is visible on screen.

Again, I haven’t fed this through a proxy, so I don’t know if the mobile app truncates the description (because it isn’t all accepted and a description limit is enforced), or if the truncation happens on the server side. Again, I’d count this as a usability bug because it impacts my use of the editor.

Once again, knowledge of the dev tools helps identify defects, and supports you in your daily life.

Categories: Agile, Software Testing

Are you stable, or complacent? Is it time to experiment yet?

Mon, 10/07/2017 - 10:51
TLDR; If you are not sure whether you should experiment with new techniques, find ways to monitor the domain first; you might be able to learn from someone else’s experience.

When things are stable, and they are going well, a hard question to answer is “Is it time to experiment?”.

I have default tools and techniques which I know I can use to quickly achieve good results. As an example, when I want to start a new web server project for myself, or as a training exercise, I will use Java and the Spark Framework. I know how to get results with the framework, I know Java, and I can achieve results quickly. Have I found my peak and ideal solution? Or am I complacent?
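
For context, this is the kind of comfortable default I mean; a minimal Spark route, assuming the com.sparkjava spark-core dependency is on the classpath:

import static spark.Spark.get;
import static spark.Spark.port;

public class HelloServer {
    public static void main(String[] args) {
        port(4567); // Spark's default port, set explicitly here
        get("/hello", (request, response) -> "Hello World");
    }
}

A few lines and I have a running server; that speed is exactly what makes it hard to tell stability from complacency.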

Perhaps there is a better solution? Perhaps I should experiment?

There are some obvious points at which I will experiment:
  • when my chosen approach or solution has obvious limitations for the task I am about to undertake
  • if I’m not sure that my current solution will handle the next task
With known limitations I’m forced to experiment. I have no choice.

If I’m not sure, then the first experiment I’ll engage in is with my current solution, and if it works then I probably won’t go looking for a new solution.

What about the middle ground, when things are working fine, but there might be a better way that I am as yet unaware of? Should I go hunt down other solutions and experiment with them?

I used to…
  • I used to monitor for new tools
  • I used to try out new tools all the time to see if they were better than those I was using.
But the problem was that I would then:
  • spend a lot of time monitoring tools
  • spend a lot of time superficially evaluating tools,
  • spend time switching between tools
I didn’t measure the time all this was taking. I didn’t consider the Opportunity Cost of this experimentation - what I could have done instead. I didn’t evaluate the benefit of having conducted the experiment, i.e. having found and switched to another tool, how ‘much’ better off am I? How much faster is this task to complete?

In short, I didn’t:
  • measure the time experimentation takes,
  • consider the Opportunity Cost of experimentation,
  • evaluate the benefit of having conducted the experiment.
I decided to cut down on the tool monitoring and evaluating, and instead to go deep with the tools I had, and to find a way to monitor not the tools themselves, but other people’s experiences of using tools.

This means that instead of monitoring new lists of tools I would find people in the domain that I was interested in, and monitor their experiences of using tools and techniques.

So if I was interested in new lightweight Java HTTP Servers I would:
  • find blog aggregators for web development, HTTP servers, Java libraries and subscribe to those,
  • create a Google Alert for search terms such as “lightweight http server java”,
  • subscribe to specific blogs for products, tools and techniques that I already know about.
I changed the monitoring approach and then tried to find ways of learning from others’ experience rather than direct experience (which is generally more costly in time, although clearly you learn more when you engage in it).

You can also use this approach with your current work, by publishing what you do and what you think. Then you can receive additional comments drawn from other people’s experience.

Prior to experimenting:
  • set an aim for what you are trying to learn,
  • decide on the fixed parameters in your experiment, e.g. what other tools you will use, what design approach, etc.,
  • search for any prior work that has tried to achieve the same aims with the same parameters (this might mean you don’t have to conduct the experiment at all),
  • set a time deadline,
  • keep your experiment focused on what you are trying to learn,
  • if you spot opportunities for new investigations then note them down, but don’t change the terms of the experiment yet; consider them objectively later.
“Is it time to experiment?” might mean - “How can I better monitor this domain?”.

You can use those nagging doubts to expand your signal monitoring and ongoing learning and education.
Categories: Agile, Software Testing