17 September 2016

Risks of Testability

  What potential problems should you be willing to risk in order to make your application more testable? A testable app is one that allows you to question the application in any way required, and to get the answers to those questions quickly.
  A story.
  We were creating an iPad application and we decided to run some tests through the application's user interface. Perhaps the main issue with UI test runners is speed. With browser tests we are usually able to shortcut to the bit of the application we are interested in, generally by using a URL. We couldn't find an equivalent way of doing this in the app without introducing test-specific code (I added configuration flags that allowed certain views, like log in, to be skipped). One of the developers didn't like this.
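  To make it concrete, here is a minimal sketch of the kind of flag I mean, written in Ruby purely for illustration (the real app was a native iPad app, and all the names here are hypothetical):

```ruby
# Hypothetical test-only shortcut: the app reads a flag at launch and,
# when it is set, skips straight past the screens a test is not
# interested in (here, the log in screen).
SKIP_LOGIN = ENV["SKIP_LOGIN"] == "1"

def first_screen
  SKIP_LOGIN ? :landing_screen : :login_screen
end

puts first_screen # :login_screen normally, :landing_screen under test
```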
  We discussed it and agreed to remove it, on the proviso that it would be looked into further: if a better way could be found we would use that; otherwise, we would revert (agreeing to this was my first mistake).
  Shockingly, it never got revisited. Every time I brought it up it was "We'll look into it as soon as the work on this feature is complete", and I let it slide (my second mistake). I couldn't get them to prioritise testability over features because, despite what is said at the start of a project about the 'ilities', when all the PO sees is a list of features required to complete the MVP, those 'ilities' are never prioritised (which is a rant for another day).
  So we ended up stuck with a test suite in which every test took at least six seconds just to reach the part of the app it was interested in (log in uses a six-digit PIN, and the UI Automation framework could only 'press' one button per second). Every test we added put at least six seconds onto the run time of the suite; a hundred tests would mean ten minutes spent doing nothing but entering PINs. It led to reluctance to add tests, because they wouldn't provide fast enough feedback. It led to the tests not being run locally, because of the time it would take.
  Now here is the question. Is the risk of a user being able to get the app into an unusable state because of our testability code greater than the risk of missing issues because it is too painful to test?

09 September 2016

Repo Reviews

It is very easy to get git repositories into horrible states: summaries that are too long, non-descriptive messages, or the same message repeated for several commits in a row. Why, then, can we not fix untidy commits as part of code review? This came up at work recently, and the main objection was that the branch had already been pushed to the remote (which you have to do to raise the merge request) and so we couldn't change its history, because someone may have used it (in this case, the someone was probably Jenkins).

I have two arguments against that. Firstly, raising a merge request implies there is no more work to be done on that branch; if changing the history of the remote branch would invalidate someone else's work, the question becomes why the merge request was raised in the first place. Secondly, if you communicate that the remote history is going to change, anyone who might still be using the branch for whatever reason can sort it out themselves. Jenkins can be made to check out a fresh copy of the repo each time. Yes, it takes longer, but that is a small price to pay for a clean log.
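
The tidy-up itself is routine git. A minimal sketch, assuming the branch forked from master on a remote named origin:

```
git rebase -i origin/master    # reword, squash or reorder the untidy commits
git push --force-with-lease    # safer than --force: aborts if someone pushed meanwhile
```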

I use http://chris.beams.io/posts/git-commit/ as my guide to creating good commit messages. Obviously, you'll have your own practices, but why should you not be able to refuse merges when people have not followed them? You can use merge request tools to educate others in how to write good commit messages; they will learn to apply them, and we may end up with readable logs.
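
As an illustration of the shape that guide recommends (a short imperative subject of fifty characters or fewer, a blank line, then a body wrapped at seventy-two characters explaining what and why), here is a made-up example message:

```
Clone a fresh repo for every Jenkins build

Rewriting remote branch history before merge can leave stale
workspaces behind; a clean checkout avoids that at the cost of a
slightly longer build.
```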

24 July 2016

A Hole in the Net

  I happen to think unit testing is the bee's knees and is absolutely required in the creation of readable, maintainable, working code. However, things can go wrong. Here is something that bit me in the arse not too long ago.

  The code itself was trying to create a hash with a key for every hour on a 24-hour clock (the project I was using this for is here, if you'd like to know the context). This means I needed keys from 0 up to 23. I know a number of ways of doing this in Ruby, but I settled on using the `times` method. My first mistake: I used the wrong number of times in my unit test. My second, and frankly worse, mistake: I used this method in my unit test and then used the same code in my implementation. So my unit test was happily passing, because my production code was written to it, but the code wasn't actually doing what I had intended.
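
  To make the trap concrete, here is a minimal reconstruction (the numbers are illustrative, not the project's actual code):

```ruby
# The buggy implementation: 23.times yields 0..22, so hour 23 is missing.
def hours_hash
  hash = {}
  23.times { |hour| hash[hour] = [] }
  hash
end

# The unit test's expectation, built from the very same expression,
# which means it can never disagree with the implementation.
expected = {}
23.times { |hour| expected[hour] = [] }

puts expected == hours_hash # => true, even though both are wrong
```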

  Fortunately, as I hope we are all aware, unit testing is but one of the many nets we use to catch bugs. I found this one with my exploratory testing net.

P.S. Keen-eyed readers will notice that the unit test code and the implementation code in the project are still the same, and will wonder whether I've truly learnt anything from this experience. The reason is that the destination may be the same, but the route was different: having used a different method to confirm the implementation was correct, I refactored the unit test to be more readable for me.

23 July 2016

Setup Assertions

There is a thing in automated tests, be they unit, integration or UI tests, that annoys me every time I see it. That thing is the use of assertions in the Arrange/Given sections of your tests. When you do this, all I can think is that you do not have enough confidence that your software will actually do what it is supposed to do. The response I have always had from people when I question this behaviour is, "Well, I have to make sure it is in the correct state". And my response is always the same: "There is a way of doing that. It is called testing." If your test relies on a particular function of your system, why isn't that function tested well enough that you can rely on it working? Stop putting assertions in your tests where they do not belong!
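
A minimal sketch of the difference, using Ruby's Minitest and a made-up Cart class (everything here is hypothetical, purely for illustration):

```ruby
require "minitest/autorun"

# Made-up class so the example runs on its own.
class Cart
  attr_reader :items

  def initialize
    @items = []
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum
  end
end

class CartTest < Minitest::Test
  # The anti-pattern: an assertion inside the Arrange/Given step.
  def test_total_with_setup_assertion
    cart = Cart.new
    cart.add(5)
    assert_equal 1, cart.items.size # belongs in a test of Cart#add, not here
    assert_equal 5, cart.total
  end

  # Cart#add has its own test, so here we simply rely on it working.
  def test_total
    cart = Cart.new
    cart.add(5)
    assert_equal 5, cart.total
  end
end
```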


Why I'm a Context-Driven Tester...

  I was drawn to the context-driven testing community because, when I started in testing, I was constantly questioning the way things were done, why they were done that way, and how we could do things better. I found Rapid Software Testing to align very much with the ways in which I was finding myself thinking. The bloggers I discovered (Matt Heusser and Eric Jacobson deserve special mention here) were giving me great new ideas and new ways to talk about how I tested and why. From all these places and more, whether the people identified themselves as context-driven testers or not, I kept hearing this message:
  "When testing, the areas you target, the tools you use, the skills you bring to bear, are all dependent upon the context* in which you are working."

  It is a message that I fully endorse, and it is the foundation of all my testing work and learning. I learn new skills and new tools. I learn about how others have applied them. I learn in what contexts they have worked well, and in what contexts they did not. This is the closest I come to a dogmatic belief.

Check your privilege


  I have heard a lot about privilege in recent times, so I think it is worth stating mine. This is my background and as such forms part of the context in which I work. 
  I am, by any reasonable measure, very privileged. I'm a straight white male, from a middle-class family and was born in one of the richest countries in the world. I've never struggled with poverty, I've never been the subject of racist or sexist abuse, and I'm old enough not to be dismissed as inexperienced, but not too old to be considered past it. This is important as it plays heavily into my experiences of the CDT community.

My CDT community experience


 I was reading the blogs and the books for a good couple of years before I met anyone face-to-face who was part of the CDT community. I drove for three hours, from Bristol to Nottingham, for the first Software Testing Club meetup back in 2011, because James Bach was going to be there. I knew of his reputation, both the good and the bad, but here was the chance to hear one of the guiding voices of the community I closely identified with, live and in person. And there I met many people I still look forward to seeing and talking testing with today. I belonged here.
  More meetups followed, through which I found a company that saw testing the way I saw it, and I got a job there soon after. My first trip to TestBash was in 2013; it was such a great experience for me that I have been back every year since (as an aside, I am very glad to see the TestBash brand growing). I'm a shy person, and for the first couple of years I was hesitant to talk to anyone. But everyone I did talk to was open and welcoming, which made the experience the great memory it now is. This is the person I try to be when at conferences. I want people to see what I saw when I first found this community.

The challenge


  So you can imagine I don't react well to seeing my community attacked. But I don't think it is being attacked; I think it is being challenged. There are perceptions that those outside the community have of it, and those are being brought to our attention. One of the tenets of my CDT community is to accept challenges, not to dismiss them. This acceptance that anything can be challenged is important to me; it is another of the reasons I align myself with CDT. I'm happy to have my ideas challenged and will put forward my strongest arguments for them. I try very hard not to hold on to ideas once they have been shown to be weak. If I haven't changed my mind on something recently, that shows I haven't learnt anything recently and am becoming closed-minded. My community promotes forward momentum, not stagnation.

The response


  All that is a great set of high-minded ideals, but it is just so much talk. I'm aware that very few of my ideas are out in public, so it is easy to say I welcome challenges when I have nothing to be challenged on. What am I actually going to do about it? There are perceptions that I think I personally can help to change:
  1. CDT is anti-automation - I'm evolving into a tester with more and more technical tools in his toolkit. I plan to blog about the tools and techniques I am using. I'm very much into showing how to write good unit tests, so expect to see much more content from me on this. I also pledge not to write a post about how test automation can be misused, but I will point out where the things I write about are appropriate and where they are not. I have a life goal to present a talk about technical testing at both a testing and a non-testing conference.
  2. CDT is a self-congratulatory echo chamber - I plan to get out more. I've got an ever-expanding list of people to follow on Twitter who do not identify as CDT. I will seek out blogs and other reading from other disciplines and communities. I have already been to the first of what I hope will be many non-testing conferences (Brighton Ruby) and will continue to participate in non-testing meetups where I can.
  3. CDT is a cult - The perception is that we are all moths to the flame of James Marcus Bach, and that anyone who does not toe the party line is cast out (being blocked on Twitter being the official ceremony). That anyone who is introduced to the community must not dare to question, because although we say we like to be challenged, we only like to be challenged by certain people about certain things. And that anyone who hasn't lived up to our ideals must be persecuted and hounded forevermore. This, more than any of the others, is the most dangerous perception. It is what prevents new and diverse voices from being heard. It is a cause of great animosity towards the community from those outside it. And it will be the hardest one to change. For my part, I will do it by adhering to the following ideals:
  • Assume any challenge to my work or my ideas is made in good faith: that the person asking the questions is trying to learn, or trying to help me learn, and is not just trolling.
  • Never dismiss somebody's experience. It may be that their experience is not the norm, but that doesn't make it invalid.
  • Call out behaviour that reinforces the cult perception.

 

A dark time for CDT?


  I am not keeping statistics, but I do believe the CDT community is growing, or at least is still attracting new people who identify as context-driven, so I don't think it is all bad. But, as has been rightly pointed out, I don't get to hear from those who were made to feel uncomfortable or rejected, because I'm still inside the bubble. Maybe there were hundreds who found us but were frightened off? I can't know. All I can do is try to show why this community means what it does to me and hope some of that rubs off.
 
  *Context includes your own skills, those of your team, and your organisation's structure, among many other considerations.

15 June 2016

An Exercise from Matt Heusser

The Exercise

  Matt Heusser asked on Twitter if anyone was up for a "test thinking exercise", and I accepted. He asked me to read this Softengi white paper on Testing in Scrum and asked, "What do you think of their advice? Is it GOOD?" Good in this case is a bit subjective, and there is nothing saying in what respect it is supposed to be good (e.g. their approach to testing is good, the spelling, punctuation and grammar are good, etc.). So my approach was to read it and pick out where my experience differs. The paper is presented as an experience report, not a One True Way, and should be taken as such. This means I will not be answering the first question, since I don't see this paper as giving advice as such.

  One last point to make before we start: I know nothing about the company, the processes, or the people who work at Softengi. All my impressions, thoughts and assumptions come from my own experiences and what I've read within this paper.

Thoughts on the paper 

  The first thing that stood out to me was the first line:
"It is a commonly held belief that testing is useless"
Commonly held by whom? It isn't a belief I have commonly encountered in the places I have worked, whether among developers, project managers or senior managers. It is a fairly sweeping statement to make without a citation.

  On a similar note, there was the line:
"a short-cut today raises the probability of a low-quality, error-filled solution tomorrow"
which got a 'hell yeah' from me. But then I realised I have the same problem I've just criticised the writer for: I have no independent study I can point to that supports that statement. The previous line goes against my experience and therefore grates; this one bolsters my own viewpoint. If I weren't doing this exercise, I might not have noticed I was falling into the same trap.

  Whilst we are on the subject of lines that stand out, in reference to demoing to the customer:
"In the classic Scrum approach, the QA team does it." In the classic approach I know, the team does it.

  There are a couple of places where I took exception to the wording used within the paper. For example, 'software development' is used in a way that doesn't include testing. That is not development to my mind; that is just programming. Similarly, it talks about testing 'finished' parts of the software. I'm sorry, but in my mind, if it hasn't been tested, it is not finished.

  It exemplifies just how difficult it is to talk about estimation without referring to time. It has the line "Our team measures tasks not in hours, but in Story Points", then goes on to say how many hours particular task sizes correspond to. Then comes "the team members provide the customer with real timings for completing each task", which is basically a time estimate.

  Before I get onto my biggest issue with the paper, let me reiterate that this is an experience report, so it is a description of what Softengi are doing, and presumably it is working for them. However, just because it works for them does not mean I would recommend this approach to others, and the reasons why are summarised by Pic 1.


  Everything about this screams that the process is not what I would describe as an agile approach, but is in fact a mini-waterfall development process. It is telling that they state "at least 40% of sprint hours should be dedicated to functional testing and stabilization". Note this is specifically at the end of the sprint (bug fixing in the last week of the sprint: "~40%, if we have a three weeks[sic] Sprint"). And there is a hardening sprint, which to me is a strong indicator that your in-sprint testing is lacking something, and is again more mini-waterfall. I get the impression they took how they tested and forced that into Scrum, rather than adapting how they tested to best fit Scrum. I'm not saying that is what happened, but it is the impression I get from this paper.

Conclusion

  So is it good? If someone were telling me how their company was transitioning to Scrum, and they were having trouble testing within that framework, I would not point them to this paper. I think that sums up my view.