There is a thing in automated tests, be it unit, integration or UI tests, that annoys me every time I see it. That thing is the use of assertions in your Act/Given sections of your tests. When you do this, all I can think is that you do not have enough confidence that your software will actually do what it is supposed to do. The response I have always had from people when I question this behaviour is, "Well, I have to make sure it is in the correct state". And my response is always the same, "There is a way of doing that. It is called testing." If your test is relying on a particular function of your system, why isn't that function tested well enough that you can rely on it working? Stop putting assertions in your tests where they do not belong!
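To make it concrete, here is roughly the shape of what I mean, in NUnit-flavoured C#. The names (CustomerBuilder, PricingService, Tier and so on) are made up purely for illustration:

    using NUnit.Framework;

    [TestFixture]
    public class PricingTests
    {
        [Test]
        public void Gold_customers_get_a_ten_percent_discount()
        {
            // Given / Arrange
            var customer = CustomerBuilder.Gold().Build();

            // This is the smell: re-testing the setup inside another test.
            // If you cannot trust the builder here, the builder is what needs tests.
            Assert.That(customer.Tier, Is.EqualTo(Tier.Gold));

            // When / Act
            var price = new PricingService().PriceFor(customer, 100m);

            // Then / Assert - the only place assertions belong.
            Assert.That(price, Is.EqualTo(90m));
        }
    }

The test should simply trust its setup and keep its assertions in the final block.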
Hello Andrew,
I am not entirely sure I can follow you, but this may be due to a misconception on my side.
Hence I have a question: What exactly do you mean by assertions in your given stage? Can you give me (pseudo) code examples or anecdotes to shape a clearer picture for me?
I am asking since I do see value in assertions while setting up your checks on some occasions; for example, we have some end to end checks in our mobile app. The general workflow here looks something like this:
- create the needed test data via several web service calls to the backend
- bundle the created data for the current use case
- fire up the app and perform your actions and checks on the set up data
Since I am in an end to end environment and my team can only really control the app and not the backend, it can actually happen that the backend throws me an error during my data creation steps. Hence I have some assertions on the responses from the backend where I know from the past that we had some trouble. And in case I have trouble again, I want my check to explicitly tell me that something went wrong directly during data generation, and not run halfway through its scenario and then fail for some obscure reason which I need to investigate to understand what happened. Using assertions, the check fails earlier and I can directly spot why.
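In rough C# terms my data creation step looks something like this; the endpoint, payload and names are made up for illustration:

    using System.Net;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using NUnit.Framework;

    public static class TestDataSetup
    {
        public static async Task<string> CreateTestCustomerAsync(HttpClient backend)
        {
            var payload = new StringContent("{\"name\":\"e2e-customer\"}", Encoding.UTF8, "application/json");
            HttpResponseMessage response = await backend.PostAsync("/api/customers", payload);

            // Fail fast with a clear message instead of failing obscurely
            // halfway through the mobile scenario later on.
            Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.Created),
                "Backend refused to create test data - check the latest backend deployment.");

            return await response.Content.ReadAsStringAsync();
        }
    }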
I could of course confer with the backend tester or look into the backend check results to have an overview of current changes and deployments and see if they affect me. But I have no problem with my checking code doing this for me with a handful of assertions.
Then again my test data creation is even before my Given stage. ;-)
Moving away from my specific case, I can still imagine that in end to end automation some assertions as preconditions are very useful when scripting checks, since during one checking scenario you might need several systems available, and end to end testing can be quite slow. So asserting that all needed systems are available for the check, and failing if not, might be better than running a test for several minutes and then failing when trying to use a system which is currently down.
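As a sketch, such a precondition check could look something like this in NUnit; the health URLs are placeholders:

    using System;
    using System.Net.Http;
    using NUnit.Framework;

    [SetUpFixture]
    public class EnvironmentPreconditions
    {
        [OneTimeSetUp]
        public void AllNeededSystemsAreReachable()
        {
            using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

            foreach (var url in new[] { "https://backend.example/health", "https://auth.example/health" })
            {
                var response = client.GetAsync(url).GetAwaiter().GetResult();

                // Fail the run up front rather than several minutes in.
                Assert.That(response.IsSuccessStatusCode, Is.True,
                    $"Dependency {url} is not available - aborting the end to end run early.");
            }
        }
    }

Doing this once per run also keeps the individual checks free of repeated availability assertions.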
Maybe I am talking about something completely different than you, therefore I asked for some examples. 😃
Best regards,
Marcel
End to end tests are certainly the most likely culprits for this kind of behaviour. So let us try an example.
We have a system that consists of a database, a number of web services and a UI.
Firstly, all this needs to be deployed and running before you can run your tests. This is outside of the tests themselves. Your tests are not responsible for setting up this environment (you may deploy from scratch beforehand, or have an always-on environment, for example), so anything you need to do to check that it is up and running is not what I am talking about.
My issue is this: you start a test that, as part of the setup, uses one of your REST services to populate some data. If you have got to this point and you feel the need to assert that the data got saved to the DB correctly, you have problems somewhere in your tests of the service. Likely some coverage is missing at the unit or integration levels.
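Roughly, what I would expect instead is a fragment like this inside the full stack fixture; the _ordersApi and _ui helpers are invented for the example:

    [Test]
    public async Task Dashboard_shows_the_newly_created_order()
    {
        // Given - populate data through the REST service. No assertion that it
        // "really" reached the database, because the order service already has
        // unit and integration tests covering exactly that.
        var orderId = await _ordersApi.CreateOrderAsync(customerId: 42, total: 99.95m);

        // When
        var dashboard = await _ui.OpenDashboardAsync();

        // Then - the only assertions are about the behaviour under test.
        Assert.That(dashboard.ContainsOrder(orderId), Is.True);
    }

The setup call can still blow up, of course, but then the failure points straight at the service, and the service's own tests are where the investigation belongs.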
There is a second option, which is you don't have a service for what you want to do and have to do the setup via the UI (not nice, but it happens). Why would you not have a specific test to check you can do what you want?
I will add that I do not class waiting for something to happen as an assertion. Selenium especially has waits for good reason. The difference is that assertions are there to test; waits are not.
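For example, something like this is a wait, not a test (the selector is made up, obviously):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public static class Waits
    {
        // Give the page time to catch up; this checks nothing about behaviour,
        // it just synchronises the test with the UI.
        public static IWebElement WaitForOrderRow(IWebDriver driver)
        {
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            return wait.Until(d => d.FindElement(By.CssSelector("#orders .order-row")));
        }
    }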
As with most things, you may find that you have to use assertions where they do not belong. But you should recognise that it is a workaround for other problems that you are ending up suppressing.
Hello Andrew,
Thank you very much, this clarifies a lot for me, especially when you say "so anything you need to do to check that it is up and running is not what I am talking about", since the examples I provided are in that area. And just yesterday they proved useful, because after a new backend deployment I found some errors while setting up my test data; most likely not all servers behind the load balancer have been deployed correctly.
You will now ask why I have to find this when I want to test something completely different, and I will ask that very same question when I am back at the office. Yet the rather comprehensive error messages I got in my setup steps still proved helpful to the people who are now looking into it. So they did add value to someone.
I agree with you that it is not good if you have to check almost every little step while setting up your actual test; if you have to do this, you most likely have other issues. I think what makes me a little uneasy is when you say you should never do something.
I recognize in your discussion with Sergey that you are talking about testing a system which you have under full control, so you yourself or your team can actually perform all the tests: unit, integration, full stack. More than once I was in a situation where I could not do that. An application consumes output from another application, which is built by a completely different company in a different country. I can influence their coverage only to some degree.
Usually Consumer Driven Contract Tests are a good idea here, yet sometimes they are hard to sell (“you want to test that other system? why? I don’t see it”). In this context I understand why people add assertions in their setup, especially when they have had bad experiences in the past. In this case assertions might not solve a problem - in fact, they might even indicate there is one - but they are all the safety net these people have, and it makes sense that they use it rather than doing nothing.
In fact, assertions may even help to improve the situation, because they now have regularly failing checks in their reports and can point to these as an argument for performing consumer driven contract tests.
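Even a small, hand-rolled consumer-side check, roughly like the sketch below, can be that argument. This is not a full CDC tool such as Pact, and the endpoint and field names are invented:

    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;
    using NUnit.Framework;

    public class UpstreamContractChecks
    {
        [Test]
        public async Task Order_feed_still_contains_the_fields_we_actually_consume()
        {
            using var client = new HttpClient { BaseAddress = new Uri("https://other-company.example") };
            var json = await client.GetStringAsync("/api/orders/feed");

            using var doc = JsonDocument.Parse(json);
            var firstOrder = doc.RootElement[0];

            // Only the properties this app really reads are part of the "contract".
            Assert.That(firstOrder.TryGetProperty("id", out _), Is.True);
            Assert.That(firstOrder.TryGetProperty("totalAmount", out _), Is.True);
            Assert.That(firstOrder.TryGetProperty("currency", out _), Is.True);
        }
    }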
Basically I think we don’t really disagree. I just imagine that situations may arise where people either see value in adding assertions or see no other way, and “generally don’t do that” is maybe not what helps those people in their context. Just because I have not seen a reason for something does not mean there is none.
Best Regards,
Marcel
While it's true that your setup part should be covered by some other tests, you now have a dependency between tests. If your setup part fails, the appropriate test will show you that, but you will also have a bunch of other failing tests too. Figuring out which one is the true failure and which are incidental can be tricky. You can:
1) Express dependencies between tests explicitly so that your testing framework doesn't run dependent tests if the depended-on test fails. It can be a pain to maintain all those dependencies, though.
2) Use mocking in your setup part. This can get very ugly and troublesome, but sometimes very easy and useful. Depends on the particular test and the abilities of the chosen mocking framework.
3) Use assumptions provided by certain frameworks like NUnit. Write Assume.That instead of Assert.That and you get nice readability, plus when it throws, the test is marked not as failed but as meaningless / inconclusive / ignored / you name it. This feels like the best option if your framework allows it.
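In rough NUnit terms option 3 looks something like this; the import helper and report service are invented for the example:

    [Test]
    public void Report_totals_match_the_imported_orders()
    {
        // Precondition expressed as an assumption rather than an assertion:
        // if the setup misbehaves, the test ends up Inconclusive, not Failed.
        var imported = TestData.ImportSampleOrders();   // hypothetical setup helper
        Assume.That(imported, Is.Not.Empty, "Import failed - marking this test inconclusive.");

        var report = new ReportService().BuildDailyReport();   // hypothetical service under test
        Assert.That(report.OrderCount, Is.EqualTo(imported.Count));
    }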
What is your take on this?
As with most things, the answer comes down to 'it depends'. The ideal is to be able to trust that what you are doing in your setup will work, by having tests earlier in your pipeline that show it will work under the required circumstances. Most pipelines I have seen will run unit tests first, continue to integration tests if they pass, then full stack tests once all integration tests have passed. This is essentially your option 1, but it is also how it is meant to work. If you have the case where tests are dependent on another test running at the same level, that is more tricky. You could create test suites that are dependent on others running first, but for full stack tests that will likely add a lot of time. In my experience, it is better to spend the time getting the tests at earlier levels to be trustworthy, and let the occasional failure slide until then. Unless there are hundreds of people all pitching in to sort out failing tests (if so, I'd love to know how you managed it), you'll soon start to recognise the pattern of failures.
I was talking mostly about unit tests because, you know, sometimes they are not exactly unit. It's often simpler to rely on other classes rather than trying to mock everything just to get a true unit test.
I had no problems recognizing failures because unit tests are simple and a minute of debugging any of the failed tests is usually enough to figure out what's wrong.
Yet, there is always room for improvement. Even if it saves me just a couple of minutes, it would certainly be better if the test run tells me exactly what's wrong right away. And that's why I was looking at those setup assumptions, because as I see it, the only trade-off is that they will clutter tests a little bit, but unless you abuse it, it should be OK.