
Automation Sync: Exploring Strategy and Results
At Twitter, we currently have 14 automation engineers, and more than a quarter of them are new college hires. We all work on different projects that span various areas and platforms: web, iOS, Android, web services, etc. In a shop like this, it’s very easy to get lost in your own world and end up learning in isolation. To avoid this, we have started an automation cohort of sorts, where we meet periodically to share ideas, lessons learned, tools built, etc.
I led the first meeting and was asked to lead a discussion around a couple of articles. Because our new and junior automation engineers are just beginning their journey in test automation, we reviewed two articles that explore strategy and meaningful results.
5 ways to simplify your automated tests by Paul Merrill
For Paul’s article, we looked at the five pieces of advice for simplifying automated tests and reflected on each one to see if it made sense to us, if we were currently exercising it in our own frameworks, and, if not, whether there was a valid reason why. Here’s what we found:
1. Decrease your scope
We agreed with Paul’s advice of limiting the purpose of a script to what we specifically want to test, and we discussed ways to make sure we don’t get caught up in over-scoped scripts. For example, a tester may go to a page and verify a bunch of things within that page. If they have to write up a test case, they might lump all of that into one. If that test case is later passed on to an automation engineer to script, the engineer should pause and find the best way to break it up into readable, individualized tests. This may mean that the number of automated tests does not exactly correlate with the number of manual test cases. I’ve seen lots of people in management get confused by this, so we discussed ways to show coverage and communicate the correlation to those who need to track progress.
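To make the “break it up” advice concrete, here’s a rough sketch of what the split might look like. The ProfilePage page object and its methods are hypothetical, invented purely for illustration; the point is that each test carries one narrow purpose instead of one script verifying everything on the page.

```java
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

public class ProfilePageTests {

    private ProfilePage profilePage; // hypothetical page object, for illustration only

    @BeforeMethod
    public void goToProfile() {
        profilePage = ProfilePage.open("techgirl1908"); // assumed navigation helper
    }

    // One manual test case might verify all of these in a single pass;
    // the automated suite keeps each check in its own narrowly scoped test.

    @Test
    public void displayNameIsShown() {
        assertEquals(profilePage.getDisplayName(), "Angie Jones");
    }

    @Test
    public void bioIsNotEmpty() {
        assertTrue(profilePage.getBio().length() > 0, "bio should not be empty");
    }

    @Test
    public void followButtonIsVisible() {
        assertTrue(profilePage.isFollowButtonVisible());
    }
}
```

One manual test case becoming three automated tests is exactly where the “numbers don’t match” confusion comes from, which is why showing coverage rather than raw counts matters.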
2. Fail for one, and only one, reason
For this one, we acknowledged that most of us are verifying several things within a single test. For example, if we do a search for a tweet, we verify the number of results returned, as well as several important attributes and actions of the results. The test would fail if any one of these is incorrect. Is this wrong? We discussed it and agreed that the test should fail for one reason, but it may take multiple verification points to assert that reason. The single reason for the test is to verify that the search results are valid, and that encompasses several checks. Otherwise, we’d need tests that exercised the same actions but each carried a single verification, which would make our execution times longer than necessary.
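One way to express “one reason, several verification points” in code is with a soft assertion: every check runs, but the test fails once, for the single reason that the search results are not valid. A rough sketch using TestNG’s SoftAssert, with SearchPage and SearchResults as hypothetical helpers:

```java
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SearchResultsTest {

    @Test
    public void searchResultsAreValid() {
        SearchPage searchPage = SearchPage.open();                   // hypothetical page object
        SearchResults results = searchPage.searchFor("#automation"); // hypothetical helper

        // Several verification points, one reason to fail:
        // "the search results are not valid."
        SoftAssert softly = new SoftAssert();
        softly.assertTrue(results.count() > 0, "at least one result is returned");
        softly.assertTrue(results.allContainTerm("automation"), "every result mentions the term");
        softly.assertTrue(results.allHaveAuthorAndTimestamp(), "results show author and timestamp");
        softly.assertAll(); // fails once at the end, listing every unmet check
    }
}
```

Because assertAll() reports all of the misses together, the failure still reads as one reason without forcing us into separate tests that repeat the same search actions.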
3. Identify responsibility (and hold to it)
We agreed to watch out for the flag words “and” and “or” when defining the responsibility of our tests. As Paul mentions, these can be indicators that your test is doing too much.
4. Ask, “What is the simplest thing that could possibly work?”
I used this opportunity to introduce our college hires to the Automation Pyramid introduced by Mike Cohn and we discussed alternatives to UI testing. I also shared with them my blog post on blurring the lines of the pyramid and not necessarily boxing your tests into one specific layer.
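As a small illustration of “the simplest thing that could possibly work,” a check that only cares about the data coming back can often skip the UI entirely and hit a service directly. The endpoint, token variable, and response shape below are placeholders for illustration, not the real Twitter API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.testng.annotations.Test;

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

public class SearchServiceTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void searchServiceReturnsResults() throws Exception {
        // Placeholder endpoint and token, for illustration only.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/search?query=automation"))
                .header("Authorization", "Bearer " + System.getenv("API_TOKEN"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Same intent as a UI search test, verified one layer down,
        // without driving a browser at all.
        assertEquals(response.statusCode(), 200);
        assertTrue(response.body().contains("\"results\""));
    }
}
```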
5. Avoid unnecessary dependencies
We discussed dependencies throughout tests, and I brought up the conversation I had with Paul and a few others on Twitter about this. I agree with his advice, and I always avoid dependent tests. However, I’ve found times where I needed to test the reverse of an action, and instead of having one test that does A and then another test that undoes A, I’ve sometimes just done it all in one test. I don’t feel good about this; it feels like it violates the “single responsibility” and “fail for one reason” rules.
I preach against multiple validations in one test. So, that felt wrong, but also felt wrong to split them and have dependency between them
— Angie Jones (@techgirl1908) August 23, 2017
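One way out of that bind is to let the “reverse” test create its own precondition through a faster channel, such as an API call or direct data setup, instead of depending on the test that performed the original action. A rough sketch, with TwitterApi and ProfilePage as hypothetical helpers:

```java
import org.testng.annotations.Test;

import static org.testng.Assert.assertFalse;
import static org.testng.Assert.assertTrue;

public class FollowTests {

    // Each test creates its own starting state, so neither test
    // depends on the other having run first.

    @Test
    public void userCanFollowAnAccount() {
        TwitterApi.ensureNotFollowing("testUser", "news_account"); // hypothetical API helper
        ProfilePage page = ProfilePage.open("news_account");       // hypothetical page object
        page.clickFollow();
        assertTrue(page.isFollowing());
    }

    @Test
    public void userCanUnfollowAnAccount() {
        TwitterApi.ensureFollowing("testUser", "news_account");    // precondition set up directly
        ProfilePage page = ProfilePage.open("news_account");
        page.clickUnfollow();
        assertFalse(page.isFollowing());
    }
}
```

Each test still fails for a single reason, and neither has to run before the other, at the cost of maintaining the setup helper.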
Overall, Paul’s article offered lots of great advice and led to good discussion around our own practices and the situations where we might step outside of these guidelines.
A look into a year of test automation by Maaret Pyhäjärvi
Maaret’s article was very interesting given that she identifies more as a tester than an automation engineer per se. However, admirably, she’s very involved with the automation process. The heart of her post was this quote:
I feel we do a lot of test automation, yet it provides less actionable value than I'd like
This is a state that a lot of teams find themselves in, and we were very curious to explore Maaret’s findings to identify possible trends that lead to this state, and attempt to avoid them at all costs.
Maaret essentially listed three expectations of their test automation and compared them with actuality:
Expectation 1: Instead of testers waiting around for long periods of time for random system crashes to occur, the automation can catch these crashes, which can then be analyzed for patterns.
Actuality 1: There has been no new development that causes new crashes, or else the crashes have not been uncovered by the automation.
We discussed this and considered whether we had a similar expectation of our own automation frameworks that was not actually being met. What really stuck out was the latter part of Maaret’s observation: “or else the crashes have not been uncovered by the automation.” The key takeaway for us was to ensure that we are scripting scenarios that align with our overall expectation of what we want to get out of automation. For example, if we are testing for random crashes, are our scripts exercising the application in a way that could provoke such a thing? And even if a crash did occur, would our automation even catch it? I’ve written about how to avoid test automation checks that are not aware of surrounding issues.
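As a sketch of what a “surroundings-aware” check might look like, the loop below exercises several screens and, at each step, asserts both the thing the step is about and the absence of crash dialogs or error banners around it. The App driver and its methods are hypothetical:

```java
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class CrashAwareNavigationTest {

    @Test
    public void browsingCoreScreensSurfacesNoErrors() {
        App app = App.launch(); // hypothetical app driver, for illustration only
        SoftAssert softly = new SoftAssert();

        for (String screen : new String[] {"home", "notifications", "messages", "profile"}) {
            app.navigateTo(screen);

            // The check the step is aimed at...
            softly.assertTrue(app.isDisplayed(screen), screen + " screen loads");

            // ...plus the surrounding conditions the step is NOT aimed at.
            softly.assertFalse(app.hasCrashDialog(), "no crash dialog on " + screen);
            softly.assertFalse(app.hasErrorBanner(), "no error banner on " + screen);
        }
        softly.assertAll();
    }
}
```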
We also pondered whether it’s necessarily a bad thing if test automation doesn’t find new issues. We agreed that it depends on your expectations. We do not expect our automation to necessarily find new issues, but rather to serve as a regression check that ensures old issues do not resurface.
Expectation 2: Test automation eliminates the need for people to constantly run regression tests on multiple operating systems.
Actuality 2: The issues that the automation uncovers are rarely related to an OS problem, and there are simpler tests already being run to determine if the backend system is down.
In discussing this, we realized that we rarely find OS-related issues in our web app; however, we do extensive mobile testing, and in that space we find plenty of variance between iOS and Android, especially because we have different developers working on each platform. So it’s valuable for us to run our mobile tests against different operating systems. However, is it a waste of time to run the web tests on different operating systems? It depends on whether a human would be expected to execute these same tests. If the automation is not finding OS-related issues but is saving a human the time and effort of executing these tests, we consider that a benefit. However, if the OS-related web tests are costing more (e.g., in maintenance) than the value they provide, we can definitely see Maaret’s concern.
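Where running the same suite against multiple operating systems does pay off, the usual approach is to make the platform a parameter rather than duplicate the tests. A rough sketch of a driver factory using Appium’s Java client; the server URL, device names, and app paths are placeholders:

```java
import java.net.URL;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class MobileDriverFactory {

    // One test suite, two operating systems: the platform is a parameter,
    // not a reason to write the tests twice.
    public static AppiumDriver createDriver(String platform) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        URL server = new URL("http://localhost:4723/wd/hub"); // placeholder Appium server

        if (platform.equalsIgnoreCase("android")) {
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", "Pixel_Emulator");   // placeholder device
            caps.setCapability("app", "/path/to/app.apk");        // placeholder app path
            return new AndroidDriver(server, caps);
        } else {
            caps.setCapability("platformName", "iOS");
            caps.setCapability("deviceName", "iPhone Simulator"); // placeholder device
            caps.setCapability("app", "/path/to/app.ipa");        // placeholder app path
            return new IOSDriver(server, caps);
        }
    }
}
```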
Expectation 3: Automated tests can detect dependency-related issues that developers may not have been aware of.
Actuality 3: The tests find issues, but those issues are not related to the area of the application that the automation was written for.
Maaret mentioned that the automation written may be better serving other teams, and perhaps she should hand the tests over to those teams to execute themselves. Initially, we had a hard time relating to this notion because, as automation engineers, any issue that our scripts find counts as a win. It’s a moment of great pride when our scripts find an issue that otherwise would not be apparent. But then we remembered Maaret’s position in her company and considered that she may have a different view on this. We believe she sees automation as a tool to help her test her application. That’s certainly a valid perspective, and if the tool is doing more for others than it is for her, we can see why she’d want to hand it off to those teams, especially given this quote:
I cannot remember more than one instance where the tests that should protect my team have found something that was feedback to my team
We really appreciated going through Maaret’s expectations vs actualities and taking a hard look at our own automation to determine if it’s providing the value we seek.
shiv
Thanks for this thought-provoking article and for sharing your experience. After reading it, I am motivated to do a similar exercise with the automation project I am currently working on. I relate most to Maaret’s article: I get these thoughts but postpone thinking them through in depth, which never happens.
Kha
Thanks for a great article. I had one question around decreasing the scope. As mentioned, “This may mean that the number of automated tests does not exactly correlate with the number of manual test cases. I’ve seen lots of people in management get confused by this, so we discussed ways to show coverage and communicate the correlation to those who need to track progress.” What are some different ways to avoid this confusion?