Adding a Peripheral View to Test Automation

One reason human testers are far better at finding bugs than automated scripts is that testers aren’t just validating the expected results. They are also able to recognize unexpected surprises and anomalies within the application.

For example, take a look at this application and assume the test is to go to the ‘Create New App’ page of the Setup tab and ensure that these 12 options exist in the grid.


The tester would write a test case that says exactly that, and maybe add a bit more detail about what the 12 options should be, including the title and description. The automation engineer would write a script to verify that these 12 options exist on this page and would likely add assertions to verify that each title and description matches what’s expected.

Here’s the problem. One day, this same page looks like this:



Did you notice the exception at the bottom? Congratulations, you’re a human, and you’re much more alert than the automated script! While the test case only told you specifically what to look for, it also implied “nothing should be weird about this page.” A tester would have caught this and opened a bug, while the automated script would pass every time because it follows only the explicit instructions of the test case.

What the tester did was use her peripheral view. The test case didn’t have to say “ensure there are no exceptions shown in the message bar”; she subconsciously verified that, in addition to the 12 options, nothing else was out of place.

As automation engineers, we need to be more conscious of our peripheral view and, as best we can, build it into our scripts. I’d argue that our peripheral view is not limited to what may be hidden in error consoles and message bars; it also includes details that are hidden from the test case itself.
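As a rough illustration, here is a minimal sketch of what such a peripheral check might look like. The page is modeled as a plain dict, and all names are hypothetical; in a real script, these would be UI element lookups.

```python
# Sketch of a script that verifies the explicit test case AND a
# peripheral concern the test case never spelled out.
# (Hypothetical names; the page is a plain dict for illustration.)

def check_create_new_app_page(page):
    # Explicit check straight from the test case: 12 options in the grid.
    assert len(page["grid_options"]) == 12, "expected 12 options in the grid"
    # Peripheral check: nothing unexpected in the message bar.
    assert page["message_bar"] == "", f"unexpected message: {page['message_bar']}"

# A healthy page passes both checks.
check_create_new_app_page({
    "grid_options": [f"option {i}" for i in range(12)],
    "message_bar": "",
})
```

A page showing an exception in the message bar would pass the explicit check but fail the peripheral one, which is exactly the bug the human tester caught.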

For example, I teach an automation workshop for testers and make it a point to show them how automating straight from a test case can lead to faulty tests. I present them with a grid whose contents they can filter by entering text.


Their test case is to enter “Cucumber” and ensure that “The Cucumber for Java Book” is shown. I have them write a script to verify that test case.


Their scripts initially always look like this:
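The original screenshot of the students’ script isn’t reproduced here, so this is a minimal sketch of its logic, with the grid modeled as a plain list of row titles (hypothetical names throughout).

```python
# Sketch of the naive script: it verifies only what the test case
# literally says, i.e. that the expected book appears in the grid.

def naive_check(grid_rows):
    assert "The Cucumber for Java Book" in grid_rows

# Passes when the filter works and only the matching book is shown...
naive_check(["The Cucumber for Java Book"])

# ...but ALSO passes when the filter is completely broken and every
# book is still displayed.
naive_check([
    "The Cucumber for Java Book",
    "Another Book",
    "Yet Another Book",
])
```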

That script runs, the test passes, they see green, and everyone feels really good. But guess what? If the app behaved in the following way, the test as written would still run, pass, and show green, giving everyone that feel-good feeling while the application is broken!


What’s not explicitly stated here is that “The Cucumber for Java Book” should be the ONLY book that is shown! This was implied but not actually written in the test case. If we tested this as a human, we’d immediately catch the fact that the grid is not filtering at all, and this might very well be a high-severity bug. However, as you can see from the script above, this bug would go uncaught by the automation. We must be diligent about adding both the explicit and the implicit verification points to our scripts. One more line of code would save the day!
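Here is that “one more line” in sketch form, again with the grid modeled as a plain list of titles and hypothetical names:

```python
# Sketch of the stricter script: the explicit check plus the implicit one.

def strict_check(grid_rows):
    assert "The Cucumber for Java Book" in grid_rows  # explicit: it's shown
    assert len(grid_rows) == 1                        # implicit: it's the ONLY one

# Filter works: both assertions pass.
strict_check(["The Cucumber for Java Book"])

# Filter broken: the extra assertion catches it.
try:
    strict_check(["The Cucumber for Java Book", "Another Book"])
except AssertionError:
    print("broken filter caught")
```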


Angie Jones
  • Jim Hazen

    You just pointed out the main problem with testing in general. Typically, people only test for the positive conditions, the happy path, when they should also test for the negative conditions, be they fault/error-condition responses (validate an error message) or forced-failure scenarios (input invalid data and check whether the system caught it correctly), as examples. They also need to test the boundary areas to see whether the software behaves. And yes, a lot of that is done by hand and is typically best done by hand. You can automate to a degree, but there is nothing better than human eyes and brains. That is why automation will never replace a human in the testing world. And I’m an automation guy saying that. Good post with good points, well done.

    November 23, 2016 at 12:53 pm
  • Beni Keyserman

    Hi Angie,
    I strongly relate to this article. However, I have a question about the last statement.
    What happens if another book about Cucumber is added to the store? The test would fail although it should have passed.
    I think automated tests should be robust, when possible, to reduce the impact of data on those tests; different environments have different data, for example. I think this issue is a pain point that not many address.
    For the Cucumber example, I would have the test return all the titles of the displayed books into an array and search each title for the keyword. This would make the test more robust to data changes.
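Beni’s suggestion could be sketched like this, with the displayed titles as a plain list and hypothetical names; a real script would read the titles out of the grid:

```python
# Sketch: collect every displayed title and check that each one contains
# the filter keyword, so the test survives changes to the staged data.

def all_titles_match(grid_titles, keyword):
    return all(keyword.lower() in title.lower() for title in grid_titles)

# Correctly filtered grid: every title contains "Cucumber".
assert all_titles_match(
    ["The Cucumber for Java Book", "Cucumber Recipes"], "Cucumber")

# Broken filter: an unrelated title slips through and the check fails.
assert not all_titles_match(
    ["The Cucumber for Java Book", "Agile Testing"], "Cucumber")
```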


    November 27, 2016 at 2:19 am
    • Angie Jones

      Beni, yes, great point. If I were writing this script for real, I would programmatically consult a source of truth (database, web service) within this test to get the number of books that should be visible and then write my assertion based on that. You’re absolutely correct, the staged data could change even from environment to environment and we don’t want brittle tests.
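A sketch of that idea, deriving the expected row count from a source of truth rather than hard-coding it; the “database” is a plain list here, and in practice this would be a query or web-service call (hypothetical names):

```python
# Sketch: compute the expected matches from a source of truth, then assert
# the grid shows exactly those, so staged-data changes don't break the test.

def expected_matches(inventory, keyword):
    return [t for t in inventory if keyword.lower() in t.lower()]

def data_driven_check(grid_rows, inventory, keyword):
    expected = expected_matches(inventory, keyword)
    assert len(grid_rows) == len(expected)
    assert all(keyword.lower() in t.lower() for t in grid_rows)

inventory = ["The Cucumber for Java Book", "Cucumber Recipes", "Agile Testing"]
# Two Cucumber books exist, so a correctly filtered grid shows exactly two.
data_driven_check(
    ["The Cucumber for Java Book", "Cucumber Recipes"], inventory, "Cucumber")
```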

      November 27, 2016 at 2:41 am
