
Verifying Entire API Responses


Most people do a pretty lazy job of testing APIs. In my ABCs of APIs workshop, I ask people to manually test an API. When looking at a response, they typically glance at it, and maybe carefully review some of the key fields. Same goes for automated tests – only the key fields are typically asserted against.

Let’s look at this GET call:
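For illustration, assume a simple GET like the following (the endpoint is hypothetical):

    GET http://localhost:8080/api/users/1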

This returns the following response:
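Here's a hypothetical payload of the kind being discussed (the fields are illustrative):

    {
        "id": 1,
        "firstName": "Jane",
        "lastName": "Doe",
        "email": "jane.doe@example.com",
        "address": {
            "street": "123 Main St",
            "city": "Springfield",
            "zip": "12345"
        },
        "createdAt": "2019-06-20T14:32:00Z"
    }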

Given this response, many unit tests will make sure the response code is 200 and that the body is not null. Functional tests may go a bit further and verify some of the key fields of the body.

Most people won’t script a test that verifies EVERY bit of this response, mostly because it’s really tedious to do and arguably some of the fields may pose a lower risk if they are incorrect.

My argument is that you don’t know how every customer will use your API, and therefore can’t be sure which of these fields are most important to them.

Because of this, I’ve been seeking a way to verify an entire response with a single assertion. I’d gotten close to being able to do this by deserializing the API response into a POJO and comparing the resulting object with an expected object. But even with this approach, I needed to code up the POJO, and build the expected object in my code. Still a bit tedious.
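For comparison, here's a minimal sketch of that earlier approach; the User POJO, its fields, and the use of Jackson are assumptions for illustration:

    // Deserialize the response into a hand-written POJO...
    User actual = new ObjectMapper().readValue(response.getBody().asString(), User.class);

    // ...build the expected object in code...
    User expected = new User(1, "Jane", "Doe", "jane.doe@example.com");

    // ...and compare the two (requires User to implement equals())
    Assert.assertEquals(actual, expected);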

Luckily, Mark Winteringham taught a workshop on Approval Tests (created by Llewellyn Falco) and showed us how to verify an entire response body! I went home and played with it more on my own and am loving it!

After the first run of your test, Approval Tests will save your expected result to a file. Then on each subsequent run, it will compare the new result to what’s saved as the expected result. If the results differ, the test fails, and a diffing program such as DiffMerge can show you the differences between the files.

 


Video by Clare Macrae

 

Dependencies

I created a pom file with Approval Tests, TestNG (as the test runner), and Rest-Assured (as the API executor). Note that Approval Tests can work alongside any other test runner and API executor, and there’s support for several programming languages as well.
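A sketch of the relevant dependency section (these are the standard Maven Central coordinates; the version properties are placeholders to pin to whatever releases you're using):

    <dependencies>
        <!-- Approval Tests for golden-master style verification -->
        <dependency>
            <groupId>com.approvaltests</groupId>
            <artifactId>approvaltests</artifactId>
            <version>${approvaltests.version}</version>
        </dependency>
        <!-- TestNG as the test runner -->
        <dependency>
            <groupId>org.testng</groupId>
            <artifactId>testng</artifactId>
            <version>${testng.version}</version>
        </dependency>
        <!-- Rest-Assured as the API executor -->
        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>${restassured.version}</version>
        </dependency>
    </dependencies>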

 

Testing Response Body

In this test, I want to verify the entire body of the response. I can do that by calling Approvals.verify and passing in the body. While this verifies the body, the status code could still be wrong. So, I add one more call to Rest-Assured’s statusCode method to verify that as well.
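Here's a sketch of what such a test can look like; the endpoint URL is a placeholder, while Approvals.verify and the Rest-Assured calls are the standard APIs:

    import static io.restassured.RestAssured.given;

    import org.approvaltests.Approvals;
    import org.testng.annotations.Test;

    import io.restassured.response.Response;

    public class ApiApprovalTest {

        // placeholder endpoint for illustration
        private static final String URL = "http://localhost:8080/api/users/1";

        @Test
        public void verifyResponseBody() {
            // execute the GET call
            Response response = given().when().get(URL);

            // Rest-Assured assertion for the status code
            response.then().statusCode(200);

            // single assertion for the entire body via Approval Tests
            Approvals.verify(response.getBody().asString());
        }
    }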

When executed the first time, this will fail because there’s no approved file saved yet.

I take the result, make sure that it’s ok, and then save this into the approved file. This becomes my golden master.

When I run this again, Approval Tests compares the received file (which now contains the new info) with the approved file. Since they both match, the test passes.
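With the default namer, these files are written next to the test class and named after the test, for example ApiApprovalTest.verifyResponseBody.approved.txt and ApiApprovalTest.verifyResponseBody.received.txt, though both the location and the naming can be configured.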

 

Testing Entire Response

While testing the response body along with the status code may be enough, it doesn’t hurt to go ahead and test the entire response, including the headers. I’ve certainly found bugs here before, such as in headers that contain pagination information.

I created a new test to verify the entire response. Since the response as a whole also includes the status code, I no longer need a separate assertion just for that.
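Here's a sketch of that test, continuing the class above. One way to capture the entire response is to assemble the status line, headers, and body into a single String before verifying; exactly how you format that String is up to you:

    @Test
    public void verifyEntireResponse() {
        Response response = given().when().get(URL);

        // Assemble status line, headers, and body into one String.
        // Because the status line is part of the verified content,
        // a separate statusCode assertion is no longer needed.
        String entireResponse = response.getStatusLine() + "\n"
                + response.getHeaders() + "\n\n"
                + response.getBody().asString();

        Approvals.verify(entireResponse);
    }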

 

Dynamic Data

You’ll quickly run into an issue with using this technique: most responses are not static. For POST calls, the response may include a newly generated ID. Headers may include the date or cookie information that changes every time the test is run. Fortunately, I remembered Mark teaching us a little trick to deal with this.

Since we’re working with Strings here, we can simply do a replaceAll and use a regular expression to find the lines that we know are dynamic and then mask those lines.

In my response, there were three dynamic headers, so I masked them as follows:
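The exact header names depend on your API; as an example, masking Date, Expires, and Set-Cookie headers with replaceAll could look like this (inside verifyEntireResponse, before the verify call):

    // Mask dynamic header lines before verifying.
    // The header names here are illustrative; mask whichever ones vary per run.
    entireResponse = entireResponse
            .replaceAll("(?m)^Date\\b.*$", "Date: <masked>")
            .replaceAll("(?m)^Expires\\b.*$", "Expires: <masked>")
            .replaceAll("(?m)^Set-Cookie\\b.*$", "Set-Cookie: <masked>");

    Approvals.verify(entireResponse);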

The file contents are then saved as:
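With masking like the above, the header portion of the approved file ends up looking something like this (illustrative values; the exact formatting depends on how you assembled the response String):

    HTTP/1.1 200 OK
    Content-Type: application/json
    Date: <masked>
    Expires: <masked>
    Set-Cookie: <masked>

    { ...response body... }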

 

Download Code

I’ve cleaned this up a bit and placed it on GitHub for your convenience.

Get Demo Code

Angie Jones
19 Comments
  • David Garratt

    Really cool

    Feels like this could be expanded on to create something like Applitools Eyes for Apis (it rhymes!)
    Run tests
    Put the baseline responses into a DB
    Human review of initial responses to set the baseline
    Allow for optional/changing values through annotation of some sort
    Run tests
    Flag differences between baseline and actual for human review
    June 24, 2019 at 10:21 am Reply
  • Nibs

    Thanks Angie. But for API responses with a lot of dynamic data, this seems a bit difficult. JSON schema validations should be the right fit there.

    June 25, 2019 at 12:09 am Reply
    • Stéphane Colson

      Yes, great post (again) Angie. I agree with Nibs here.

      Instead of ignoring the dates as you explained, you can at least check that the date is really a date and is valid with a schema validator. Same for other dynamic values: check that an expected int is an int (with min/max values if needed), that an email is valid, etc. It’s very easy to maintain and very powerful (I’ve done this only with Postman).
      April 14, 2020 at 5:38 am Reply
      • Angie Jones

        yep I agree

        April 16, 2020 at 8:30 pm Reply
  • Shawn Bacot

    This is a really interesting approach. I’m interested to hear what you think about using a schema validator like AJV for JavaScript to validate responses. It allows you to check for the presence of required properties and value types in each response, and to provide expected data values to validate against as well. I’ve been doing that, coupled with focused assertions for expected response values, particularly when creating and updating objects, with some success.

    June 25, 2019 at 11:49 am Reply
  • Lakshmikanthan

    This is really interesting.

    I have a use case where I am trying to see if this solution fits best.
    My test class has a single test method which calls 3 APIs (3 different scenarios with 3 different JSON outputs). I need to run this test method against different test data (in Excel), which I am passing to the test using a data provider.
    How efficiently can this be achieved using Approval Tests? Thanks in advance 🙂
    July 24, 2019 at 5:55 am Reply
  • Will O.

    This is great. I’ve tried it out and it works as expected. However, I need some help figuring out how I can move the approved.txt file(s) to a separate location for better structure without breaking the link to the test 🙂

    July 26, 2019 at 3:34 am Reply
  • Vindhiyan
    Thanks for the article

    You had masked headers in the test.
    How do you mask dynamic values in the response body?
    August 18, 2019 at 5:31 pm Reply
  • Sam Spokowski

    It seems like a lot of the features you are describing are all bundled in Jest’s snapshot testing functionality. Jest is a unit testing framework for JS projects. You can use it for testing APIs by pairing it with an HTTP call framework such as Axios. Jest has built in type matchers and can handle regex for verifying fields that you would expect to change each time. It also tells you the differences between your expected results and your actual results without having to go to a different tool (like DiffMerge). I would check it out https://jestjs.io/

    October 25, 2019 at 9:57 am Reply
    • Angie Jones

      limited to JavaScript, right? I use Java.

      October 31, 2019 at 11:59 am Reply
  • Vladimir Belorusets

    In my article “REST API Test Automation in Java with Open Source Tools” (2015) (http://www.methodsandtools.com/archive/restapitesting.php), I listed four tools for comparison of JSON actual and expected responses.

    For one statement POJO assertion, I used Unitils and Shazamcrest – recursive assertion libraries. The code example is in the article.
    October 27, 2019 at 11:05 pm Reply
  • Vladimir Belorusets

    For Windows 10 and Eclipse, DiffMerge does not pop up. You need to use TortoiseDiff.

    October 29, 2019 at 10:21 am Reply
  • Nagesh Kumar

    A couple of issues here:
    1. Whenever there is a value mismatch between the new response and the stored one, the error thrown is “java.lang.Error: Failed Approval”. It doesn’t tell us exactly which field has the mismatch.
    2. The location where the approval files get stored makes the entire framework or codebase look messy. They could be moved to the resources folder, or we could use an annotation of some sort and save them in the respective folder.
    3. API responses change frequently for APIs such as search or order details, with many people in the org using the same database. So regenerating approved.txt every time is a pain.

    April 17, 2020 at 8:32 am Reply
    • Angie Jones

      1. When the test fails, a diff editor pops up to show the difference. I’m sure this can be automated for CI reporting

      2. You can specify the directory where the files are output
      https://twitter.com/techgirl1908/status/1143401266707517441

      3. This seems like a test design issue. I wouldn’t automate tests for an API response that changes frequently. Sounds odd that a consistent request wouldn’t get a consistent response.

      Like any tool, this one has its purposes and shouldn’t be used for all situations. If it doesn’t fit your model, don’t use it.

      April 17, 2020 at 8:53 am Reply
  • swapnil Bodade

    Nice. Looking forward to more articles on APIs.

    November 4, 2020 at 1:52 am Reply
  • Gaurav Khurana

    Very good approach for matching static API responses.
    If they provided an easy way to handle dynamic fields, where we pass a list of attributes to ignore, it would be even better.

    good for learning. thanks for sharing

    May 26, 2021 at 11:32 pm Reply
