Fernando De Vega - IT & More

Welcome to my blog! This is the English version. Here I write about software development, technology, and all kinds of other things. Thanks for visiting!

Full stack developer: NodeJS, Ruby, PHP, and others. Telecommunications Engineer and technophile. I love learning new stuff and technologies.


Functional Testing a SailsJS API with Wolfpack: Part 2

Fernando De Vega

In the previous article we saw a boilerplate setup for functional testing a SailsJS API. In this article, we are going to use that boilerplate to start testing our app.

Time to GET dirty and start testing.

Testing GET transactions

Now that we've set up our test environment, we can start writing our functional tests for the API. I'm going to go TDD (Test-Driven Development) here and write my tests before the code.

Remember you can download all the code we are discussing here at https://github.com/fdvj/sails-testing-demo. Just clone the repository and you will be good to go.

With the directory structure I've already defined and briefly explained in the previous post, I'm going to start writing my tests. As you may remember, we created a spec/api/ directory where we are going to put all our tests. Personally, I like to organize it a bit more and separate my functional tests from my unit tests. So, in the spec/api/ directory, create two new directories: functional and unit. For this article, we will be writing our tests in spec/api/functional.

For this article, my application will be composed of only two endpoints that I need to test: GET /posts and GET /posts/:id.

In the following articles we will be covering the other common HTTP Verbs (POST, PUT and DELETE).

I've disabled Sails blueprints so that these routes won't get created automatically by Sails.

To keep things simple, GET /posts should return a 200 OK with an array of the posts in the database. GET /posts/:id should return a 200 OK with the given post and update a visit counter; if the post does not exist, it should return a 404.

Let's start by creating our test in spec/api/functional/PostsSpec.js. In there we will only test the Posts endpoint.

// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

});

The first endpoint we are going to test is GET /posts. But first, we need to set up our tests. One important thing is to have the "database" (in this case, wolfpack acting as the db) reset every time a test runs.

The importance of resetting is that it ensures we are accurately testing the conditions of each test, and prevents data left over from other tests from giving us a false positive. To do so, we create a small helper function to reset wolfpack:

// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);
});

Before continuing with our testing, I want to set up our models so that I know what the fixtures will look like when I create them. So far, our app needs two models: a Posts model, which will hold our post data, and a Counters model, which will keep track of how many times a post has been viewed.

Our Posts model:

// api/models/Posts.js
module.exports = {

  attributes: {
    title: 'string',
    content: 'string',
    author: 'string'
  }
};

And our Counters model:

// api/models/Counters.js
module.exports = {

  attributes: {
    postId: 'integer',
    count: 'integer'
  }
};

Now that I have a better idea of how my data will look, I can create my fixtures. In spec/fixtures/ create a new file called posts.js and paste the following:

// spec/fixtures/posts.js
module.exports = [  
  {
    id: 1,
    title: 'My First Novel',
    content: 'This is my novel. The end.',
    author: 'William Shakespeare'
  },
  {
    id: 2,
    title: 'My first poem',
    content: 'Where is the question mark!"#$%&/()=?',
    author: 'Sappho'
  }
];

And now for our Counters fixture, create a new file in spec/fixtures/ called counters.js and paste the following:

// spec/fixtures/counters.js
module.exports = [  
  {
    id: 1,
    postId: 1,
    count: 3
  },
  {
    id: 2,
    postId: 2,
    count: 1
  }
];

Fixtures are the data that wolfpack will return from the "database". Using fixtures is not required; if you like, you can use factories for testing instead, but for simplicity, in these articles we are going to use fixtures.
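For reference, a factory is just a function that builds fresh test data on demand instead of reusing a shared static array. Here is a minimal sketch; the buildPost helper, its defaults, and the spec/factories/post.js path are all hypothetical and not part of the demo repo:

```javascript
// spec/factories/post.js (hypothetical path) -- a minimal factory sketch.
// Unlike a static fixture, a factory builds a fresh object per call,
// and each test overrides only the fields it cares about.
let nextId = 1;

function buildPost(overrides) {
  const defaults = {
    id: nextId++,
    title: 'Untitled',
    content: '',
    author: 'Anonymous'
  };
  // fields in `overrides` win over the defaults
  return Object.assign({}, defaults, overrides || {});
}

module.exports = buildPost;
```

A test could then call buildPost({author: 'Sappho'}) and get a complete post without coupling itself to data shared with every other test.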

Great! We have our fixtures set up. Now we can do some actual testing and coding!

Our only expectation for GET /posts is for it to return a 200 OK status and an array of posts. So let's write that expectation in spec/api/functional/PostsSpec.js:

// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);

  describe('GET /posts', function(){

    it("should return an array of posts", function(done){
      wolfpack.setFindResults(fixtures.posts);

      request(server)
        .get('/posts')
        .expect(200, fixtures.posts, done);
    });

  });

});

Let's back up a little and see what's going on here. First, I set up Wolfpack to return the array of posts from our fixture for every find operation with wolfpack.setFindResults(fixtures.posts). Whatever we query, Wolfpack will return that array. With wolfpack you can set different results depending on certain criteria, but for simplicity, let's set it for all queries, since we are only going to have one find operation.

Next, I tell Supertest to make a call to the server. Remember when we set supertest to global.request and sails.hooks.http.app to global.server? Well, this is where we use them: request(server).

On the same object I tell supertest to connect to the server and make a GET request to /posts: request(server).get('/posts').

Next, I'm telling supertest to expect a 200 code in return, as well as the same array of fixtures I gave wolfpack, which in theory are all the posts in the database. Finally, I pass the done callback, which gets called when supertest gets the answer, whether it's a success or a failure: request(server).get('/posts').expect(200, fixtures.posts, done).

It is important to treat all supertest tests as asynchronous. These are asynchronous calls to the API; if you don't treat them as asynchronous operations, you can get random results in your tests.
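To see why, here is a self-contained toy illustration (no supertest involved; fakeRequest is a stand-in I made up for the HTTP call) of how a synchronous test body finishes before an asynchronous callback ever runs:

```javascript
// Why supertest calls must be treated as asynchronous: the HTTP round-trip
// resolves on a later tick, so a test body that returns synchronously is
// over before any assertion inside the callback has run.
function fakeRequest(callback) {
  // simulate the response arriving on a later tick
  setImmediate(() => callback({ status: 200 }));
}

let checked = false;

// Wrong: the test body returns immediately, before the callback fires
fakeRequest(res => { checked = (res.status === 200); });
console.log(checked); // still false at this point

// Right: signal completion through a done-style callback
function runTest(done) {
  fakeRequest(res => {
    checked = (res.status === 200);
    done();
  });
}
```

This is the reason every expectation chain in these specs ends by handing done to .expect: the test runner then waits for the response instead of declaring victory on an assertion that never ran.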

Before moving on, I just want to note that it is bad practice to make the expectation the same as the input of the db; in other words, to use the same fixture as the expectation. Tests should be written so that if something changes that could affect your application, you find out when your test breaks. If you change the fixture, your expectation automatically changes with it, so you may be unaware of a breaking change introduced into your database or codebase.

The proper way of testing this would be like this:

describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);

  describe('GET /posts', function(){

    it("should return an array of posts", function(done){
      wolfpack.setFindResults(fixtures.posts);

      request(server)
        .get('/posts')
        .expect(
            200, 
            [{
                id: 1,
                title: 'My First Novel',
                content: 'This is my novel. The end.',
                author: 'William Shakespeare'
              },
              {
                id: 2,
                title: 'My first poem',
                content: 'Where is the question mark!"#$%&/()=?',
                author: 'Sappho'
             }],
            done);
    });

  });

});

Hard-coding the expectation within the test is not a good practice either, but it is at least better. It is best to keep the expectation in a separate file (which also serves as good documentation). I hard-coded it here because I want to keep the number of lines and files in the article to a minimum and keep it as clear as possible. So, for the sake of your scroll wheel (and mine), to keep this post as small and readable as possible, I will keep using the fixtures (the bad practice) in some of the tests.
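As a sketch of that separate-file practice, the expectation could live in its own module, written out by hand rather than derived from the fixtures. The spec/expected/posts.js path is hypothetical, not part of the demo repo:

```javascript
// spec/expected/posts.js (hypothetical path) -- expectations written out
// by hand, independent of the fixtures fed to wolfpack. If a fixture
// changes, this file does not, so the test breaks and flags the change.
const expectedPosts = [
  {
    id: 1,
    title: 'My First Novel',
    content: 'This is my novel. The end.',
    author: 'William Shakespeare'
  },
  {
    id: 2,
    title: 'My first poem',
    content: 'Where is the question mark!"#$%&/()=?',
    author: 'Sappho'
  }
];

module.exports = expectedPosts;
```

The spec would then require this module and call .expect(200, expectedPosts, done), keeping the test's inputs and its expectations decoupled.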

Now run your tests in your console:

grunt test  

Great, it failed! Obviously, since you haven't written any code yet to make your expectation happen. Let's make our tests pass then.

First things first, let's create our controller and the code we will use for the GET /posts route. We will call it PostsController.js:

// api/controllers/PostsController.js
module.exports = {

  getAllPosts: function(req, res) {
    Posts.find().then(function(posts){
      return res.json(200, posts);
    });
  }    
};

Now let's add our newly created route to our routes file:

// config/routes.js
module.exports.routes = {

  'GET /posts': {
    controller: 'PostsController',
    action: 'getAllPosts'
  }

};

Good. Now let's run the tests again. Type grunt test in your console and hit enter. Let's see the results.

Congratulations! You've just passed your first functional test, TDD style! Now let's move on to something more complicated: the GET /posts/:id endpoint.

The GET /posts/:id endpoint has two separate tests we need to run. First, we need to test that when the post does not exist, it returns a 404. Second, when the post does exist, it should return a 200 with the post itself, and it should trigger an update, invisible to the user, that increases the visit counter. Let's see how we can do this.

In the same spec/api/functional/PostsSpec.js we are working on, let's add our new tests:

// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);


  // ... The GET /posts tests go here  ...

  describe('GET /posts/:id', function(){
    beforeEach(function(){
      clear();
      Counters.increase.reset();
    });

    it("should return 404 if the post is not found", function(done){
      request(server)
        .get('/posts/1')
        .expect(function(){
          // Make sure the counter is not increased
          if (Counters.increase.called) { throw Error('Counter should have not increased'); }
        })
        .expect(404, done);
    });

    it("should return 200, the post, and increase the visited counter", function(done){
      Posts.setFindResults({id: fixtures.posts[0].id}, fixtures.posts[0]);
      Counters.setFindResults({postId: fixtures.posts[0].id}, fixtures.counters[0]);
      request(server)
        .get('/posts/1')
        .expect(function(res){
          // Verify that the correct counter was increased
          if (!Counters.increase.calledWith(1)) { throw Error('Counter for post should have increased'); }
        })
        .expect(200, {
          id: 1,
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        }, done);
    });

  });

});

Woah! Lots of stuff going on there. What's happening?

Let's start with the setup in the beforeEach. We run the clear function we set up at the beginning to reset wolfpack. Now comes the tricky part.

If you read Wolfpack's documentation, you will see that wolfpack spies on every method in a Sails model. But wait: Counters is a model we created, and we haven't defined any class methods yet! No, we haven't, but I'm expecting to write a method called increase that will increase the counter for a given post when I call it. Once I write it, Wolfpack will inject Sinon spies into it, which will let me know if it was ever called. The problem is, if I don't reset Sinon's call counter after each test, I may get a false positive. So that's why I'm resetting the call counter for a method I haven't written yet, but expect to write pretty soon.
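To make that spy bookkeeping concrete, here is a hand-rolled sketch of what a Sinon-style spy records. Wolfpack injects real Sinon spies; this toy version of mine only illustrates why called, calledWith, and reset behave the way they do:

```javascript
// A toy spy: wraps a function and records every call, so a test can
// ask "was it called?" and "with what?", and wipe the history between tests.
function spy(fn) {
  function wrapped(...args) {
    wrapped.called = true;
    wrapped.calls.push(args);
    return fn ? fn.apply(this, args) : undefined;
  }
  wrapped.called = false;
  wrapped.calls = [];
  // true if any recorded call started with the expected arguments
  wrapped.calledWith = (...expected) =>
    wrapped.calls.some(args => expected.every((v, i) => args[i] === v));
  // forget all recorded calls -- this is what beforeEach needs to do
  wrapped.reset = () => { wrapped.called = false; wrapped.calls = []; };
  return wrapped;
}
```

Without the reset in beforeEach, a call recorded by one test would still be visible to the next, which is exactly the false positive described above.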

Moving on, let's examine this part a little:

    it("should return 404 if the post is not found", function(done){
      request(server)
        .get('/posts/1')
        .expect(function(){
          // Make sure the counter is not increased
          if (Counters.increase.called) { throw Error('Counter should have not increased'); }
        })
        .expect(404, done);
    });

What's going on here? Well, for starters, Supertest allows us to chain multiple expectations for a request. I have two expectations in my test.

First of all, if the post does not exist, it obviously cannot increase the counter, so I've created a small unit test here with this expect (Supertest lets me express an expectation as a function). If the Counters.increase method of the model is called, my test fails, because it shouldn't have been.

To cover that scenario, I throw an exception with my own verbose message explaining why the test failed. Supertest will catch the exception and fail the test. In the results, I will then be able to see my message, in this case: Counter should have not increased.

The next expectation is a 404. Since we are resetting our wolfpack database every time a test runs, the Posts table/collection will be empty, so the code in our controller should handle this and interpret it as a 404.

Now onto the next part of the tests:

    it("should return 200, the post, and increase the visited counter", function(done){
      Posts.setFindResults({id: fixtures.posts[0].id}, fixtures.posts[0]);
      Counters.setFindResults({postId: fixtures.posts[0].id}, fixtures.counters[0]);
      request(server)
        .get('/posts/1')
        .expect(function(res){
          // Verify that the correct counter was increased
          if (!Counters.increase.calledWith(1)) { throw Error('Counter for post should have increased'); }
        })
        .expect(200, {
          id: 1,
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        }, done);
    });

We are now testing for when the post exists. First, we set up wolfpack so that it returns a post from the fixtures.

Wait, I don't see us using wolfpack anywhere; we are calling a method setFindResults on the model! Actually, wolfpack injected that method into the model. It is a query method that lets you tell wolfpack what results to provide when a given query is performed.

So the method goes like this:

// Format
Model.setFindResults(condition, results)

// Example
Model.setFindResults({name: 'John'}, {id: 2, name: 'John'})

Model.findOne({name: 'John'}).then(function(user){  
  // user will be {id: 2, name: 'John'}
});

You can read Wolfpack's documentation for more details on query methods.

Back to our code: in our case, when the model asks for a post with an id of 1 (the id of the first fixture), wolfpack will return the data of the first fixture to Sails.

The same happens with the Counters model. Most likely, it will look for the entry that keeps track of the counter, in which case we return the first counters fixture.

On the expectations side, we first make the call GET /posts/1, which we expect to call the Counters.increase method. Here we are also doing a small unit test, using Sinon's calledWith method to check that Counters.increase was passed a 1 (the postId) as its argument. In other words, somewhere in the endpoint the app should call Counters.increase(1). If that does not happen, the test fails with the exception.

Finally, I'm expecting a 200 HTTP status, along with an object identical to what we have as the first post fixture.

Don't forget to pass the done callback at the end so that the test doesn't time out.

If we now run the tests, they should fail. Good, let's make them green.

First, let's write the methods in the Counters model that increase the count:

// api/models/Counters.js
module.exports = {

  attributes: {
    postId: 'integer',
    count: 'integer',

    increase: function() {
      this.count++;
      return this;
    }
  },

  increase: function(postId) {
    return this.findOne({postId: postId}).then(function(counter){
      return counter.increase().save();
    });
  }
};

As you can see, our model looks up the entry for the post, then increases the count and saves it. Just as we planned with wolfpack. Now let's write the code in our controller that will handle this route:

// api/controllers/PostsController.js
module.exports = {

  getAllPosts: function(req, res) {
    Posts.find().then(function(posts){
      return res.json(200, posts);
    });
  },

  // This getPost handles the GET /posts/:id route
  getPost: function(req, res) {
    Posts.findOne({id: req.params.id}).then(function(post){
      if (!post) {
        return res.send(404);
      }
      Counters.increase(post.id);
      return res.json(200, post);
    });
  }

};

Just a quick review of the code. First, we find the post with the id given in the URL (req.params.id). If it does not exist, we return a 404. Otherwise, we increase the counter and return a 200 along with the post. Exactly what our tests expect.
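One thing worth noting: getPost never handles a rejected query, so if the database call fails, no response is sent and the request hangs until it times out. Here is a hedged sketch of the same action with a catch handler. I've written it as a factory over the models purely so it can be exercised in isolation; in the real controller you would simply append the .catch to the existing chain (res.serverError is part of Sails' standard responses):

```javascript
// Sketch: the getPost action with error handling. Takes the models as
// arguments so the function can be tested without a running Sails app.
function getPost(Posts, Counters) {
  return function (req, res) {
    return Posts.findOne({ id: req.params.id })
      .then(function (post) {
        if (!post) { return res.send(404); }
        Counters.increase(post.id);
        return res.json(200, post);
      })
      .catch(function (err) {
        // never leave the request hanging on a failed query
        return res.serverError(err);
      });
  };
}
```

With this in place, a rejected Posts.findOne produces a 500 response instead of a silent timeout, and the failure shows up in your test output rather than as a mysterious mocha timeout.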

Let's build the route then.

// config/routes.js
module.exports.routes = {

  'GET /posts': {
    controller: 'PostsController',
    action: 'getAllPosts'
  },

  'GET /posts/:id': {
    controller: 'PostsController',
    action: 'getPost'
  }

};

Great! Our shiny new route is ready to be tested. Let's run those tests and watch them turn green.

grunt test  

Beautiful! Our tests all return green. Now when we refactor or add a new feature and break something, we'll know for sure.

What's next?

In this article we saw how we can test GET transactions in our API. There are still three other verbs we need to cover: POST, PUT and DELETE. In the next article we are going to cover how to test POST transactions in your API. Let's move on then.

Functional Testing a SailsJS API with Wolfpack: Part 3
