<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Fernando De Vega - IT & More]]></title><description><![CDATA[Welcome to my blog! This is the english version. Here I write about software development, technology and all kinds of stuff. Thanks for visiting!]]></description><link>https://fernandodevega.com/en/</link><generator>Ghost 0.7</generator><lastBuildDate>Sun, 08 Mar 2026 22:46:33 GMT</lastBuildDate><atom:link href="https://fernandodevega.com/en/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Functional Testing a SailsJS API with Wolfpack: Part 3]]></title><description><![CDATA[<p>In the previous article we saw how to test <code>GET</code> transactions in our API. There's still a lot to cover so in this article we are going to be covering how to test POST operations; in other words how to test create.</p>

<p class="subtitle">Perhaps it is time we start POSTing stuff</p>]]></description><link>https://fernandodevega.com/en/2015/12/19/functional-testing-a-sailsjs-api-with-wolfpack-part-3/</link><guid isPermaLink="false">59eb0671-b39f-47ab-adb5-6ab68bf9ebb3</guid><category><![CDATA[javascript]]></category><category><![CDATA[sails]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[wolfpack]]></category><category><![CDATA[testing]]></category><dc:creator><![CDATA[Fernando De Vega]]></dc:creator><pubDate>Sat, 19 Dec 2015 21:15:53 GMT</pubDate><media:content url="https://fernandodevega.com/en/content/images/2015/12/eBJIgrh3TCeHf7unLQ5e_sailing-5.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://fernandodevega.com/en/content/images/2015/12/eBJIgrh3TCeHf7unLQ5e_sailing-5.jpg" alt="Functional Testing a SailsJS API with Wolfpack: Part 3"><p>In the previous article we saw how to test <code>GET</code> transactions in our API. There's still a lot to cover so in this article we are going to be covering how to test POST operations; in other words how to test create.</p>

<p class="subtitle">Perhaps it is time we start POSTing stuff to our app.</p>

<h2 id="testingpost">Testing POST</h2>

<p>Picking up from the groundwork laid in the previous articles, let's move straight into writing our expectations for creating a "post".</p>

<p>First, to keep things simple, we are not doing any validation on the Posts model. Right now you can create empty posts if you like; all we care about in this article is that you send the proper captcha.</p>

<blockquote>
  <p>In a future article we are going to do validations, but we are going to do it with custom errors and handlers, which is pretty cool actually.</p>
</blockquote>

<p>Back to the subject, if you send an invalid captcha, you should get a <code>400 Bad Request</code> with the message '<em>Invalid Captcha</em>'. If all is good, you should get a <code>201 Created</code>, with the recently created post. Let's write our tests then.</p>

<pre><code class="language-javascript">// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);

  // ... All our previous tests should be here ... //

  describe('POST /posts', function(){
    var stub;
    beforeEach(function(){
      clear();
      stub = sinon.stub(Captcha, 'verifyCaptcha');
      Posts.create.reset();
    });

    afterEach(function(){
      stub.restore();
    });

    it("should return 400 and Invalid captcha if captcha verification fails", function(done){
      stub.returns(false);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(function(res){
          // Verify post was not created
          if (Posts.create.called) { throw Error('Post should have not been created'); }
        })
        .expect(400, {message: 'Invalid captcha'}, done);
    });

    it("should return 201 and the created post", function(done){
      stub.returns(true);
      wolfpack.setCreateResults(fixtures.posts[0]);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(201, fixtures.posts[0], done);
    });
  });

});
</code></pre>

<p>Tests seem to increase in complexity compared to what we saw in the previous article, right? Go figure :P Anyway, let's analyze the setup.</p>

<pre><code class="language-javascript">  describe('POST /posts', function(){
    var stub;
    beforeEach(function(){
      clear();
      stub = sinon.stub(Captcha, 'verifyCaptcha');
      Posts.create.reset();
    });

    afterEach(function(){
      stub.restore();
    });

    // ... Our tests are here ... //
  });
</code></pre>

<p>Let's start with the stub. What is that? The captcha is going to be handled by a service we are going to create called <code>Captcha</code>. This service will have a method called <code>verifyCaptcha</code> that will do all the captcha processing and return either true or false, depending on whether verification succeeded. Obviously we need to test both scenarios, so we create a stub that stands in for <code>Captcha.verifyCaptcha</code>.</p>

<p>We also need to reset the call counter for <code>Posts.create</code>. Wolfpack kindly injects a Sinon spy into the <code>create</code> method so that we can know whether it was called, but that also means we must make sure the counter is reset to 0 before each test runs.</p>

<p>And let's not forget the <code>clear</code> call on top, which resets any results and errors we've previously set on wolfpack.</p>

<p>For the teardown of the test (the afterEach), we make sure that we restore the stubbed function to its original state.</p>
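<p>To see why the reset matters, here is a standalone sketch with a hand-rolled mini spy. Wolfpack wires a real Sinon spy into <code>Posts.create</code>; the <code>makeSpy</code> helper below is made up purely to illustrate how call counters leak between tests, not to mirror Sinon's API:</p>

<pre><code class="language-javascript">// A hand-rolled mini spy: illustrates the leak, not Sinon itself.
function makeSpy() {
  function spy() {
    spy.called = true;
    spy.callCount += 1;
  }
  spy.called = false;
  spy.callCount = 0;
  spy.reset = function () {
    spy.called = false;
    spy.callCount = 0;
  };
  return spy;
}

var create = makeSpy();
create();                 // "test 1" creates a post
// Without a reset, "test 2" would see create.called as true even though
// it never called create itself. Resetting restores a clean slate:
create.reset();
</code></pre>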

<p>Let's see the tests now.</p>

<pre><code class="language-javascript">    it("should return 400 and Invalid captcha if captcha verification fails", function(done){
      stub.returns(false);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(function(res){
          // Verify post was not created
          if (Posts.create.called) { throw Error('Post should have not been created'); }
        })
        .expect(400, {message: 'Invalid captcha'}, done);
    });
</code></pre>

<p>The first thing we do is set our stub for the captcha to return false. That's the <code>stub.returns(false)</code> call. After all, we want to test the case where the captcha fails, so the method should return false.</p>

<p>Next, we send our request to the server. We <code>POST</code> to <code>/posts</code> and then send the body of the request. Again, it is bad practice to send the fixture as the body. We should type out what we are sending via the <code>POST</code>, just as we did here, and if it changes in the future, update the spec accordingly.</p>

<p>Unlike expectations, which can live in a separate file, it is good practice to keep the body parameters right where the test is written. This gives better visibility into the conditions that make the test pass or fail.</p>

<p>Using the fixture both as the request body and as the expected result of an operation can bring problems in the future: any change to the fixture automatically changes the expectation, so a test that should fail because of the change keeps passing. In fact, in a little bit you are going to see why.</p>

<p>Moving on to the actual test expectations, the first expectation we set with Supertest was to check that if the captcha failed, no post should be created. For that, we are going to check that the spy wolfpack injected into the Posts model's <code>create</code> method was not called in the operation. If it is called, we raise the exception with our own message.</p>

<p>If <code>Posts.create</code> was not called, we can then check that we got the <code>400</code> HTTP status, and that we get a JSON with the message <em>Invalid captcha</em>.</p>

<p>Time to account for when the post is created successfully.</p>

<pre><code class="language-javascript">    it("should return 201 and the created post", function(done){
      stub.returns(true);
      wolfpack.setCreateResults(fixtures.posts[0]);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(201, fixtures.posts[0], done);
    });
</code></pre>

<p>For starters, we make our captcha stub return true; in other words, captcha verification passes.</p>

<p>Next, we set up the database results Sails will get when we create the post. We do that using the global <code>wolfpack.setCreateResults</code>, which sets the same result for all create operations. Since there is only one create operation, there's nothing to worry about here. In more complex scenarios with more than one create, we can use additional features of wolfpack that allow us to test each one.</p>

<p>Now we make the <code>POST</code> request to the server. This time there is nothing else we need to keep track of, like whether <code>Posts.create</code> was called, although we could. We could also make sure our stub is actually being called and that the <code>verifyCaptcha</code> operation indeed happened, but I'll leave those as homework for you. So right now we only test that we get the <code>201 Created</code> and the created post in return. Again, the way I did it here is the bad practice; do it the way I told you to.</p>
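<p>If you want a head start on that homework, the extra Supertest expectation might look something like this. The expectation function itself is plain JavaScript, so the sketch below exercises it with fake stub objects; in the real spec, <code>stub</code> is the Sinon stub from the setup, and <code>calledOnce</code> is a real Sinon property (<code>expectCaptchaChecked</code> is a helper name I made up):</p>

<pre><code class="language-javascript">// Hedged sketch of the homework: verify Captcha.verifyCaptcha was consulted.
function expectCaptchaChecked(stub) {
  return function (res) {
    if (!stub.calledOnce) {
      throw Error('verifyCaptcha should have been called exactly once');
    }
  };
}

// In the spec, you would chain it before the final expect:
//   request(server)
//     .post('/posts')
//     .send({ title: 'My First Novel', /* ... */ })
//     .expect(expectCaptchaChecked(stub))
//     .expect(201, fixtures.posts[0], done);

// Standalone demonstration with fake stubs:
expectCaptchaChecked({ calledOnce: true })({});   // passes silently
var threw = false;
try {
  expectCaptchaChecked({ calledOnce: false })({});
} catch (e) {
  threw = true;                                   // throws as expected
}
</code></pre>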

<p>Let's run the tests and see them fail. Great! They are red! Time to fix that.</p>

<p>First, let's create our <code>Captcha</code> service. Since this is just a demo, I won't really be implementing a Captcha. I'm just gonna create the service and method, and make it fail miserably.</p>

<p>Create a new service under <strong>api/services/Captcha.js</strong> and paste the following:</p>

<pre><code class="language-javascript">// api/services/Captcha.js
module.exports = {  
  verifyCaptcha: function() {
    throw Error('This error means you didn\'t stub the captcha correctly in your tests');
  }
};
</code></pre>

<p>Now let's write the code in our controller that will handle the route. The code below is the complete code that is contained within our controller (all route handlers included):</p>

<pre><code class="language-javascript">// api/controllers/PostsController.js
module.exports = {

  //.. All the GET stuff from the previous article here ..//

  createPost: function(req, res) {
    var body = {
      title: req.body.title,
      content: req.body.content,
      author: req.body.author
    };
    var valid = Captcha.verifyCaptcha(req.body.captcha);
    if (!valid) {
      return res.json(400, {message: 'Invalid captcha'});
    }

    return Posts.create(body).then(function(post){
      return res.json(201, post);
    }).catch(function(err){
      return res.json(500, err);
    });
  }

};
</code></pre>

<p>Did you notice the scenario I coded but we are not testing? Yes, the <code>500</code> scenario. We are going to test that afterwards; let's finish with the <code>create</code> first.</p>

<p>Now let's update our routes file. This is the complete route file with all routes included:</p>

<pre><code class="language-javascript">// config/routes.js
module.exports.routes = {

  'GET /posts': {
    controller: 'PostsController',
    action: 'getAllPosts'
  },

  'GET /posts/:id': {
    controller: 'PostsController',
    action: 'getPost'
  },

  'POST /posts': {
    controller: 'PostsController',
    action: 'createPost'
  }

};
</code></pre>

<p>Now let's run the tests. All green! Yay! Well, no. My dear friend, you just got a <strong>false positive</strong> for setting fixtures as results.</p>

<p>The problem lies in the setup you did (well, I actually did it, but you get the point). The <code>400</code> test for the captcha is good, but the <code>201</code> test for creating the post is a false positive.</p>

<p>Let's look at the setup of the test. We set the create results with Wolfpack:</p>

<pre><code class="language-javascript">wolfpack.setCreateResults(fixtures.posts[0]);  
</code></pre>

<p>Wolfpack's <code>setCreateResults</code> is a very powerful method that, if used incorrectly, can produce false positives. Now check your expectation:</p>

<pre><code class="language-javascript">.expect(201, fixtures.posts[0], done);
</code></pre>

<p>So basically we are telling the test to expect <code>fixtures.posts[0] === fixtures.posts[0]</code>. The <code>setCreateResults</code> method ignores whatever data you sent in the body. It simply returns the exact data you told it to return whenever a model's <code>create</code> is called.</p>

<p>If you do not provide a <code>setCreateResults</code>, wolfpack will return the data processed through all hooks, as if it had been stored in the db. Comment out the <code>wolfpack.setCreateResults(fixtures.posts[0])</code> line, run the tests again, and see the actual results.</p>

<p><img src="https://fernandodevega.com/en/content/images/2015/12/failing_tests.png" alt="Functional Testing a SailsJS API with Wolfpack: Part 3"></p>

<p>See? We had a false positive because we overrode the db results with the very same fixture we used as the expectation.</p>

<p>This tells us two things. First, our fixture is wrong: it does not represent the actual data as it is in the database, so we need to update it. Second, <strong>it is a terrible idea to use the fixture as an expectation</strong>.</p>
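<p>Stripped of all the Sails machinery, the false positive boils down to something like this standalone sketch, where <code>fakeCreate</code> stands in for the mocked <code>Posts.create</code>:</p>

<pre><code class="language-javascript">// When the same fixture feeds both the mocked result and the expectation,
// the comparison can never fail, no matter what the fixture contains.
var fixture = { id: 1, title: 'Anything at all' };

function fakeCreate() {
  return fixture;           // the "database" returns the fixture...
}

var actual = fakeCreate();
var expected = fixture;     // ...and we expect the fixture

// actual always deep-equals expected, even if the real app would have
// returned extra fields like createdAt and updatedAt.
var alwaysPasses = JSON.stringify(actual) === JSON.stringify(expected);
</code></pre>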

<p>First let's update our fixture with the two missing fields from the db: <code>createdAt</code> and <code>updatedAt</code>. Our posts fixture will now look like this:</p>

<pre><code class="language-javascript">// spec/fixtures/posts.js
module.exports = [  
  {
    id: 1,
    title: 'My First Novel',
    content: 'This is my novel. The end.',
    author: 'William Shakespeare',
    createdAt: '2015-11-01T12:00:00.001Z',
    updatedAt: '2015-11-01T12:00:00.001Z'
  },
  {
    id: 2,
    title: 'My first poem',
    content: 'Where is the question mark!"#$%&amp;/()=?',
    author: 'Sappho',
    createdAt: '2015-11-01T12:00:00.002Z',
    updatedAt: '2015-11-01T12:00:00.002Z'
  }
];
</code></pre>

<p>By the way, the <code>createdAt</code> and <code>updatedAt</code> are just random dates I picked to make writing the expectation easier. </p>

<p>We also need to update the counters fixtures because they also have <code>createdAt</code> and <code>updatedAt</code> values in the database:</p>

<pre><code class="language-javascript">// spec/fixtures/counters.js
module.exports = [  
  {
    id: 1,
    postId: 1,
    count: 3,
    createdAt: '2015-11-01T12:00:00.001Z',
    updatedAt: '2015-11-01T12:00:00.001Z'
  },
  {
    id: 2,
    postId: 2,
    count: 1,
    createdAt: '2015-11-01T12:00:00.002Z',
    updatedAt: '2015-11-01T12:00:00.002Z'
  }
];
</code></pre>

<p>Now we need to update our expectations and write them properly. It is good practice to put the expectation in a separate file, but for this article, I'll just put it within the test itself:</p>

<pre><code class="language-javascript">    it("should return 201 and the created post", function(done){
      stub.returns(true);
      wolfpack.setCreateResults(fixtures.posts[0]);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(201, {
          id: 1,
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare',
          createdAt: '2015-11-01T12:00:00.001Z',
          updatedAt: '2015-11-01T12:00:00.001Z'
        }, done);
    });
</code></pre>
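<p>For reference, the separate-file approach I mentioned could look something like this; the file name and export shape are my own choices for illustration, not a convention from the project:</p>

<pre><code class="language-javascript">// spec/expectations/posts.js (hypothetical file)
module.exports = {
  createdPost: {
    id: 1,
    title: 'My First Novel',
    content: 'This is my novel. The end.',
    author: 'William Shakespeare',
    createdAt: '2015-11-01T12:00:00.001Z',
    updatedAt: '2015-11-01T12:00:00.001Z'
  }
};

// In the spec you would then require it and write:
//   .expect(201, expectations.createdPost, done);
</code></pre>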

<p>Finally, let's run the tests one more time and see them turn green.</p>

<pre><code>grunt test  
</code></pre>

<p>Red? What happened? Of course! Since the fixture changed, our expectation is different. We are now getting <code>updatedAt</code> and <code>createdAt</code> in our <code>GET /posts/:id</code> operation, which is good because it means our tests caught a breaking change.</p>

<p>Go ahead and update that expectation (I'll leave it up to you) and run the tests and watch them turn green.</p>

<p><img src="https://fernandodevega.com/en/content/images/2015/12/allgreen.png" alt="Functional Testing a SailsJS API with Wolfpack: Part 3"></p>

<p>There you go! Everything is green, including our previous tests! Yeah, I'm not gonna let you go that easily. Did you update the expectation in <code>GET /posts</code>? No? Well, that's because you made the expectation the same as the fixture. The moment you changed the fixture, that test should have broken too, but it didn't, which means a false positive. Go ahead and fix it so the expectation is not the fixture.</p>

<p>I hope this has been enough to show you why it is very important to have an expectation separate from a fixture. It can save you in the future. It should also have shown you the importance of knowing your data. Wolfpack is very powerful, and if used incorrectly, it can give you false results.</p>

<p>Now, onto testing the error condition.</p>

<p>If you recall, we have a special handler for <code>500</code> errors in our controller:</p>

<pre><code class="language-javascript">    //.. some code above for our captcha stuff ../
    return Posts.create(body).then(function(post){
      return res.json(201, post);
    }).catch(function(err){
      return res.json(500, err);
    });
</code></pre>

<p>So how can we test an Internal Server Error condition? Fortunately, wolfpack provides a useful method for mocking backend errors called <code>setErrors</code>.</p>

<p>First let's write our new expectation in our tests. We already have the code, so we only need the expectation, which is basically that any unknown error returns an error message in a specific format. Let's write it then:</p>

<pre><code class="language-javascript">// spec/api/functional/PostsSpec.js
  describe('POST /posts', function(){
    var stub;
    beforeEach(function(){
      clear();
      stub = sinon.stub(Captcha, 'verifyCaptcha');
      Posts.create.reset();
    });

    afterEach(function(){
      stub.restore();
    });

    // .. all our previous POST tests here ..//

    it("should return 500 and the error message if there is an unknown error", function(done){
      stub.returns(true);
      var errorMessage = 'Some error happened';
      wolfpack.setErrors(errorMessage);

      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(500, {error: errorMessage}, done);
    });

  });
</code></pre>

<p>For starters, the error happens in <code>Posts.create</code>. That means we need to get past the captcha validation first, so we set the captcha stub to return true.</p>

<p>Next, we tell wolfpack to return an error for any operation performed. The <code>setErrors</code> method overrides every CRUD operation and immediately returns an error whenever one is called.</p>

<p>Now we perform our call to the server with our expectation. Run it and you will see it fail because, even though we are getting a 500 error, the message is not in the proper format. We need to update it.</p>

<p>So, go back to the <strong>PostsController.js</strong>, and in the error handler for the create, update the code to the proper format:</p>

<pre><code class="language-javascript">// api/controllers/PostsController.js
module.exports = {

  //.. All the GET stuff from the previous article here ..//

  createPost: function(req, res) {
    var body = {
      title: req.body.title,
      content: req.body.content,
      author: req.body.author
    };
    var valid = Captcha.verifyCaptcha(req.body.captcha);
    if (!valid) {
      return res.json(400, {message: 'Invalid captcha'});
    }

    return Posts.create(body).then(function(post){
      return res.json(201, post);
    }).catch(function(err){
      return res.json(500, {error: err.originalError});
    });
  }

};
</code></pre>

<p>Now we run our tests again and everything is green. Great! We've achieved better coverage and our API is fully tested, even for when errors happen.</p>

<p>Just for review, this is what the tests for our POST endpoint look like now:</p>

<pre><code class="language-javascript">  describe('POST /posts', function(){
    var stub;
    beforeEach(function(){
      clear();
      stub = sinon.stub(Captcha, 'verifyCaptcha');
      Posts.create.reset();
    });

    afterEach(function(){
      stub.restore();
    });

    it("should return 400 and Invalid captcha if captcha verification fails", function(done){
      stub.returns(false);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(function(res){
          // Verify post was not created
          if (Posts.create.called) { throw Error('Post should have not been created'); }
        })
        .expect(400, {message: 'Invalid captcha'}, done);
    });

    it("should return 201 and the created post", function(done){
      stub.returns(true);
      wolfpack.setCreateResults(fixtures.posts[0]);
      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(201, {
          id: 1,
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare',
          createdAt: '2015-11-01T12:00:00.001Z',
          updatedAt: '2015-11-01T12:00:00.001Z'
        }, done);
    });

    it("should return 500 and the error message if there is an unknown error", function(done){
      stub.returns(true);
      var errorMessage = 'Some error happened';
      wolfpack.setErrors(errorMessage);

      request(server)
        .post('/posts')
        .send({
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        })
        .expect(500, {error: errorMessage}, done);
    });
  });
</code></pre>

<h2 id="whatsnext">What's next?</h2>

<p>We've finished our tests for <code>GET</code> and <code>POST</code> transactions. Still, there are two more common verbs we need to cover: <code>PUT</code> and <code>DELETE</code>. That's what we are going to be seeing in the next article. And after that? Sails Sessions and Policies :)</p>

<p>And remember, the project for this article is available at: <br>
<a href="https://github.com/fdvj/sails-testing-demo">https://github.com/fdvj/sails-testing-demo</a></p>]]></content:encoded></item><item><title><![CDATA[Functional Testing a SailsJS API with Wolfpack: Part 2]]></title><description><![CDATA[<p>In the previous article we saw a boilerplate setup for functional testing a SailsJS API. In this article, we are going to use that boilerplate to start testing our app.</p>

<p class="subtitle">Time to GET dirty and start testing.</p>

<h2 id="testinggettransactions">Testing GET transactions</h2>

<p>Now that we've setup our test environment, we can proceed</p>]]></description><link>https://fernandodevega.com/en/2015/12/19/functional-testing-a-sailsjs-api-with-wolfpack-part-2/</link><guid isPermaLink="false">4abad705-f41e-4dfd-aa5b-7d69b0152eea</guid><category><![CDATA[javascript]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[sails]]></category><category><![CDATA[wolfpack]]></category><category><![CDATA[testing]]></category><dc:creator><![CDATA[Fernando De Vega]]></dc:creator><pubDate>Sat, 19 Dec 2015 20:39:22 GMT</pubDate><media:content url="https://fernandodevega.com/en/content/images/2015/12/photo-1421217668576-c17a9a841471.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://fernandodevega.com/en/content/images/2015/12/photo-1421217668576-c17a9a841471.jpg" alt="Functional Testing a SailsJS API with Wolfpack: Part 2"><p>In the previous article we saw a boilerplate setup for functional testing a SailsJS API. In this article, we are going to use that boilerplate to start testing our app.</p>

<p class="subtitle">Time to GET dirty and start testing.</p>

<h2 id="testinggettransactions">Testing GET transactions</h2>

<p>Now that we've setup our test environment, we can proceed and start writing our functional tests for the API. So I'm going to go TDD here (Test Driven Development), and start writing my tests before some of the code.</p>

<p>Remember you can download all the code we are discussing here at <a href="https://github.com/fdvj/sails-testing-demo">https://github.com/fdvj/sails-testing-demo</a>. Just clone the repository and you will be good to go.</p>

<p>With the directory structure I've already defined and explained briefly in the <a href="https://fernandodevega.com/en/2015/12/19/functional-testing-a-sailsjs-api-with-wolfpack-part-1/">previous post</a>, I'm going to start writing my tests. As you may remember, we created a <strong>spec/api/</strong> directory where we are going to put all our tests. Personally, I like to order it even a bit more, and separate my functional tests from my unit tests. So, in the <strong>spec/api/</strong> directory, create two new directories: <strong>functional</strong> and <strong>unit</strong>. For this article, we are going to be writing our tests in <strong>spec/api/functional</strong>.</p>

<p>For this article, my application will be composed of only two endpoints which I need to test:</p>

<ul>
<li><code>GET /posts</code></li>
<li><code>GET /posts/:id</code></li>
</ul>

<p>In the following articles we will be covering the other common HTTP Verbs (POST, PUT and DELETE).</p>

<p>I've disabled Sails blueprints so that these routes won't get created automatically by Sails.</p>

<p>To keep things simple, the <code>GET /posts</code> should return me a <code>200 OK</code> with an array of posts in the database. <code>GET /posts/:id</code> should return me a <code>200 OK</code> with the given post and update a visit counter; if the post does not exist, it should return a <code>404</code> error. </p>

<p>Let's start by creating our test in <strong>spec/api/functional/PostsSpec.js</strong>. In there we will only test the Posts endpoint.</p>

<pre><code class="language-javascript">// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

});
</code></pre>

<p>The first endpoint we are going to test is the <code>GET /posts</code> endpoint. But first, we need to set up our tests. One important thing we need is to have the "database" (in this case wolfpack acting as the db) reset every time a test runs.</p>

<p>Resetting matters because it ensures we are accurately testing the conditions of the current test and prevents data from other tests from giving us a false positive. To do so, we create a small helper function to reset wolfpack:</p>

<pre><code class="language-javascript">// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);
});
</code></pre>

<p>Before continuing with our testing, I want to set up our models so that I know what the fixtures will look like when I create them. So far our app needs two models: a <strong>Posts</strong> model, which will hold our posts data, and a <strong>Counters</strong> model, which will keep track of how many times a post is viewed.</p>

<p>Our Posts model:</p>

<pre><code class="language-javascript">// api/models/Posts.js
module.exports = {

  attributes: {
    title: 'string',
    content: 'string',
    author: 'string'
  }
};
</code></pre>

<p>And our Counters model:</p>

<pre><code class="language-javascript">// api/models/Counters.js
module.exports = {

  attributes: {
    postId: 'integer',
    count: 'integer',
  }
};
</code></pre>

<p>Now that I have a better idea of how my data will look, I can create my fixtures. In <strong>spec/fixtures/</strong> create a new file called <strong>posts.js</strong> and paste the following:</p>

<pre><code class="language-javascript">// spec/fixtures/posts.js
module.exports = [  
  {
    id: 1,
    title: 'My First Novel',
    content: 'This is my novel. The end.',
    author: 'William Shakespeare'
  },
  {
    id: 2,
    title: 'My first poem',
    content: 'Where is the question mark!"#$%&amp;/()=?',
    author: 'Sappho'
  }
];
</code></pre>

<p>And now for our Counters fixture, create a new file in <strong>spec/fixtures/</strong> called <strong>counters.js</strong> and paste the following:  </p>

<pre><code class="language-javascript">// spec/fixtures/counters.js
module.exports = [  
  {
    id: 1,
    postId: 1,
    count: 3
  },
  {
    id: 2,
    postId: 2,
    count: 1
  }
];
</code></pre>

<p>Fixtures are the data that wolfpack will return from the "database". Using fixtures is not required; if you like, you can use factories for testing instead, but for simplicity, in these articles we are going to stick with fixtures.</p>
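<p>If you are curious about the factory alternative, a minimal post factory could look like this; <code>buildPost</code> and its defaults are made up for illustration:</p>

<pre><code class="language-javascript">// A tiny factory: returns a fresh post object each time, with optional
// per-test overrides, instead of sharing one fixture array everywhere.
function buildPost(overrides) {
  var post = {
    id: 1,
    title: 'My First Novel',
    content: 'This is my novel. The end.',
    author: 'William Shakespeare'
  };
  overrides = overrides || {};
  Object.keys(overrides).forEach(function (key) {
    post[key] = overrides[key];
  });
  return post;
}

var poem = buildPost({ id: 2, title: 'My first poem', author: 'Sappho' });
</code></pre>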

<p>Great! We have our fixtures set up. Now we can do some actual testing and coding!</p>

<p>Our only expectation for <code>GET /posts</code> is for it to return a <code>200 OK</code> status and an array of posts. So let's write that expectation in <strong>spec/api/functional/PostsSpec.js</strong>:</p>

<pre><code class="language-javascript">// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);

  describe('GET /posts', function(){

    it("should return an array of posts", function(done){
      wolfpack.setFindResults(fixtures.posts);

      request(server)
        .get('/posts')
        .expect(200, fixtures.posts, done);
    });

  });

});
</code></pre>

<p>Let's back up a little and see what's going on here. The first thing I do is set up Wolfpack to return the array of posts from our fixture for every find operation, with <code>wolfpack.setFindResults(fixtures.posts)</code>. Whatever we look for, Wolfpack will return that array. With wolfpack you can set multiple results depending on certain criteria, but for simplicity, let's just set one result for everything, since we only have one find operation.</p>

<p>Next, I tell Supertest to make a call on the server. Remember when we set supertest to <code>global.request</code> and <code>sails.hooks.http.app</code> to <code>global.server</code>? Well, this is where we use them: <code>request(server)</code>.</p>

<p>On the same object I tell supertest to connect to the server and make a GET request to <code>/posts</code>: <code>request(server).get('/posts')</code>. </p>

<p>Next, I'm telling supertest to expect a <code>200</code> code in return, as well as the same array of fixtures I gave wolfpack, which in theory are all our posts in the database. Finally, I pass the <code>done</code> callback, which gets called when supertest gets the answer, whether it's a success or a failure: <code>request(server).get('/posts').expect(200, fixtures.posts, done)</code>.</p>

<p>It is important to treat all supertest tests as <strong>asynchronous tests</strong>. These are asynchronous calls to the API; if you don't treat them as asynchronous operations, you can get random results in your tests.</p>
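<p>The asynchronous contract boils down to signalling the runner from inside the callback. In this standalone sketch, <code>fakeRequest</code> and <code>done</code> are stand-ins for supertest and the test runner's callback, and the "request" calls back synchronously so the sketch stays runnable; a real HTTP request completes later, which is exactly why a test that forgets <code>done</code> can return before its assertions ever run:</p>

<pre><code class="language-javascript">// Supertest's .expect(status, body, done) wires this pattern up for you.
function fakeRequest(callback) {
  callback({ status: 404 });
}

var failure = null;
function done(err) {
  failure = err || null;   // the runner records success or failure
}

fakeRequest(function (res) {
  if (res.status !== 200) {
    return done(Error('expected 200, got ' + res.status));
  }
  done();
});
</code></pre>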

<p>Before moving on, I just want to note that it is bad practice to make the expectation identical to the input of the db; in other words, using the same fixture as the expectation. Tests should be written so that if something changes in a way that could affect your application, you find out when the test breaks. If you change the fixture, your expectation automatically changes with it, so a breaking change introduced into your database or codebase could go unnoticed.</p>

<p>The proper way of testing this would be like this:</p>

<pre><code class="language-javascript">describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);

  describe('GET /posts', function(){

    it("should return an array of posts", function(done){
      wolfpack.setFindResults(fixtures.posts);

      request(server)
        .get('/posts')
        .expect(
            200, 
            [{
                id: 1,
                title: 'My First Novel',
                content: 'This is my novel. The end.',
                author: 'William Shakespeare'
              },
              {
                id: 2,
                title: 'My first poem',
                content: 'Where is the question mark!"#$%&amp;/()=?',
                author: 'Sappho'
             }],
            done);
    });

  });

});
</code></pre>

<p>Hard-coding the expectation within the test is also <strong>not</strong> a good practice, but at least it is better. It is best to keep the expectation in a separate file (it also serves as good documentation). I put it here because I want to keep the number of lines and files in the article to a minimum and keep it as clear as possible. So, for the sake of your scroll wheel (and mine), to keep this post as small and readable as possible, I will keep using the fixtures (the bad practice) in some of the tests.</p>

<p>Now run your tests in your console:</p>

<pre><code>grunt test  
</code></pre>

<p>Great, it failed! Obviously! You haven't written any code to make your expectation happen. Let's make our tests pass then.</p>

<p>First things first, let's create our controller and the code we will use for the <code>GET /posts</code> route. We will call it <strong>PostsController.js</strong>:</p>

<pre><code class="language-javascript">// api/controllers/PostsController.js
module.exports = {

  getAllPosts: function(req, res) {
    Posts.find().then(function(posts){
      return res.json(200, posts);
    });
  }    
};
</code></pre>

<p>Now let's add our newly created route to our routes file:</p>

<pre><code class="language-javascript">// config/routes.js
module.exports.routes = {

  'GET /posts': {
    controller: 'PostsController',
    action: 'getAllPosts'
  }

};
</code></pre>

<p>Good. Now let's run the tests again. Type <code>grunt test</code> in your console and hit enter. Let's see the results.</p>

<p>Congratulations! You've just passed your first functional test, TDD style! Now let's move on to something more complicated, the <code>GET /posts/:id</code> endpoint.</p>

<p>The <code>GET /posts/:id</code> endpoint has two separate tests we need to run. First, we need to test that when the post does not exist, the endpoint returns a 404. Second, if the post exists, it should return a 200 along with the post itself, and it should fire a trigger, invisible to the user, that updates the visit counters. Let's see how we can do this.</p>

<p>In the same <strong>spec/api/functional/PostsSpec.js</strong> we are working on, let's add our new tests:</p>

<pre><code class="language-javascript">// spec/api/functional/PostsSpec.js
describe('Posts endpoint', function(){

  function clear() {
    wolfpack.clearResults();
    wolfpack.clearErrors();
  }

  beforeEach(clear);


  // ... The GET /posts tests go here  ...

  describe('GET /posts/:id', function(){
    beforeEach(function(){
      clear();
      Counters.increase.reset();
    });

    it("should return 404 if the post is not found", function(done){
      request(server)
        .get('/posts/1')
        .expect(function(){
          // Make sure the counter is not increased
          if (Counters.increase.called) { throw Error('Counter should have not increased'); }
        })
        .expect(404, done);
    });

    it("should return 200, the post, and increase the visited counter", function(done){
      Posts.setFindResults({id: fixtures.posts[0].id}, fixtures.posts[0]);
      Counters.setFindResults({postId: fixtures.posts[0].id}, fixtures.counters[0]);
      request(server)
        .get('/posts/1')
        .expect(function(res){
          // Verify that the correct counter was increased
          if (!Counters.increase.calledWith(1)) { throw Error('Counter for post should have increased'); }
        })
        .expect(200, {
          id: 1,
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        }, done);
    });

  });

});
</code></pre>

<p>Woah! Lots of stuff going on there. What's happening?</p>

<p>Let's start with the setup in the beforeEach. We run the clear function we set up at the beginning to reset wolfpack. Now comes the tricky part.</p>

<p>If you read Wolfpack's <a href="https://github.com/fdvj/wolfpack/blob/master/README.md">documentation</a>, you will see that Wolfpack spies on every method in a Sails model. But wait, Counters is a model we created, and we haven't defined any class methods yet! No, we haven't, but I'm expecting to write a method called <code>increase</code> that will increase the counter for a given post when I call it. Once I write it, Wolfpack will inject Sinon spies into it, which will let me know if it was ever called. The problem is that if I don't reset Sinon's call counter after each test, I may get a false positive. So that's why I'm resetting the call counter for a method I haven't written yet, but am expecting to write pretty soon.</p>
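<p>If the spy mechanics feel opaque, here is a stripped-down model of what a Sinon spy tracks. This is plain JavaScript for illustration only, not Sinon's real implementation:</p>

```javascript
// Minimal stand-in for a Sinon spy: it records every call so
// .called and .calledWith() can be checked later, and .reset()
// clears the history, just like the reset in our beforeEach.
function makeSpy(fn) {
  function spy() {
    spy.calls.push(Array.prototype.slice.call(arguments));
    return fn.apply(this, arguments);
  }
  spy.calls = [];
  Object.defineProperty(spy, 'called', {
    get: function () { return spy.calls.length !== 0; }
  });
  spy.calledWith = function (arg) {
    return spy.calls.some(function (call) { return call[0] === arg; });
  };
  spy.reset = function () { spy.calls = []; };
  return spy;
}

var increase = makeSpy(function (postId) { return postId; });
increase(1);
console.log(increase.called);        // true
console.log(increase.calledWith(1)); // true
increase.reset();
console.log(increase.called);        // false
```

<p>Without the <code>reset()</code>, a call recorded in one test would still be visible in the next, which is exactly the false positive described above.</p>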

<p>Moving on, let's examine this part a little:</p>

<pre><code class="language-javascript">    it("should return 404 if the post is not found", function(done){
      request(server)
        .get('/posts/1')
        .expect(function(){
          // Make sure the counter is not increased
          if (Counters.increase.called) { throw Error('Counter should have not increased'); }
        })
        .expect(404, done);
    });
</code></pre>

<p>What's going on here? Well, for starters, <a href="https://github.com/visionmedia/supertest/blob/master/Readme.md">Supertest</a> allows us to chain multiple expectations for a request. I have two expectations in my test.</p>

<p>First of all, if the post does not exist, it obviously cannot increase the counter, so I've effectively embedded a small unit test here with this <code>expect</code> (Supertest allows an expectation to be a function). If the model's <code>Counters.increase</code> method is called, my test fails because it shouldn't have been.</p>

<p>To cover that scenario, I throw an exception with my own verbose message explaining why the test failed. Supertest will catch the exception and fail the test. In the results, I will then see my verbose message about why the test failed; in this case, <em>Counter should have not increased</em>.</p>
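<p>If you're curious how that works, the mechanism can be modeled in a few lines of plain JavaScript. This is a simplified sketch, not Supertest's actual source:</p>

```javascript
// Simplified model of Supertest's function expectations: each one
// runs in order, and the first thrown Error becomes the failure
// message reported for the test.
function runExpectations(expectations, res) {
  var failure = null;
  expectations.forEach(function (expectation) {
    if (failure) { return; }
    try {
      expectation(res);
    } catch (err) {
      failure = err.message;
    }
  });
  return failure; // null when every expectation passed
}

// A spy that was (wrongly) called, so the expectation throws
var increaseSpy = { called: true };
var failure = runExpectations([
  function () {
    if (increaseSpy.called) { throw Error('Counter should have not increased'); }
  }
], {});

console.log(failure); // "Counter should have not increased"
```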

<p>The next expectation is a <code>404</code>. Since we are resetting our Wolfpack database every time a test runs, the Posts table/collection will be empty, so our code in the controller should be able to handle this and interpret it as a <code>404</code>.</p>

<p>Now onto the next part of the tests:</p>

<pre><code class="language-javascript">    it("should return 200, the post, and increase the visited counter", function(done){
      Posts.setFindResults({id: fixtures.posts[0].id}, fixtures.posts[0]);
      Counters.setFindResults({postId: fixtures.posts[0].id}, fixtures.counters[0]);
      request(server)
        .get('/posts/1')
        .expect(function(res){
          // Verify that the correct counter was increased
          if (!Counters.increase.calledWith(1)) { throw Error('Counter for post should have increased'); }
        })
        .expect(200, {
          id: 1,
          title: 'My First Novel',
          content: 'This is my novel. The end.',
          author: 'William Shakespeare'
        }, done);
    });
</code></pre>

<p>We are now testing for when the post exists. First, we set up wolfpack so that it returns a post from the fixtures.</p>

<p>Wait! I don't see us using Wolfpack. We are calling a method <code>setFindResults</code> on the model! Actually, Wolfpack injected that method into the model. It is a query method that lets you tell Wolfpack which results it should provide when a given query is performed.</p>

<p>So the method goes like this:</p>

<pre><code class="language-javascript">// Format
Model.setFindResults(condition, results)

// Example
Model.setFindResults({name: 'John'}, {id: 2, name: 'John'})

Model.findOne({name: 'John'}).then(function(user){  
  // user will be {id: 2, name: 'John'}
});
</code></pre>

<p>You can read Wolfpack's <a href="https://github.com/fdvj/wolfpack">documentation</a> for more details on query methods.</p>

<p>Back to our code: when the model asks for the post with an id of 1 (which is the id of the first fixture), Wolfpack will return the data of the first fixture to Sails.</p>

<p>The same happens with the Counters model. Most likely it will look for the entry that keeps track of the counter, in which case we will return the first fixture for the counters.</p>

<p>On the expectations side, first we make the call <code>GET /posts/1</code>, which we expect to call the <code>Counters.increase</code> method. Here we are also doing a small unit test, since we are using Sinon's <code>calledWith</code> method to verify that <code>Counters.increase</code> was passed a 1, the <code>postId</code>, as its argument. In other words, somewhere in the endpoint the app should call <code>Counters.increase(1)</code>. If that does not happen, the test fails with the exception.</p>

<p>Finally, I'm expecting a <code>200</code> HTTP status, along with an object identical to what we have as the first post fixture.</p>

<p>Don't forget to pass the <code>done</code> callback at the end so that the test doesn't timeout.</p>

<p>If we now run the tests, they should fail. Good, let's make them green.</p>

<p>First, let's write the methods in our Counters model that increase the count:</p>

<pre><code class="language-javascript">// api/models/Counters.js
module.exports = {

  attributes: {
    postId: 'integer',
    count: 'integer',

    increase: function() {
      this.count++;
      return this;
    }
  },

  increase: function(postId) {
    return this.findOne({postId: postId}).then(function(counter){
      return counter.increase().save();
    });
  }
};
</code></pre>

<p>As you can see, our model looks for the entry of the post, then increases the count and saves it. Good! Just like we planned with Wolfpack. Now let's write the code in our controller that will handle this route:</p>

<pre><code class="language-javascript">// api/controllers/PostsController.js
module.exports = {

  getAllPosts: function(req, res) {
    Posts.find().then(function(posts){
      return res.json(200, posts);
    });
  },

  // This getPost handles the GET /posts/:id route
  getPost: function(req, res) {
    Posts.findOne({id: req.params.id}).then(function(post){
      if (!post) {
        return res.send(404);
      }
      Counters.increase(post.id);
      return res.json(200, post);
    });
  }

};
</code></pre>

<p>Just a small review of the code here. First, we find the post with the id given in the URL (<code>req.params.id</code>). If it does not exist, we return a <code>404</code>. Otherwise, we increase the counter and return a <code>200</code> plus the post. Basically what we are expecting.</p>
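<p>If you want to see that flow in isolation, here is a plain-JavaScript simulation of both scenarios using in-memory stand-ins for the models and the response object (illustration only, not Sails code):</p>

```javascript
// In-memory stand-ins simulating the two scenarios our tests cover
var posts = [{ id: 1, title: 'My First Novel' }];
var counters = { 1: 0 };

var Posts = {
  findOne: function (query) {
    return Promise.resolve(posts.filter(function (p) {
      return p.id === query.id;
    })[0]);
  }
};

var Counters = {
  increase: function (postId) { counters[postId]++; }
};

// Same logic as getPost, with req/res reduced to plain objects
function getPost(req, res) {
  return Posts.findOne({ id: req.params.id }).then(function (post) {
    if (!post) {
      return res.send(404);
    }
    Counters.increase(post.id);
    return res.json(200, post);
  });
}

var res = {
  send: function (status) { console.log(status); },
  json: function (status, body) { console.log(status, body.title); }
};

getPost({ params: { id: 99 } }, res)            // prints: 404
  .then(function () {
    return getPost({ params: { id: 1 } }, res); // prints: 200 My First Novel
  })
  .then(function () {
    console.log(counters[1]); // the counter increased exactly once
  });
```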

<p>Let's build the route then.</p>

<pre><code class="language-javascript">// config/routes.js
module.exports.routes = {

  'GET /posts': {
    controller: 'PostsController',
    action: 'getAllPosts'
  },

  'GET /posts/:id': {
    controller: 'PostsController',
    action: 'getPost'
  }

};
</code></pre>

<p>Great! Our shiny new route is ready to be tested. Let's run those tests and watch them turn green.</p>

<pre><code>grunt test  
</code></pre>

<p>Beautiful! Our tests all return green. Now when we refactor or add a new feature and break something, we'll know for sure.</p>

<h2 id="whatsnext">What's next?</h2>

<p>In this article we saw how we can test GET transactions in our API. There are still three other verbs we need to cover: <code>POST</code>, <code>PUT</code> and <code>DELETE</code>. In the next article we are going to cover how to test <code>POST</code> transactions in your API. Let's move on then.</p>

<p><a href="https://fernandodevega.com/en/2015/12/19/functional-testing-a-sailsjs-api-with-wolfpack-part-3/">Functional Testing a SailsJS API with Wolfpack: Part 3</a></p>]]></content:encoded></item><item><title><![CDATA[Functional Testing a SailsJS API with Wolfpack: Part 1]]></title><description><![CDATA[<p><a href="http://sailsjs.org/">SailsJS</a> is a popular MVC framework for <a href="https://nodejs.org/en/">NodeJS</a>. It is widely used due to its ease of use. However, the official Sails documentation on testing is pretty basic and ambiguous, so a lot of people are left to wonder how to properly test a Sails app.</p>

<p class="subtitle">Learn how to functionally</p>]]></description><link>https://fernandodevega.com/en/2015/12/19/functional-testing-a-sailsjs-api-with-wolfpack-part-1/</link><guid isPermaLink="false">57eea8ac-8dfe-4b14-873e-9f23b70aec9d</guid><category><![CDATA[javascript]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[sails]]></category><category><![CDATA[api]]></category><category><![CDATA[testing]]></category><category><![CDATA[wolfpack]]></category><dc:creator><![CDATA[Fernando De Vega]]></dc:creator><pubDate>Sat, 19 Dec 2015 19:50:15 GMT</pubDate><media:content url="https://fernandodevega.com/en/content/images/2015/12/photo-1413834932717-29e7d4714192--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://fernandodevega.com/en/content/images/2015/12/photo-1413834932717-29e7d4714192--1-.jpg" alt="Functional Testing a SailsJS API with Wolfpack: Part 1"><p><a href="http://sailsjs.org/">SailsJS</a> is a popular MVC framework for <a href="https://nodejs.org/en/">NodeJS</a>. It is widely used due to its ease of use. However, the official Sails documentation on testing is pretty basic and ambiguous, so a lot of people is left to wonder how to properly test a Sails app.</p>

<p class="subtitle">Learn how to functionally test your SailsJS API using Wolfpack and Supertest.</p>

<p>In this first part of the series, we are going to see the basic setup I use to functional test an API with <a href="https://github.com/fdvj/wolfpack">Wolfpack</a>.</p>

<p>I've uploaded the boilerplate and article samples I'm going to present below to my GitHub. You can check out the repo <a href="https://github.com/fdvj/sails-testing-demo">here</a>.</p>

<h2 id="asmallintro">A small intro</h2>

<p>When I started working with Sails, I found myself in the situation that I wasn't completely sure on how to test a Sails application. In the simplest terms, the documentation said to lift the Sails application and do the testing with my favorite framework.</p>

<p>I didn't like testing against a database. There was too much work involved. For GET transactions, I had to make sure the database had the proper entries to run the tests well. I also had to make sure my teardown was done correctly; otherwise I could get false positives, or my db could grow really big. I wasn't a fan of that idea.</p>

<p>Looking around, I couldn't find any good docs or plugins that would let me avoid the hassle of testing my models against a db, so I decided to write my own. And thus <a href="https://github.com/fdvj/wolfpack">Wolfpack</a> was created.</p>

<h2 id="whatiswolfpack">What is Wolfpack?</h2>

<p><a href="https://github.com/fdvj/wolfpack">Wolfpack</a> is a database driver (or adapter, as they are really called in Sails) like any other. The major difference is that Wolfpack does not persist the data; everything is stored in variables which disappear once the tests are done.</p>

<p>Wolfpack is also a testing helper. It basically injects itself into a Sails (Waterline) model you provide and spies upon every method in that model. It allows you to mock results that will usually be sent by a database, as well as mock errors that may happen so you can also test error scenarios in realtime.</p>

<p>Overall, Wolfpack lets you mock the results coming from your models on a case-by-case basis, giving you a fair amount of control over how your tests run. Let's get to business.</p>

<h2 id="requirements">Requirements</h2>

<p>For this entire series, I'm assuming you know your way around a Sails project. I'm also assuming you've done test-driven development, or have at least written tests in the past using Mocha or Jasmine.</p>

<p>On the Sails side, I'm assuming you've disabled Sails default blueprints, which basically create "magic" routes and controllers for our models. If you don't disable blueprints, some of the steps described in this article may not perform as expected.</p>

<h2 id="settingupthetestenvironment">Setting up the test environment</h2>

<p>This first part is all about getting our environment ready for proper testing. Let's get to business then.</p>

<p>The following is my boilerplate test environment for a SailsJS application. I use <a href="https://mochajs.org/">Mocha</a>, <a href="http://chaijs.com/">Chai</a> and <a href="https://github.com/visionmedia/supertest">Supertest</a> for my tests. Chai is best suited for Unit testing a Sails application. For functional testing an API, I prefer to use Supertest as it allows me to create a session and easily test HTTP verbs against my API.</p>

<p>From the boilerplate below, you are going to notice I'm setting up stuff I will not be using in this article (coverage and assertion libraries for example), since I'm going to be doing only functional testing. However, you will notice there are some scenarios in which a unit test is better suited. My setup allows me to do both unit and functional testing with ease.</p>

<p>I'll start by assuming you've already set up your Sails project using the Sails command-line tool and that the directory structure is all in place. This is <strong>important</strong> because my setup uses some tasks already implemented by the Sails team.</p>

<p>First things first, let's make sure we have Grunt installed globally. Type in <code>grunt -v</code>. If you get an error, then install Grunt globally like this:</p>

<pre><code>npm install -g grunt-cli  
</code></pre>

<p>It is extremely important to have Grunt installed as the setup I'm using below uses Grunt for almost everything. If you prefer, you can use Gulp as well, but you'll need to tweak some stuff to get my setup working in Gulp.</p>

<p>Also, please note that Sails uses Grunt by default, so you may also need to change some tasks there if you prefer to use Gulp or another task runner.</p>

<p>Now that we have Grunt installed, let's install our dev dependencies:</p>

<pre><code>npm install --save-dev blanket chai grunt-blanket grunt-mocha-test mocha sinon supertest wolfpack  
</code></pre>

<p>There's an additional dependency I install, and that is <a href="https://lodash.com/">Lodash</a>. Lodash is a utility library like Underscore, but with many more methods and (arguably) better performance. You can use Underscore if you prefer, but bear in mind you'll need to change the lodash references to underscore in the code.</p>

<pre><code>npm install --save lodash  
</code></pre>

<p>Now here comes the boilerplate stuff. Let's start by modifying some of Sails' grunt tasks and creating our own tasks as well.</p>

<p>In your root application folder, open the <strong>tasks/config/clean.js</strong> file and add a new task called <strong>coverage</strong>. Right now we won't deal with code coverage, but in a future article we will, and since this is a boilerplate, let's leave it like that. Your clean.js file should look something like this:</p>

<pre><code class="language-javascript">// tasks/config/clean.js
module.exports = function(grunt) {

  grunt.config.set('clean', {
    dev: ['.tmp/public/**'],
    build: ['www'],
    coverage: {
      src: ['coverage/**']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-clean');
};
</code></pre>

<p>Next, create a new file in <strong>tasks/config/</strong> and call it <strong>blanket.js</strong>, and paste the following:</p>

<pre><code class="language-javascript">// tasks/config/blanket.js
module.exports = function(grunt) {

  grunt.config.set('blanket', {
    coverage: {
      src: ['api/'],
      dest: 'coverage/api/'
    }
  });

  grunt.loadNpmTasks('grunt-blanket');

};
</code></pre>

<p>Moving on, open <strong>tasks/config/copy.js</strong> and add a coverage task as well. The file should look like the one below:</p>

<pre><code class="language-javascript">// tasks/config/copy.js
module.exports = function(grunt) {

    grunt.config.set('copy', {
        dev: {
            files: [{
                expand: true,
                cwd: './assets',
                src: ['**/*.!(coffee|less)'],
                dest: '.tmp/public'
            }]
        },
        build: {
            files: [{
                expand: true,
                cwd: '.tmp/public',
                src: ['**/*'],
                dest: 'www'
            }]
        },
        coverage: {
            src: ['spec/api/**'],
            dest: 'coverage/'
        }
    });

    grunt.loadNpmTasks('grunt-contrib-copy');
};
</code></pre>

<p>Great! Now let's setup our test runner task. Create a new file called <strong>test.js</strong> under <strong>tasks/config/</strong> and put the following:</p>

<pre><code class="language-javascript">// tasks/config/test.js
module.exports = function(grunt) {

  grunt.config.set('mochaTest', {
    test: {
      options: {
        reporter: 'spec'
      },
      src: ['spec/helpers/**/*.js', 'coverage/spec/api/**/*.js']
    },
    coverage: {
      options: {
        reporter: 'html-cov',
        quiet: true,
        captureFile: 'coverage.html'
      },
      src: ['coverage/spec/api/**/*.js']
    }
  });

  grunt.loadNpmTasks('grunt-mocha-test');

};
</code></pre>

<p>Excellent! We are almost done setting up our test runner. Now we need to register a test task with grunt so that we can run the tests. Under the <strong>tasks/register/</strong> folder, create a new file called <strong>test.js</strong> and add the following:</p>

<pre><code class="language-javascript">// tasks/register/test.js
module.exports = function(grunt) {

  grunt.registerTask('test', 'Run tests and code coverage report.', ['clean:coverage', 'blanket', 'copy:coverage', 'mochaTest', 'clean:coverage']);

};
</code></pre>

<p>After all this, we need to verify that our task is set up correctly. To do so, simply type <code>grunt --help</code> and look for our task at the bottom. If it shows up, we are almost set. If not, check the files we've just added and modified for anything missing (perhaps a pesky semicolon).</p>

<p>Now it is time to set up our test helpers, which will initiate Sails and provide us with the assertion libraries. But in order to do so, we need to set up the directory structure for our tests.</p>

<blockquote>
  <p><em>On a sidenote, when I created Wolfpack, I did it with the intention not to ever have to lift Sails for testing. However, since I'm doing functional testing against endpoints, it is for the best to run the application, that means lifting Sails, so that we can do a proper testing of the API.</em></p>
</blockquote>

<p>The root of your Sails application should have several folders like <strong>api</strong>, <strong>assets</strong>, <strong>config</strong>, etc. In the root of our project, we are going to create a new folder called <strong>spec</strong>. You can name this folder differently if you prefer, however, if you do so, don't forget to update the paths in the grunt task files we've created and modified before.</p>

<p>Please note that this is simply how I set up my directory for the tests. You don't need to follow the same structure I do, but for the purposes of this article, please do so.</p>

<p>Within the <strong>spec/</strong> folder, I'm going to create three additional folders: <strong>api</strong>, <strong>fixtures</strong>, and <strong>helpers</strong>. <strong>spec/api/</strong> is where I put all my tests, <strong>spec/fixtures/</strong> is where I put my fixtures for wolfpack to use, and <strong>spec/helpers/</strong> is where I put any test helpers I need, including sails test helpers as we will soon see.</p>
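<p>From the root of the project, that layout can be created in one go (using the folder names from this series, including the <strong>functional</strong> subfolder the specs in Part 2 and 3 live in):</p>

```shell
# Create the spec directory structure used throughout this series
mkdir -p spec/api/functional spec/fixtures spec/helpers
```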

<p>Let's start by setting up our Sails helper. In <strong>spec/helpers/</strong> create a new file called <strong>sails.js</strong> and paste the following:</p>

<pre><code class="language-javascript">// spec/helpers/sails.js
var Sails = require('sails'),  
    _ = require('lodash'),
    wolfpack = require('wolfpack'),
    fs = require('fs'),
    sails;

global.wolfpack = wolfpack;

before(function(done) {

  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(30000);

  Sails.lift({
    // configuration for testing purposes
    log: {level: 'silent'}
  }, function(err, server) {
    sails = server;
    if (err) return done(err);

    // Look up models for wolfpack injection
    var files = _.filter(fs.readdirSync(process.cwd() + '/api/models/'), function(file){
      return /\.js$/.test(file);
    });

    // Inject wolfpack into files
    _.each(files, function(file){
      file = file.replace(/\.js$/, '');
      var spied = wolfpack(process.cwd() + '/api/models/' + file);
      global[file] = spied;
      sails.models[file.toLowerCase()] = spied;
    });

    // Set hook path so its easier to call in tests
    global.server = sails.hooks.http.app;

    done(err, sails);
  });
});

after(function(done) {  
  // here you can clear fixtures, etc.
  Sails.lower(done);
});
</code></pre>

<p>A couple of things to note here. First, as you may notice, one of the first things I do is set Mocha's timeout to 30 seconds (30000 milliseconds). This is because Sails sometimes <strong>takes a long time to lift</strong>, more than the 5-second default timeout Mocha sets for asynchronous operations. With the 30-second limit, I give Sails ample time to finish loading before the tests start running.</p>

<p>No worries, the 30-second limit applies only to the lifting. The rest of the tests still run with the 5-second timeout.</p>

<p>Once Sails loads, I start injecting wolfpack into all models loaded by the application. This will allow me to set the fixtures for the tests without having to start wolfpack individually in each model.</p>
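<p>The injection loop above boils down to a simple filename transformation. In isolation (plain JavaScript, no Sails involved), the mapping from a model file to its two names looks like this:</p>

```javascript
// How a model filename becomes the two names used by the helper:
// a global (e.g. Posts) and a sails.models key (e.g. 'posts').
var files = ['Posts.js', 'Counters.js', 'README.md'];

var models = files
  .filter(function (file) { return /\.js$/.test(file); })
  .map(function (file) {
    var name = file.replace(/\.js$/, '');
    return { globalName: name, modelsKey: name.toLowerCase() };
  });

console.log(models[0]); // { globalName: 'Posts', modelsKey: 'posts' }
```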

<p>I'm also exposing two global variables: <code>wolfpack</code>, which contains the Wolfpack instance, and <code>server</code>, which is basically a short name for the Sails app hook I'm going to use in our Supertest tests.</p>

<p>Finally, the after hook makes sure we correctly lower Sails once the tests are run, so that we don't leave it hanging occupying the port used for testing.</p>

<p>Next, create a file under <strong>spec/helpers/</strong> called <strong>libraries.js</strong> and paste the following:</p>

<pre><code class="language-javascript">// spec/helpers/libraries.js
var chai = require('chai'),  
    fs = require('fs');

chai.config.includeStack = true;

global.expect = chai.expect;  
global.AssertionError = chai.AssertionError;  
global.Assertion = chai.Assertion;  
global.assert = chai.assert;

global.sinon = require('sinon');

global.request = require('supertest');

global._ = require('lodash');

// Load fixtures
global.fixtures = {};

_.each(fs.readdirSync(process.cwd() + '/spec/fixtures/'), function(file){  
  global.fixtures[file.replace(/\.js$/, '').toLowerCase()] = require(process.cwd() + '/spec/fixtures/' + file);
});
</code></pre>

<p>What the <strong>libraries.js</strong> file does is load Chai's assertion methods and put them in the global scope so they can easily be accessed in the tests. However, since in this series I'm doing functional testing and not unit testing, I will be using Supertest's assertions rather than Chai's.</p>

<p>As you can also see, I'm loading <a href="http://sinonjs.org/">SinonJS</a> (a spy/stub library) and putting it in the global scope for easy access, as well as Supertest, which I put in the <code>request</code> global variable. I also load lodash into the global scope.</p>

<p>Finally, I also load all files within the fixtures folder into the global <code>fixtures</code> object. This way all fixtures are preloaded into the tests, and to access one I simply use the name of the fixture file, lowercased and without the trailing .js extension.</p>

<p>We are now done setting up our testing environment.</p>

<h2 id="whatsnext">What's next?</h2>

<p>Well, testing of course! In the next part of this series we are going to start testing our API as any client out there will consume it. Let's move on to the next part then.</p>

<p><a href="https://fernandodevega.com/en/2015/12/19/functional-testing-a-sailsjs-api-with-wolfpack-part-2/">Functional Testing a SailsJS API with Wolfpack: Part 2</a></p>]]></content:encoded></item></channel></rss>