I recently wrote about my [node test strategy](https://remysharp.com/2015/12/14/my-node-test-strategy) and in particular talked about using [tape](https://npmjs.org/tape) as my test runner and piping to [tap-spec](https://www.npmjs.com/package/tap-spec).
I’ve since started to migrate across to [tap](https://www.npmjs.com/package/tap) exclusively for both the runner and the reporter.
There are a couple of benefits to using tap in my workflow. Partly the reduced dependencies (so fewer changes to track and less knowledge required), but mostly that the reporting is much more valuable to me, as I’ll show you below.
## Tracing errors
Below is the output, with an error inside my tested code whilst using tape as my runner and executing through tap:
*(screenshot: tape run through tap, a failing test with no useful stacktrace)*
As you can see from the output above, there’s an exception and it’s failing the test, but there’s very little information about where it failed. I can see it was inside a promise, but since a lot of my code is based around promises, including the test itself, there’s essentially no stacktrace at all.
To switch my code from using the tape library to tap, I only need to change the require line from:
```js
var test = require('tape');
```
…to…
```js
var test = require('tap').test;
```
The rest of my test code can remain the same, as the API for tape is generally a subset of tap’s own assert API. The output now changes to give a better idea of the stacktrace:
*(screenshot: the same failure under tap, with a fuller stacktrace)*
But this still doesn’t quite give me enough detail. I can see exactly where the error is being thrown, but that’s in my test code. I need to hook the stack into my test. The following code shows how the error was being handled (from the screenshot above):
```js
auth().then(function (res) {
  t.notEqual(res, -1, 'auth worked');
}).catch(function (e) {
  t.fail(e);
});
```
Instead of using `t.fail` and passing the error in directly, I switch to `t.threw`, which works nicely with my promise code and will give me a full stacktrace. A side benefit is that I can also avoid using a `plan`, so long as my promise has a final `.then(t.end)` to notify that the tests are complete:
```js
auth().then(function (res) {
  t.notEqual(res, -1, 'auth worked');
}).catch(t.threw).then(t.end);
```
Now the failing test has much more detail:
*(screenshot: tap output with the full stacktrace from the origin of the error)*
I have the full stacktrace from the origin of the error and I can fix the issue.
## Visual deltas
Finally, it’s also worth showing the other big benefit to me, specifically around failing `deepEqual` tests. In some of my integration tests, I’ll create a fixture that’s a JSON object of the expected result.
If there’s an error and the result is different from my fixture, the tape output doesn’t always help me understand where the issue is. As you can see below, the object has been dumped out without any real clue as to where the problem lies:
*(screenshot: tape dumping the whole object on a failed deepEqual)*
However, using tap as my required test library automatically gives me a visual cue on the delta between my fixture and my actual output:
*(screenshot: tap’s diff output highlighting the mismatched timestamp property)*
Immediately, I can see the timestamp is the problem and it’s simple for me to then go and either fix the code or change the test.
## Coverage reporting for free
Finally, tap comes with coverage reporting built in. For local testing, I include the following in my `package.json` so I can browse the interactive coverage report:

```json
"scripts": {
  "cover": "tap test/*.test.js --cov --coverage-report=lcov"
}
```
This generates the [istanbul](https://gotwarlost.github.io/istanbul/) coverage report, which I open up immediately to see either where I’m missing coverage or where I can identify dead code.
In addition, since I run my tests on Travis, coverage can be posted automatically to tools like [coveralls.io](https://coveralls.io/), and I’m able to share it either with the public or with my team internally. Note there are a few environment values you need to set to get coverage working (specific to Coveralls):
- Add your Coveralls token under `COVERALLS_REPO_TOKEN`
- If you’ve got a private repo, use `COVERALLS_SERVICE_NAME=travis-pro` (you don’t need this for public repos)
- If you’re using more than one test in your test matrix (i.e. testing node 0.10, 4 and 5), include `COVERALLS_PARALLEL=true`
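Putting those pieces together, a Travis configuration along these lines would report coverage to Coveralls. This is a hypothetical sketch, not the config from the post: it assumes the `coveralls` npm package is installed (its CLI reads lcov data from stdin), and the repo token itself is best set in Travis’s repository settings rather than committed.

```yaml
# .travis.yml sketch (hypothetical): run the tests, then pipe lcov
# output to the coveralls CLI from the `coveralls` npm package.
language: node_js
node_js:
  - "0.10"
  - "4"
  - "5"
script: npm test
after_success:
  - tap test/*.test.js --cov --coverage-report=text-lcov | coveralls
env:
  global:
    # more than one node version in the matrix, so:
    - COVERALLS_PARALLEL=true
```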
One important caveat: when you include coverage in your tests, the stacktraces will often show the right filename and function name, but not the right line number. This is because the coverage tooling has instrumented the code (though I’d [expect sourcemaps](https://github.com/tapjs/node-tap/issues/231) could solve this issue). If I have a failing stacktrace, I run the tap CLI directly against the failing test file, without coverage.
And that’s it. These are the reasons I’m now using tap over tape, and so far it’s all proving really valuable. In my next post on testing, I’ll explain how I debug and fix failing tests in (as close to) real-time as possible.
Published 8-Feb 2016 under #code. [Edit this post](https://github.com/remy/remysharp.com/blob/main/public/blog/testing-tape-vs-tap.md)
Comments

vik
0 points
6 years ago
Hi Remy, are you still using this setup? I am not getting the fancy diff output like you have outlined in Visual Deltas; I only get similar output if I pipe to tap-mocha-reporter classic. Or could you show me how you call tap?
Thanks, v

Jeff Lu
0 points
7 years ago
Hi Remy, here is my sample repo: [https://github.com/playgrou…](https://github.com/playground/sample)

I follow your example, but am getting the following errors:

```js
test('First test!', function (assert) {
  assert.plan(1);
  request.get('http://localhost:3000/api/users', function (err, res) {
    let json = JSON.parse(res.body);
    assert.equal(json[0], 'John', 'name should be John');
    assert.end();
  });
});

test('Second test!', function (assert) {
  assert.plan(1);
  request.get('http://localhost:3000/api/users', function (err, res) {
    let json = JSON.parse(res.body);
    assert.equal(json[0], 'John', 'name should be John');
    assert.end();
  });
});
```

```
Server running on port 3000
test/user.test.js
✓ name should be John
✓ name should be John

missing plan

2 passing (2s)
1 failing

test/user.test.js missing plan: missing plan

npm ERR! Test failed. See above for more details.
```

vitvad
0 points
8 years ago
Hi Remy, thanks for your post.
Recently I heard about intern ([https://theintern.github.io/](https://theintern.github.io/)), did you have a chance to look at it? Maybe you could write a comparison post for "tap" and "intern".