As modern JavaScript language features land broader support, I find myself reaching for more and more of the syntactic sugar.

I’m a big fan of [default arguments](https://remysharp.com/2017/10/25/es6-default-arguments) but I also really like the `...spread` syntax. With that, I’ve found myself using spread syntax to get a list of unique elements, but until recently I didn’t understand what was under the hood.


## Required reading

I’m always extremely wary when I come across "X is an anti-pattern" posts, mostly because the vast majority of the time X is condemned without any real scientific proof.

Rich Snapp’s article on the [reduce({...spread}) anti-pattern](https://www.richsnapp.com/blog/2019/06-09-reduce-spread-anti-pattern) is a welcome change, with hard evidence that shows how the spread operator works. The crux of the anti-pattern is that spread is hidden iteration.
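To make that hidden iteration visible, here’s a sketch of my own (not from Rich’s article) that counts the element copies the spread performs inside the reduce:

```javascript
// My own illustration: each `[...acc, curr]` rebuilds the whole
// accumulator, so the total copies grow quadratically with input size.
function uniqueWithSpreadCounting(a) {
  let copies = 0;
  const result = a.reduce((acc, curr) => {
    if (acc.includes(curr)) return acc;
    copies += acc.length + 1; // spread copies every element plus the new one
    return [...acc, curr];
  }, []);
  return { result, copies };
}

// 1000 unique items means 1 + 2 + ... + 1000 = 500500 element copies,
// even though the reduce itself only loops 1000 times.
const { copies } = uniqueWithSpreadCounting([...Array(1000).keys()]);
console.log(copies); // 500500
```

The reduce looks like a single pass, but the spread turns it into a pass within a pass.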

So after reading his article, I started scanning my code for this pattern, and as expected my code is littered with it.

Specifically: a method to get unique elements in an array.

## How to get unique elements

A thing I love about code is that there’s (usually) more than one way to solve a problem. That’s when your creativity comes into play.

Here are a few solutions (a far from comprehensive list) for getting the unique items in a list, starting with the anti-pattern:

```js
// reduce spread
a.reduce((acc, curr) => (!acc.includes(curr) ? [...acc, curr] : acc), []);

// from new set
Array.from(new Set(a));

// reduce concat
a.reduce((acc, curr) => (!acc.includes(curr) ? acc.concat(curr) : acc), []);

// reduce push
a.reduce((acc, curr) => ((!acc.includes(curr) && acc.push(curr) && acc) || acc), []);

// filter
a.filter((curr, i) => a.indexOf(curr) === i);
```
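If you want a rough feel for how these compare before looking at any jsPerf numbers, here’s a hand-rolled timing sketch of my own (absolute numbers depend entirely on the engine, the array size and how many duplicates the input contains; jsPerf does this far more rigorously):

```javascript
// Sample input: 1000 items drawn from 100 possible values, so plenty
// of duplicates. Adjust the sizes to see how each method scales.
const input = Array.from({ length: 1000 }, () => Math.floor(Math.random() * 100));

const methods = {
  'reduce spread': (a) => a.reduce((acc, curr) => (!acc.includes(curr) ? [...acc, curr] : acc), []),
  'from new set': (a) => Array.from(new Set(a)),
  'reduce concat': (a) => a.reduce((acc, curr) => (!acc.includes(curr) ? acc.concat(curr) : acc), []),
  'reduce push': (a) => a.reduce((acc, curr) => ((!acc.includes(curr) && acc.push(curr) && acc) || acc), []),
  filter: (a) => a.filter((curr, i) => a.indexOf(curr) === i),
};

for (const [name, fn] of Object.entries(methods)) {
  const start = performance.now();
  for (let i = 0; i < 500; i++) fn(input);
  console.log(name.padEnd(14), (performance.now() - start).toFixed(1) + 'ms');
}
```

All five return the unique items in first-occurrence order, so they’re interchangeable in output, only the work they do differs.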

The reduce spread pattern (in my case) has stemmed from wanting a single line of code (specifically without braces), so my alternative solutions are forced into the same pattern. This is fine (I think) until the "reduce push" pattern, but these are my constraints, so I’ll take the ugly in this case.

Using my highly refined "finger in the air" technique for evaluating performance, and knowing now what I do about reduce spread, this is how I’d expect the code to perform, from slowest to fastest:

  1. reduce spread - because of hidden iteration

  2. from new set - potentially more complex object instantiation (hand wavy)

  3. reduce concat - though I’d expect this to be marginally slower than reduce push only because it has an additional instantiation of a new array

  4. reduce push

  5. filter - appears to have less work and fewer function calls than the reduce approaches

[Actual results](https://jsperf.com/remy-unique/1) are surprising (to me, at least):

![unique perf results](/images/unique-perf.png)

The fastest is the reduce push method, and the slowest, which surprised me the most, was the reduce concat method.

I don’t know V8’s engine well enough to know exactly what’s going on under the hood, but given how clever I was trying to be with my code in the first place, it doesn’t take much to adjust my ways to use the reduce push.

Although after all that investigation, the filter method came in a very close second place, and reads a lot more simply, so that’s probably the right call. Just remember, if you’re chaining a series of array methods, the filter will need to use the third argument: `.filter((curr, i, self) => self.indexOf(curr) === i)`.
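Here’s that chaining caveat in action (my own illustrative data): mid-chain there’s no outer variable to compare against, so the filter leans on its third, whole-array argument:

```javascript
// Dedupe mid-chain: `self` is the array produced by the `.map` step,
// which has no name of its own to reference.
const tags = ['js', ' css ', 'js', 'html', 'css']
  .map((t) => t.trim())
  .filter((curr, i, self) => self.indexOf(curr) === i);

console.log(tags); // ['js', 'css', 'html']
```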


Comments


0 points

4 years ago

Mmm, I don’t know. reduce + includes + push may be the fastest, but it looks obscure at first. You’ll have to use a wrapper function to give it a proper semantic name, which adds a little bit of overhead. On the other hand, Array.from + Set is very expressive. Sure, you have to know that `Set`s don’t hold duplicates, but that’s half of the reason for them. And I don’t usually create a wrapper function for that. (Maybe my coworkers hate me for that... I don’t know!)

Also, what about larger arrays? 10 items don’t tell us much...

1 point

4 years ago

There are quite a few [replies on Twitter](https://mobile.twitter.com/rem/status/1139836593844498432) which do try with much larger sets, mixing in how unique the values are, etc.

In general, I’m seeing the filter method coming up best for speed and for legibility. Totally agree that reduce + push is the worst for reading (and did kinda caveat that too 😉)

1 point

4 years ago

Haha yeah, I read the thread on Twitter too, nice contributions there. Especially the remark from Surma about CPU caches (he had to deal with it when developing image tile-rotation on Squoosh). I would have replied there but I wanted to try this Commento thing, which also supports Markdown. Feels better than pretending Twitter supports Markdown too 😂

[Commento](https://commento.io)