Gotchas when converting strings to arrays in JS

lionel-rowe - Aug 18 '21 - - Dev Community

This is a response to @antoomartini's awesome article, where she describes 4 ways to turn a string into an array:

However, not all of the 4 ways work in the same way. We can see this when we try to use a string such as '💩', rather than a Latin-alphabet string:

const str = '💩'

str.split('') // ["\ud83d", "\udca9"]

;[...str] // ["💩"]

Array.from(str) // ["💩"]

Object.assign([], str) // ["\ud83d", "\udca9"]

Why the difference?

To understand the difference, let's take a look at how each way works in turn.

String#split

String#split matches and splits on 16-bit units, as encoded in UTF-16, the internal string representation that JavaScript uses.

You can find what these units are by using string index notation, and you can count them using String#length:

'ab'[0] // "a"
'ab'[1] // "b"
'ab'.length // 2

'💩'[0] // "\ud83d"
'💩'[1] // "\udca9"
'💩'.length // 2

As you can see, something weird is going on here. That's because emojis, and various other characters, take up two 16-bit units (for a total of 32 bits) instead of just one.

Therefore, with String#split, they get split right down the middle, into those individual 16-bit units. The 16-bit units that make up our emoji aren't proper characters, so the JavaScript console displays them in Unicode escape notation (\uXXXX, where each X is a hexadecimal digit).
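To make this concrete, here's a small sketch (variable names mine) showing that the two halves produced by String#split are the surrogate pair for U+1F4A9, and that codePointAt can decode the pair back into the full codepoint:

```javascript
const poop = '💩'

// Split into the two UTF-16 code units (a surrogate pair)
const halves = poop.split('') // ['\ud83d', '\udca9']

// Neither half is a valid character on its own, but joined
// back together they reconstitute the original emoji:
console.log(halves.join('') === poop) // true

// The individual 16-bit units:
console.log(poop.charCodeAt(0).toString(16)) // 'd83d'
console.log(poop.charCodeAt(1).toString(16)) // 'dca9'

// codePointAt decodes the whole surrogate pair into one codepoint:
console.log(poop.codePointAt(0).toString(16)) // '1f4a9'
```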

Object.assign

How does Object.assign work?

The Object.assign() method copies all enumerable own properties from one or more source objects to a target object. It returns the modified target object. (Source: MDN)

In this case, source is '💩', and target is []. Object.assign therefore assigns '💩''s property 0 to the array's property 0 and '💩''s property 1 to the array's property 1. As a result, we get the same outcome as with String#split — the individual 16-bit units that are found at those indexes.

[...spread]

The spread operator (...) was introduced in ES6. With the introduction of ES6 features, JavaScript started getting smarter with its Unicode handling.

Instead of assigning properties, the spread operator iterates over its operand — in this case, our string. String iteration is done based on Unicode codepoints, rather than individual 16-bit units. Our friendly poop emoji is only a single Unicode codepoint, so we get the result we want.

Array.from

As with spread notation, Array.from was introduced in ES6. It iterates over the argument passed to it, so again, we get the expected result.
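One small bonus over spread notation: Array.from accepts an optional mapping function as its second argument, which is handy when you want the codepoint values themselves (example mine):

```javascript
const str = '💩!'

console.log(Array.from(str)) // ['💩', '!']

// Map each codepoint to its hex value while converting:
console.log(Array.from(str, (ch) => ch.codePointAt(0).toString(16)))
// ['1f4a9', '21']
```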

Caveats

Array.from and spread notation work great for Unicode codepoints, but they still won't cover every situation. Sometimes, what looks like a single glyph is actually multiple Unicode codepoints. For example:

const str1 = 'lǜ'
const str2 = str1.normalize('NFD')
// "lǜ" — looks exactly the same, but composed with combining diacritics

;[...str1] // ["l", "ǜ"]
;[...str2] // ["l", "u", "̈", "̀"]

Or, for another emoji-based example:

const emoji = '👩🏿‍💻'

;[...emoji] // ["👩", "🏿", "‍", "💻"]

Here, it's because the emoji is actually composed of 4 Unicode codepoints, representing woman, skin tone 6, zero-width joiner, and computer respectively.
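If you need to split on user-perceived characters (grapheme clusters) rather than codepoints, the newer Intl.Segmenter API — not one of the four approaches above, and only available in modern engines — handles this case (a sketch, assuming Intl.Segmenter support):

```javascript
const emoji = '👩🏿‍💻'

// granularity: 'grapheme' segments on user-perceived characters:
const segmenter = new Intl.Segmenter('en', { granularity: 'grapheme' })

console.log([...segmenter.segment(emoji)].map((s) => s.segment))
// ['👩🏿‍💻'] (the whole ZWJ sequence stays together)

console.log([...emoji].length) // 4 (codepoints)
```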

Further reading

For a much deeper dive, I highly recommend Matthias Bynens's excellent article JavaScript has a Unicode problem.


Thanks for reading! What are your favorite Unicode tips and tricks or JavaScript Unicode gotchas?
