React Native is a popular cross-platform JavaScript framework. Its components are rendered using native UI elements. In this article, we will focus on the security side of the framework.
Analyzing React Native
React Native takes an alternative approach to cross-platform development. Traditionally, Cordova-based frameworks use a WebView to render the whole application. In contrast, React Native applications run their JS code in a JavaScript VM based on JavaScriptCore. The application uses the native JavaScriptCore on iOS, while on Android the JavaScriptCore libraries are bundled inside the APK.
In React Native, the communication between native and JavaScript code is handled by a JavaScript bridge. The source JS files are compiled into one single bundle file, known as the entry-file. In development mode, the file is bundled on a local server and fetched by the application. For production, the application logic is bundled into a single file, typically index.android.bundle or index.ios.bundle. As with Cordova, the bundle file is present in the assets folder and, as also happens with Cordova, we can think of React Native apps as containers that run JS code. This logic is implemented in Expo which, under certain limitations, can run different business logic in a single application. For the purposes of this article, it's fair to treat the entry-file as the core application logic.
We will be dividing the article into the following sections:
- Securing app to server connection
- Securing local data
- Advanced integrity checks
- Protecting the application logic
Securing App to Server Connection
Usually, smartphone apps communicate with the backend server via APIs. Insecure Communication is highlighted in the OWASP Mobile Top 10 at #3:
Mobile applications frequently do not protect network traffic. They may use SSL/TLS during authentication but not elsewhere. This inconsistency leads to the risk of exposing data and session IDs to interception. The use of transport security does not mean the app has implemented it correctly. To detect basic flaws, observe the phone's network traffic. More subtle flaws require inspecting the design of the application and the application's configuration. - M3-Insecure Communication.
Starting from iOS 9 (with App Transport Security) and Android 9 Pie, cleartext traffic is blocked by default and SSL/TLS is required. Cleartext traffic can still be enabled, but it's not recommended. To secure the connection further, we can pin our server certificates.
SSL Pinning in React Native
Apps are dependent on Certificate Authorities (CA) and Domain Name Servers (DNS) to validate domains for TLS. Unsafe certificates can be installed on a user device, thereby opening the device to a Man-in-the-Middle attack. SSL pinning can be used to mitigate this risk.
We use the fetch API or libraries like axios or frisbee to consume APIs in our React Native applications. However, these libraries don't have support for SSL pinning. Let's explore the available plugins.
- react-native-ssl-pinning: this plugin uses OkHttp3 on Android and AFNetworking on iOS to provide SSL pinning and cookie handling. In this case, we use the fetch function exposed by the library to consume APIs. The certificates have to be bundled inside the app, so error handling for certificate expiry needs to be implemented and the app must be updated with new certificates before the old ones expire. This library uses promises and supports multipart form data (see the usage sketch after this list).
- react-native-pinch: this plugin is similar to react-native-ssl-pinning. We also have to bundle the certificates inside the app. This library supports both promises and callbacks.
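As a reference, here's a minimal sketch of what a pinned request with react-native-ssl-pinning could look like. The endpoint and certificate name are placeholders, and the exact options (such as sslPinning and certs) should be confirmed against the plugin's documentation for the version you use:

import { fetch } from 'react-native-ssl-pinning';

// The certificate (e.g. api_example_com.cer) has to be bundled with the app.
fetch('https://api.example.com/v1/profile', {
  method: 'GET',
  timeoutInterval: 10000,
  sslPinning: {
    certs: ['api_example_com'] // bundled certificate name, without the extension
  },
  headers: {
    Accept: 'application/json; charset=utf-8'
  }
})
  .then(response => console.log(response.bodyString))
  .catch(error => console.log('Pinning or network error', error));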
To use HPKP (HTTP Public Key Pinning), we can consider these plugins:
- react-native-cert-pinner: this plugin allows us to use public key hashes to pin the server. Unlike the plugins above, we can use fetch and other utilities directly. The pinning occurs before any JS code runs, and there is no need to define hashes in the request itself.
- react-native-trustkit: this is a wrapper plugin for the iOS TrustKit library. It is available for iOS only.
Alternatively, we can use native implementations as outlined by Javier Muñoz. He has implemented pinning for Android and iOS natively.
Securing Local Storage
Quite often, we store data inside our application. There are multiple ways to store persistent data in React Native: async-storage, SQLite, PouchDB, and Realm are some of the options. Insecure storage is highlighted at #2 in the OWASP Mobile Top 10:
Insecure data storage vulnerabilities occur when development teams assume that users or malware will not have access to a mobile device's filesystem and subsequent sensitive information in data-stores on the device. Filesystems are easily accessible. Organizations should expect a malicious user or malware to inspect sensitive data stores. Usage of poor encryption libraries is to be avoided. Rooting or jailbreaking a mobile device circumvents any encryption protections. When data is not protected properly, specialized tools are all that is needed to view application data. - M2-Insecure Data Storage.
Let's take a look at some plugins which add a layer of security to our application. We will also explore some plugins which use native security features like Keychain & Keystore access.
SQLite
SQLite is the most common way to store data. A very popular and open-source extension for SQLite encryption is SQLCipher. Data in SQLCipher is encrypted with 256-bit AES and can't be read without the key. React Native has two libraries that provide SQLCipher:
- react-native-sqlcipher-2: this is a fork of react-native-sqlite-2. We can use PouchDB as an ORM provider with this library, which is an additional bonus.
- react-native-sqlcipher-storage: this is a fork of react-native-sqlite-storage. The library has to be set up manually, since it doesn't seem to support react-native link. Interestingly, the library is based on the Cordova implementation. A usage sketch follows this list.
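As a rough sketch, opening an encrypted database with react-native-sqlcipher-storage could look like the snippet below. The database name and key are placeholders; in a real app, the key should be loaded from secure storage rather than hard-coded, and the exact options should be checked against the library's documentation:

import SQLite from 'react-native-sqlcipher-storage';

// Open (or create) an encrypted database; SQLCipher derives the encryption key
// from the `key` option. In a real app, load the key from the Keychain/Keystore.
const db = SQLite.openDatabase(
  { name: 'secure.db', key: 'replace-with-a-strong-key', location: 'default' },
  () => console.log('Encrypted database opened'),
  error => console.log('Failed to open database', error)
);

db.transaction(tx => {
  tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT);');
});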
Realm
Realm is a nice alternative database for React Native apps. It's much faster than SQLite and it supports encryption by default. It uses the AES-256 algorithm, and the encrypted realm is verified using a SHA-2 HMAC hash. Details of the library can be found here.
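To illustrate, here is a minimal sketch based on Realm's documented encryption support: a 64-byte encryptionKey encrypts the realm file. The schema and key handling below are placeholders; in production, the key should be generated once from a secure random source and kept in the Keychain/Keystore:

import Realm from 'realm';

// Placeholder key: in production, generate 64 cryptographically random bytes
// once and keep them in secure storage, never hard-coded.
const encryptionKey = new Int8Array(64);

Realm.open({
  schema: [{ name: 'Note', properties: { body: 'string' } }],
  encryptionKey,
}).then(realm => {
  realm.write(() => {
    realm.create('Note', { body: 'stored encrypted at rest' });
  });
});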
Keychain and Keystore Access
Both iOS and Android have native techniques to store secure data. On iOS, Keychain Services allow developers to store small chunks of data in an encrypted database. On Android, most plugins use the Android Keystore system for API 23 (Marshmallow) and above. For lower API levels, Facebook's Conceal library provides the necessary crypto functions. Another alternative is to store encrypted data in shared preferences.
React Native has three libraries that provide secure storage along with biometric/face authentication:
- React Native Keychain: as the name implies, this plugin provides access to the keychain/keystore. It uses Keychain (iOS), Keystore (Android 23+), and Conceal. There is support for biometric authentication. This plugin has multiple methods and options for both Android and iOS; however, it only allows the storage of a username & password pair (see the sketch after this list).
- React Native Sensitive Info: this plugin is similar to React Native Keychain. It uses Keychain (iOS) and shared preferences (Android) to store data. We can store multiple key-value pairs using this plugin.
- RN Secure Storage: this plugin is similar to React Native Sensitive Info. It uses Keychain (iOS), Keystore (Android 23+), and Secure Preferences to store data. We can store multiple key-value pairs.
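As a minimal sketch, storing and retrieving a secret with React Native Keychain could look like this; the saveToken/loadToken helpers are illustrative, while setGenericPassword and getGenericPassword are the library's documented methods:

import * as Keychain from 'react-native-keychain';

// Store a secret behind the device Keychain/Keystore.
async function saveToken(username, token) {
  await Keychain.setGenericPassword(username, token);
}

// Read it back; resolves to null when nothing is stored.
async function loadToken() {
  const credentials = await Keychain.getGenericPassword();
  return credentials ? credentials.password : null;
}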
Advanced Integrity Checks
JailMonkey and SafetyNet
Rooted and jailbroken devices should be considered insecure by default. Root privileges allow users to circumvent OS security features, spoof data, analyze algorithms, and access secured storage. As a rule of thumb, the execution of the app on a rooted device should be avoided.
JailMonkey allows React Native applications to detect root or jailbreak. Apart from that, it can detect if mock locations can be set using developer tools.
SafetyNet is an Android-only API for detecting rooted devices and bootloader unlocks. We have covered SafetyNet extensively in a previous article. react-native-google-safetynet is a wrapper plugin for SafetyNet's attestation API. It can be used to verify the user's device.
Additionally, we can use react-native-device-info to check if an app is running in an emulator.
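Putting these together, a device integrity check could look like the sketch below. The isDeviceTrusted helper and its policy are illustrative; JailMonkey's isJailBroken and canMockLocation and react-native-device-info's isEmulator (a promise in recent versions) are the documented calls:

import JailMonkey from 'jail-monkey';
import DeviceInfo from 'react-native-device-info';

// Returns false when the device looks rooted, jailbroken, tampered with, or emulated.
async function isDeviceTrusted() {
  if (JailMonkey.isJailBroken()) return false;     // rooted or jailbroken
  if (JailMonkey.canMockLocation()) return false;  // mock locations enabled
  if (await DeviceInfo.isEmulator()) return false; // running in an emulator
  return true;
}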
Protecting the Application Logic
Earlier in the article, we mentioned how the application logic in the entry-file is available in plain sight. In other words, a third party can retrieve the code, reverse-engineer sensitive logic, or even tamper with the code to abuse the app (such as unlocking features or violating license agreements).
Protecting the application logic is a recommendation in the OWASP Mobile Top 10. Specifically, the main concerns include code tampering:
Mobile code runs within an environment that is not under the control of the organization producing the code. At the same time, there are plenty of different ways of altering the environment in which that code runs. These changes allow an adversary to tinker with the code and modify it at will. — M8-Code Tampering
And reverse engineering:
Generally, most applications are susceptible to reverse engineering due to the inherent nature of code. Most languages used to write apps today are rich in metadata that greatly aides a programmer in debugging the app. This same capability also greatly aides an attacker in understanding how the app works. — M9-Reverse Engineering
Let’s highlight two different strategies to address this risk.
Hermes
Facebook introduced Hermes with the react-native 0.60.1 release. Hermes is a new JavaScript engine optimized for mobile apps. Currently, it is only available for Android and its usage is optional. Hermes can be used in projects with react-native 0.60.4 by changing the enableHermes flag in build.gradle.
Its key benefits are improved start-up time, decreased memory usage, and a smaller app size. One of the strategies Hermes uses to achieve this is precompiling JavaScript to bytecode. At first glance, this appears to make the entry-file unreadable. But let's look at a real example. Let's assume that our entry-file is the one found below:
const {createDecipheriv, createCipheriv, randomBytes} = require('crypto');
const key = Buffer.from('60adba1cf391d89a3a71c72a615cbba8', 'hex');
const algorithm = 'aes-128-cbc';
const softwareVersion = '2.0';
module.exports.createKey = function(userId, expireDate) {
  const payload = {
    userId,
    expireDate,
    softwareVersion
  };
  const json = Buffer.from(JSON.stringify(payload), 'utf8');
  const iv = randomBytes(16);
  const cipher = createCipheriv(algorithm, key, iv);
  let encoded = cipher.update(json);
  encoded = Buffer.concat([encoded, cipher.final()]);
  const joined = iv.toString('hex') + ';' + encoded.toString('hex');
  return Buffer.from(joined, 'utf8').toString('base64');
}

module.exports.validateLicense = function(license, userId) {
  const licenseFields = Buffer.from(license, 'base64').toString('utf8');
  const fields = licenseFields.split(';');
  const iv = Buffer.from(fields[0], 'hex');
  const data = Buffer.from(fields[1], 'hex');
  const decipher = createDecipheriv(algorithm, key, iv);
  let decoded = decipher.update(data);
  decoded = Buffer.concat([decoded, decipher.final()]);
  const result = JSON.parse(decoded);
  if (result.userId != userId) {
    throw new Error('Wrong user');
  }
  if (new Date(result.expireDate) < new Date()) {
    throw new Error('Expired license');
  }
  if (result.softwareVersion != softwareVersion) {
    throw new Error('This license is not valid for this program version');
  }
  return result;
}
After Hermes compiles this file, the resulting bytecode can easily be decompiled using hbcdump and, among the decompiled output, we find some easy-to-read code:
s0[ASCII, 0..-1]:
s1[ASCII, 0..2]: 2.0
s2[ASCII, 3..34]: 60adba1cf391d89a3a71c72a615cbba8
s3[ASCII, 35..35]: ;
s4[ASCII, 36..50]: Expired license
s5[ASCII, 71..120]: This license is not valid for this program version
s6[ASCII, 121..130]: Wrong user
s7[ASCII, 133..143]: aes-128-cbc
s8[ASCII, 143..148]: crypto
s9[ASCII, 154..159]: global
s10[ASCII, 160..165]: base64
s11[ASCII, 166..168]: hex
s12[ASCII, 177..180]: utf8
i13[ASCII, 50..56] #C765D706: exports
i14[ASCII, 56..70] #FF849242: softwareVersion
i15[ASCII, 127..132] #6FE51CD4: userId
i16[ASCII, 147..154] #1E019520: toString
i17[ASCII, 167..176] #68A06D42: expireDate
i18[ASCII, 173..176] #CD347266: Date
i19[ASCII, 181..186] #5AA7C487: Buffer
i20[ASCII, 186..196] #FD81EB01: randomBytes
i21[ASCII, 196..200] #0EC469F8: split
i22[ASCII, 201..205] #9102A3D0: Error
i23[ASCII, 205..211] #EB75CA32: require
i24[ASCII, 212..215] #971CE5C7: JSON
i25[ASCII, 216..221] #CB8DFA65: concat
i26[ASCII, 222..235] #96C7181F: createCipheriv
i27[ASCII, 235..249] #D60B6B51: validateLicense
i28[ASCII, 250..265] #723D6A80: createDecipheriv
i29[ASCII, 266..274] #01D3AE7D: createKey
i30[ASCII, 275..279] #47993A63: final
i31[ASCII, 280..283] #EAF03666: from
i32[ASCII, 283..288] #2A322C6E: module
i33[ASCII, 289..293] #958EDB02: parse
i34[ASCII, 294..302] #807C5F3D: prototype
i35[ASCII, 303..311] #8D1543BD: stringify
i36[ASCII, 312..317] #60396F4B: update
Function<global>0(1 params, 15 registers, 4 symbols):
Offset in debug table: src 0x0, vars 0x0
license.js[1:1]
CreateEnvironment r0
GetGlobalObject r1
TryGetById r4, r1, 1, "require"
LoadConstUndefined r3
LoadConstString r2, "crypto"
Call2 r2, r4, r3, r2
GetByIdShort r3, r2, 2, "createDecipheriv"
StoreToEnvironment r0, 0, r3
GetByIdShort r3, r2, 3, "createCipheriv"
StoreToEnvironment r0, 1, r3
GetByIdShort r2, r2, 4, "randomBytes"
StoreToEnvironment r0, 2, r2
TryGetById r5, r1, 5, "Buffer"
GetByIdShort r4, r5, 6, "from"
LoadConstString r3, "60adba1cf391d89a3"...
LoadConstString r2, "hex"
Call3 r2, r4, r5, r3, r2
StoreToEnvironment r0, 3, r2
TryGetById r2, r1, 7, "module"
GetByIdShort r3, r2, 8, "exports"
CreateClosure r2, r0, 1
PutById r3, r2, 1, "createKey"
TryGetById r1, r1, 7, "module"
GetByIdShort r1, r1, 8, "exports"
CreateClosure r0, r0, 2
PutById r1, r0, 2, "validateLicense"
Ret r0
So, while Hermes introduces a certain degree of complexity to the entry-file code, it doesn't actually conceal the code, nor does it do anything to prevent code tampering. This means that it won't stop an attacker; let's not forget that this is not even the purpose of Hermes.
And this leads us to an approach that obfuscates React Native’s JavaScript source code to effectively mitigate the risk of code tampering and reverse engineering: Jscrambler.
Jscrambler
Jscrambler provides a series of layers to protect JavaScript. Unlike most tools that only include (basic) obfuscation, Jscrambler provides three security layers:
- Polymorphic JavaScript & HTML5 obfuscation;
- Code locks (domain, OS, browser, time frame);
- Self-defending (anti-tampering & anti-debugging);
By protecting the source code of React Native apps with Jscrambler, the resulting code is highly obfuscated, as can be observed below:
On top of this obfuscation, there's a Self-Defending layer that provides anti-debugging and anti-tampering capabilities and enables setting countermeasures like breaking the application, deleting cookies, or destroying the attacker’s environment.
To get started with protecting React Native source code with Jscrambler, check the official guide.
Final Thoughts
This article provides an overview of techniques to harden a React Native application.
Developer surveys show that React Native is still a framework of choice, even among development teams of large enterprises.
It’s then crucial to create a threat model and, depending on the application’s use case, employ the required measures to ensure that the application is properly secured.
Feel free to test how Jscrambler protects your React Native source code by using a free trial.