Nx is a smart, extensible, toolable, and easy-to-use build framework. In this post, I'll show you how it works using 12 diagrams.
Plugins & Code Generation
Let's create a new Nx workspace.
> npx create-nx-workspace --preset=empty
This creates the following:
apps/
libs/
tools/
workspace.json
nx.json
tsconfig.base.json
package.json
Nx is the VSCode of build tools: a powerful core that you can extend with plugins.
Let's run nx list to see the list of available plugins:
> NX Also available:
@nrwl/cypress (builders,generators)
@nrwl/jest (builders,generators)
@nrwl/linter (builders)
@nrwl/node (builders,generators)
@nrwl/workspace (builders,generators)
@nrwl/express (executors,generators)
@nrwl/next (executors,generators)
@nrwl/react (executors,generators)
@nrwl/storybook (executors,generators)
@nrwl/web (executors,generators)
...
> NX Community plugins:
nx-electron - An Nx plugin for developing Electron applications
nx-stylelint - Nx plugin to use stylelint in a nx workspace
@nxtend/ionic-react - An Nx plugin for developing Ionic React applications and libraries
@nxtend/ionic-angular - An Nx plugin for developing Ionic Angular applications and libraries
@nxtend/capacitor - An Nx plugin for developing cross-platform applications using Capacitor
@nxtend/firebase - An Nx plugin for developing applications using Firebase
...
Let's add the Next.js plugin, which will also add the React, Jest, and Cypress plugins.
> yarn add @nrwl/next
Let's use the Next.js and React generators to create new projects (applications and libraries) in the workspace.
Nx uses a virtual file system to run its generators, so you can compose them, run them in dry-run mode (shown below), etc.
> nx g @nrwl/next:app app1
> nx g @nrwl/react:app app2
> nx g @nrwl/react:lib lib
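Because generators run against a virtual file system, you can preview what a generator would create without touching the disk by appending --dry-run. For example (lib2 here is just a hypothetical name):
> nx g @nrwl/react:lib lib2 --dry-run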
Everything is Metadata-Driven
Everything in Nx comes with metadata to enable toolability. For instance, you can run a generator from within VSCode, and default values, validation, and autocompletion will all work.
Although I won't show it in this post, it's important to note that this works for any plugin and for any other command as well. This metadata is used by Nx itself, by the VSCode and WebStorm integrations, by the GitHub integration, and by third-party tools implementing richer experiences with Nx.
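To give a flavor of that metadata, here is a minimal sketch of what a generator's schema.json can look like. The generator and its single name option are hypothetical; $default and x-prompt are the hooks that let tools supply defaults and prompts:
{
  "$schema": "http://json-schema.org/schema",
  "id": "my-generator",
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "description": "Name of the library to create",
      "$default": { "$source": "argv", "index": 0 },
      "x-prompt": "What name would you like to use?"
    }
  },
  "required": ["name"]
}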
Project Graph
This is a project graph. It reflects the source code in the repo and all the external dependencies that aren't authored in the repo (e.g., webpack, react).
Nodes in the project graph are defined in workspace.json. You can manually define dependencies between the nodes, but you rarely have to. Instead, Nx analyzes the source code (e.g., package.json, TypeScript files) and figures out the dependencies for you. We'll see this in action below.
Starting with Nx 12.1, this functionality is pluggable, so Nx can analyze other source files (e.g., Go, Kotlin).
We put a lot of work into making this process fast, but even so it can take a few seconds for a large repo. That's why Nx caches the project graph and only reanalyzes the files you have changed.
Why not simply use package.json, like Lerna?
Similar to Lerna, Nx analyzes package.json files, but doing that alone is insufficient for many projects. For instance, Nx allows you to have lightweight nodes with less config, it works across languages and platforms, and supports scenarios where dependencies are implicit (e.g., e2e tests depending on the app).
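For example, the implicit dependency of an e2e project on the app it tests can be declared manually. A sketch, assuming the nx.json format of this Nx version (app1-e2e is the Cypress project generated alongside app1):
{
  "projects": {
    "app1-e2e": {
      "tags": [],
      "implicitDependencies": ["app1"]
    }
  }
}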
Let's add this import to both apps:
import '@happyorg/mylib'
This changes the project graph to:
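You can inspect the resulting graph yourself with the interactive visualizer:
> nx dep-graph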
Task Graph
Any time you run anything, Nx will create a task graph from the project graph, and then will execute the tasks in that graph.
For instance, running
> nx test lib
will create a task graph with a single node:
Projects/Targets/Tasks/Executors
Projects are the source code in the repo. A target is something that you can do with a project (e.g., build/serve/test). Every project can have many targets.
{
"root": "apps/app1",
"sourceRoot": "apps/app1",
"projectType": "application",
"targets": {
"build": {
"executor": "@nrwl/next:build",
"outputs": ["{options.outputPath}"],
"options": {
"root": "apps/app1",
"outputPath": "dist/apps/app1"
}
},
"serve": {
"executor": "@nrwl/next:server",
"options": {
"buildTarget": "app1:build",
"dev": true
}
},
"export": {
"executor": "@nrwl/next:export",
"options": {
"buildTarget": "app1:build:production"
}
},
"test": {
"executor": "@nrwl/jest:jest",
"outputs": ["coverage/apps/app1"],
"options": {
"jestConfig": "apps/app1/jest.config.js",
"passWithNoTests": true
}
}
}
}
To make adding Nx to an existing repo easier, if you don’t define any targets in workspace.json, Nx will treat npm scripts you have defined as targets.
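For example, if a project contains only a package.json like the sketch below (the scripts are hypothetical), then nx test app1 would simply run its test script:
{
  "name": "app1",
  "scripts": {
    "test": "jest",
    "build": "next build"
  }
}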
An executor is a function (with some metadata) that tells Nx what to do when you run, say, nx test lib. The metadata piece is crucial: it is what tells Nx how to validate params and set defaults, what to cache, etc.
A task is an invocation of a target. If you invoke the same target twice, you create two tasks.
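To make this concrete, here is a minimal sketch of an executor implementation using @nrwl/devkit. The echo executor and its options are made up; the metadata that validates the options and sets defaults lives in a separate schema file:
import { ExecutorContext } from '@nrwl/devkit';

export interface EchoExecutorOptions {
  message: string;
}

// An executor is an async function that receives the parsed options and the
// workspace context, and reports whether the task succeeded.
export default async function echoExecutor(
  options: EchoExecutorOptions,
  context: ExecutorContext
): Promise<{ success: boolean }> {
  console.log(`Running echo for ${context.projectName}: ${options.message}`);
  return { success: true };
}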
Creating a Task Graph
Nx uses the project graph (information about how projects relate to each other), but the two graphs aren't directly connected (e.g., they are not isomorphic). In the case above, app1 and app2 depend on lib, but if you run nx run-many --target=test --projects=app1,app2,lib, the created task graph will look like this:
Even though the apps depend on lib, testing app1 doesn't depend on testing lib. This means that the two tasks can run in parallel.
Let's change this by adding a dependsOn entry to the test target:
{
"dependsOn": [
{
"target": "test",
"projects": "dependencies"
}
]
}
With this, running the same test command will create the following task graph:
This doesn't make much sense for tests, but it often makes sense for builds, where to build app1, you want to build lib first. You can also define similar relationships between targets of the same project (e.g., test depends on build).
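For builds, the same dependsOn syntax would be attached to the build target instead. A sketch based on the app1 configuration shown earlier:
"build": {
  "executor": "@nrwl/next:build",
  "outputs": ["{options.outputPath}"],
  "options": {
    "root": "apps/app1",
    "outputPath": "dist/apps/app1"
  },
  "dependsOn": [
    { "target": "build", "projects": "dependencies" }
  ]
}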
It's important to stress that a task graph can contain different targets (e.g., builds and tests), and those can run in parallel. For instance, as Nx is building app2, it can be testing app1 at the same time.
Affected
When you run nx test app1, you are telling Nx to run the app1:test task plus all the tasks it depends on.
When you run nx run-many --target=test --projects=app1,lib, you are telling Nx to do the same for the two tasks app1:test and lib:test.
When you run nx run-many --target=test --all, you are telling Nx to do this for all the projects.
As your workspace grows, retesting all projects becomes too slow. To address this, Nx implements code change analysis (i.e., it analyzes your PRs) to determine the minimum set of projects that need to be retested. How does it work?
When you run nx affected --target=test, Nx looks at the files you changed in your PR and at the nature of the changes (what exactly you updated in those files), and uses that to figure out the list of projects in the workspace that can be affected by the change. It then runs the run-many command with that list.
For instance, if my PR changes lib, and I then run nx affected --target=test, Nx will figure out that app1 and app2 depend on lib, so it will invoke nx run-many --target=test --projects=app1,app2,lib.
Nx is able to analyze the nature of the change. E.g., if you change the version of Next.js in package.json, Nx will know that app2 cannot be affected by it, so it will only retest app1.
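On CI you typically tell Nx which two revisions to compare; the --base and --head flags control the diff range:
> nx affected --target=test --base=origin/main --head=HEAD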
Running Tasks
Nx will run the tasks in the task graph in the right order. Before running the task, Nx will compute its computation hash. As long as the computation hash is the same, the output of running the task will be the same.
How does Nx do it?
By default, the computation hash for, say, nx test app1 will include:
- all the source files of app1 and lib
- relevant global configuration
- versions of external dependencies
- runtime values provisioned by the user (e.g., the version of Node)
- command flags
This behavior is customizable. For instance, lint checks may only depend on the source code of the project and global configs. Builds can depend on the dts files of the compiled libs instead of their source.
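A sketch of how some of this is configured on the default task runner in nx.json (the exact values are up to you): cacheableOperations lists the targets whose results are cached, and runtimeCacheInputs adds runtime values, such as the Node version, to the hash:
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "@nrwl/workspace/tasks-runners/default",
      "options": {
        "cacheableOperations": ["build", "lint", "test", "e2e"],
        "runtimeCacheInputs": ["node --version"]
      }
    }
  }
}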
Once Nx computes the hash for a task, it checks whether it has run this exact computation before. It checks locally first; if the computation is missing there and a remote cache is configured, it checks remotely.
If Nx finds the computation, it retrieves and replays it: it places the right files in the right folders and prints the terminal output, so from the user's point of view the command ran the same, just a lot faster.
If Nx doesn't find the computation, it runs the task, and after the task completes, it takes the file outputs and the terminal output and stores them locally (and, if configured, remotely). All of this happens transparently, so you don't have to worry about it.
Although conceptually this is fairly straightforward, we do a lot of clever things to make this experience good for the user. For instance:
- We use an interesting technique of capturing stdout and stderr to make sure the replayed output looks exactly the same, including on Windows.
- We minimize the IO by remembering what files are replayed where.
- We only show relevant output when processing a large task graph.
- We provide affordances for troubleshooting cache misses.
- And many other things like that.
All of these are crucial for making Nx usable for any non-trivial workspace. For instance, if you run nx build app1 --parallel, and app1 depends on, say, 1000 libs, Nx will create a task graph like this:
It will then process the task graph from the leaves, running everything it can in parallel. If ParentLib depends on ChildLib1 and ChildLib2, it will build the child libs first. Before running each task, Nx checks whether the needed files are already in the dist folder. Found them? Then don't do anything. No? Check the local cache and, if needed, the remote cache. Cache hit? Restore the files. Cache miss? Run the command, capture stdout, and cache it together with the file outputs for future use. The minimum amount of work that has to happen will happen; the rest will either be left as is or restored from the cache.
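The degree of parallelism is under your control; for example, the --maxParallel flag caps how many tasks run at once:
> nx run-many --target=build --all --parallel --maxParallel=3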
Distributed Task Execution
Nx Cloud is a cloud companion for the Nx build framework. Many features of Nx Cloud are free, but some are paid. One of them is the distributed computation cache, which allows you to share the cache with your teammates and CI agents. If you pull the main branch in the morning, everything will be cached, because CI just built it.
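Enabling the distributed cache amounts to swapping the default task runner for the @nrwl/nx-cloud runner in nx.json. A sketch (the access token is a placeholder):
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "@nrwl/nx-cloud",
      "options": {
        "cacheableOperations": ["build", "lint", "test", "e2e"],
        "accessToken": "<your-nx-cloud-token>"
      }
    }
  }
}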
But an even more exciting feature of Nx Cloud is config-free distributed task execution (DTE). With distributed task execution, Nx is able to run any task graph on many agents instead of locally.
When using this, nx affected --target=build won't run the build locally (which for a large workspace can take hours). Instead, it will send the task graph to Nx Cloud. Nx Cloud agents will then pick up the tasks they can run and execute them.
Note that this happens transparently. If an agent builds app1, it will fetch the outputs for lib if it doesn't already have them.
As agents complete tasks, the main job where you invoked nx affected --target=build starts receiving the created files and terminal outputs.
After nx affected --target=build completes, your machine will have the build files and all the terminal outputs as if it had run everything locally.
Summary
- Nx is a smart, extensible, toolable, and easy-to-use build framework.
- You can install plugins that will bring executors, generators, and dep graph processors.
- Nx uses a virtual file system to enable powerful code generation and code augmentation workflows with previews and VSCode and WebStorm support.
- You can very easily create apps, components, libs etc.
- Everything in Nx is metadata-driven and toolable.
- Nx is able to analyze your source code to create a Project Graph.
- Nx can use the project graph and information about projects' targets to create a Task Graph.
- Nx is able to perform code-change analysis to create the smallest task graph for your PR.
- Nx supports computation caching to never execute the same computation twice. This computation cache is pluggable and is distributed.
- Nx supports distributed task execution where a single command can run on multiple agents with zero-config.
Learn More
- Check out nx.dev to learn more about the Nx Build Framework.