Context
Let's assume you have a server that needs to be functionally tested. Now imagine your usual validation routine on your local machine: it probably involves booting the server up, navigating through the terminal, figuring out which tests you need to execute, and all that hassle.
Most of us only run unit tests on our machines. Then we open a PR, it gets approved, and we merge our code. After this whole process, the code is deployed on some Linux machines and we eventually get feedback from the functional tests. And... in the end we find out that a few functional tests are failing. So how much time did we waste? Probably a lot, because now we need to open another PR to fix the bugs or the tests.
What if you could execute functional tests on your machine inside Docker containers that mimic your environments, or remotely on some Linux node (as Jenkins, CircleCI, or Buddy provide)? In this article I will give an example of how to do it; orchestrating a few Docker containers will solve some of your problems.
Approach
Before starting, I will leave a hyperlink to the GitHub repository. Everything described in this article is contained in the linked repository, and the gists you will see are just code snippets; so if you want to go in depth with this approach, please clone the repository, browse the code, and try it on your machine. It will help a lot.
Now let's dive into the example to understand the whole flow. For this example we will need the following dependencies (a sample install command follows the list):
Express will be our basic server, together with the body-parser extension.
Jest will be our testing framework, we'll use it to execute tests.
Axios will be our HTTP Client, we'll use it to send requests.
yargs will be used to pass arguments from the CLI so we can run tests on different environments (stages).
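As a rough sketch, these could be pulled into a fresh project like this (the exact versions and the split between regular and dev dependencies may differ in the repository):

```sh
$ npm install express body-parser
$ npm install --save-dev jest axios yargs
```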
Now to our server:
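The original gist isn't embedded here, but a minimal sketch of such a server (the file name index.js and the exact response shape are assumptions) could look like this:

```javascript
// index.js - a minimal sketch of the server; not the exact repository code
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json()); // parse JSON request bodies

// The only route: adds the two numbers received in the request body
app.post('/add', (req, res) => {
  const { a, b } = req.body;
  res.json({ result: a + b });
});

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Server listening on port ${port}`));
```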
As you can see, our server is basic: it has only one route, with the path /add, and the only thing it knows is how to add two numbers. Still, adding two numbers can be considered a functionality, so later we will write a test for it.
Moving on to the testing part:
If we look closely at our package.json we will see some scripts.
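The package.json itself isn't reproduced here, but a sketch of its scripts (plus the Jest wiring for the global setup) could look like the following; only the script names come from the article, their bodies and the jest block are assumptions:

```json
{
  "scripts": {
    "start": "node index.js",
    "test:local": "jest --stage=local",
    "test:docker": "jest --stage=docker",
    "test:remote": "docker-compose up --build --abort-on-container-exit"
  },
  "jest": {
    "globalSetup": "./global-setup.js",
    "testEnvironment": "node"
  }
}
```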
By running the following commands:
$ npm run start
$ npm run test:local
The server will be booted on our machine, then the tests will be triggered and run against it. But that's running against our machine, and we're not shipping our machine; we want our environment to be as close as possible to production, right?
So that works, but it's still not the greatest solution. Back to the topic:
Our entry point into the Jest tests is the global setup. In this example we read the stage/environment from the CLI, set it as an environment variable (environment variables are language and OS agnostic), and finally load the environment configs dynamically based on that variable.
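The gist for the global setup isn't embedded here; a minimal sketch, assuming a global-setup.js referenced by Jest's globalSetup option and a --stage flag parsed with yargs, could be:

```javascript
// global-setup.js - a sketch; the file name and flag name are assumptions
const yargs = require('yargs');

module.exports = async () => {
  // Read the stage/environment passed from the CLI, e.g. `jest --stage=docker`
  const { stage } = yargs(process.argv.slice(2)).default('stage', 'local').argv;

  // Environment variables are language/OS agnostic, so the tests pick the stage up from here
  process.env.STAGE = stage;

  console.log(`Running the tests against the "${stage}" stage`);
};
```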
In the next gist I define the URL based on the environment, the path to our add operation, and the addRequest, which triggers a POST request to our server.
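That gist isn't shown here either; a sketch, assuming the stage-to-URL mapping and file layout below, could be:

```javascript
// requests/add.js - a sketch; file names and the URL mapping are assumptions
const { postRequest } = require('./request');

// Base URL chosen from the stage set in global-setup
const urls = {
  local: 'http://localhost:3000',
  docker: 'http://application:3000', // the docker-compose service name acts as the hostname
};
const url = urls[process.env.STAGE || 'local'];

const addPath = '/add';

// Triggers a POST request to the server's /add route with the two operands
const addRequest = (a, b) => postRequest(`${url}${addPath}`, { a, b });

module.exports = { addRequest };
```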
postRequest is defined in the request.js file.
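A minimal sketch of that helper, assuming it is just a thin wrapper around Axios sitting next to the file above:

```javascript
// requests/request.js - a sketch of the generic HTTP helper; the real file may differ
const axios = require('axios');

// Sends a POST request with a JSON body and returns the axios response
const postRequest = (url, body) => axios.post(url, body);

module.exports = { postRequest };
```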
And finally, the Jest test:
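A sketch of what such a test could look like (the exact assertions and response shape are assumptions):

```javascript
// add.test.js - a sketch of the functional test
const { addRequest } = require('./requests/add');

describe('POST /add', () => {
  it('adds two numbers', async () => {
    const response = await addRequest(2, 3);

    expect(response.status).toBe(200);
    expect(response.data.result).toBe(5);
  });
});
```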
Summary:
We have the test framework ready: Jest triggers async requests via Axios to the server, the server processes the requests and responds accordingly, and lastly we validate the responses with assertions.
And now, how can we tie everything together and orchestrate the Docker containers?
First, we need a Dockerfile for our server, which will be used to build the image for the server container:
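The gist isn't shown here; a sketch of what application.Dockerfile could look like (the repository's actual file may differ, e.g. in the base image or in using ENTRYPOINT instead of CMD):

```dockerfile
# application.Dockerfile - a sketch; the repository's version may differ
FROM node:alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the source code
COPY . .

EXPOSE 3000

CMD ["npm", "run", "start"]
```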
and a Dockerfile for our tests:
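Again a sketch, assuming the test executor is built from the same project directory:

```dockerfile
# test-executor.Dockerfile - a sketch; the repository's version may differ
FROM node:alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

# Run the tests against the stage that points at the application container
CMD ["npm", "run", "test:docker"]
```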
and a docker-compose file to define our containers:
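A sketch of the compose file; the service names and details are assumptions that match the snippets above:

```yaml
# docker-compose.yml - a sketch; service names and details are assumptions
version: "3"
services:
  application:
    build:
      context: .
      dockerfile: application.Dockerfile
    ports:
      - "3000:3000"

  test-executor:
    build:
      context: .
      dockerfile: test-executor.Dockerfile
    depends_on:
      - application
```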
So we are creating two different containers. The server container is based on application.Dockerfile and exposes port 3000, as that is the port our server uses; after the image is built, the command npm run start is executed as the entrypoint, which boots the server.
The test-executor container is based on test-executor.Dockerfile, and the command npm run test:docker is executed as its entrypoint, which triggers the tests.
Finally, npm run test:remote is executed (it's really just a script that runs a docker compose up command behind the scenes); it starts both containers, and the tests run automatically once the server (the application container) has booted up.
After the tests are executed, the test-executor container returns exit code 0; at that point it no longer has a foreground process running, so both containers are stopped.
In the screenshot above we can see the application booting up, then the test-executor triggering the Jest command, which executes the tests against the server started in the other container and finally prints the results in the console. All done with a single command from your terminal.
Final thoughts:
The same idea can be applied if you want to execute frontend tests with Selenium, Cypress, WebdriverIO, or some other library; besides the test-executor container, you will need to orchestrate a Selenium Grid and some browser nodes that can communicate with your application over the Docker network.
If you want to take the idea even further, set everything up in a CI/CD tool like Jenkins, CircleCI, or Buddy; you can then trigger jobs from webhooks or from a simple button on a dashboard. We use this in our daily workflow and it has increased our productivity a lot, preventing plenty of bugs and unwanted failed builds. It helps a ton, which is why I decided to share it with the community.
Obviously, you don't need to use the same technologies I did. For example, you could have a Java or Python server tested with JUnit or Robot Framework and any HTTP library, or whatever other technologies you work with; the idea stays pretty much the same and can easily be applied regardless of your tools or tech stack.
In the next article I will show you how to do the same thing as described here, but with a frontend application instead of a server, against which we will execute some user interface tests with a popular library.
Tips:
Start your application inside containers that are as close as possible to your production environment. Use config files generated by your configuration management tools; ideally, generating them should be a prerequisite step of building your application container.
Use suites to trigger bigger batches of tests.
Use shell scripts as entrypoints when you need to trigger multiple commands in a specific order.
Always use environment variables for your containers; they are easy to manage and OS/language agnostic.
Always try to use light base images such as alpine or slim.
Availability:
The github repository can be found at:
https://github.com/iugabogdan/docker-orchestration-node-express-jest
I look forward to your feedback and questions via the "Issues" tab on the GitHub repo above, or just leave a comment on this story. Every star on the GitHub repo and every comment will make my day better.
Spread love and share the things that made your life better with the community!
Sources:
This post is inspired by a period of my life when I worked with some really amazing people from Porto, to whom I am thankful, as I learned a lot from them.
The png image: https://www.pngegg.com/en/png-nqprj