The demand for efficient, scalable infrastructure management has never been greater in today's dynamic software development landscape. Docker Swarm, Appwrite, and Swarmpit are well suited to this landscape, combining flexibility with strong performance.
By leveraging these technologies together, organizations can achieve greater scalability, reliability, and productivity in their application deployment and management processes: Docker Swarm provides containerization and orchestration, Appwrite speeds up development with pre-built backend services, and Swarmpit simplifies cluster administration.
In this tutorial, you'll learn how to deploy your Appwrite containers using Docker Swarm and manage the containers using Swarmpit.
Prerequisites
To get the most out of this tutorial, ensure you have the following:
Set up the virtual machine servers
First, you’ll be setting up two virtual machines, Manager and Worker, which will act as the servers for our swarm. To proceed, do the following:
- Download and Install the UTM virtualizer from the official website
- Download Ubuntu Server 22.04.2 LTS for ARM from its official website
After that, follow the instructions in this documentation to install the Ubuntu server on the virtual machines.
During the Ubuntu server installation, you will have the opportunity to install Docker and OpenSSH. It is recommended that you select both of these options during the installation process.
Once you have completed the installation, you can verify the installation of Docker and OpenSSH. To check the Docker version, run the following command in the terminal:
docker -v
If Docker fails to install during the Ubuntu installation, you can refer to the documentation for installation instructions.
For OpenSSH, you can check the SSH service status by executing the command below:
systemctl status sshd.service
Once you've completed that, execute the following command to install Git:
sudo apt install git
Set up the network file system (NFS) mount
The NFS mount enables the sharing of a directory. As the service needs to be deployed across multiple nodes, it is crucial to share certain resources, such as volumes, across all nodes.
Accessing the nodes via SSH
To ensure smooth server access, start by retrieving the IP address for the servers using the following command:
ip addr | grep "inet 192"
You'll need to repeat the steps to retrieve the IP addresses for both the manager and worker nodes.
Once you have obtained both the manager and worker IP addresses, follow these instructions to SSH into the servers from the virtual machine host computer.
Open two terminal sessions on the virtual machine host computer; in the first terminal, execute the following command to connect to the manager server:
ssh manager@<manager_ip>
For the second terminal, run the command:
ssh worker@<worker_ip>
This connects to the worker server.
Replace both <manager_ip> and <worker_ip> with their corresponding IP addresses.
Setting up the NFS
Next, you can follow these steps to set up the NFS shared directory on the manager's SSH terminal.
First, install the NFS server package on the host by executing the command:
sudo apt install nfs-kernel-server
After installing the NFS server package, you need to create the shared directory on the host using the following command:
sudo mkdir -p ./nfs/app
To ensure security, NFS maps any root actions performed on the worker (client) to the nobody:nogroup credentials. To align the directory ownership with these credentials, execute the following command on the manager node terminal:
sudo chown nobody:nogroup ./nfs/app
Afterward, open the /etc/exports file on the host machine with root privileges:
sudo nano /etc/exports
Within the /etc/exports file, create a configuration line for each directory you want to share. Replace <worker_ip> with the actual IP address of the worker node. For example:
/home/manager/nfs/app <worker_ip>(rw,sync,no_subtree_check)
In this configuration line, the rw option grants read and write access to the client, sync ensures changes are written to disk before replying, and no_subtree_check disables subtree checking to prevent issues when files are renamed while open on the client.
Save and close the /etc/exports file.
Finally, restart the NFS server to make the shares available:
sudo systemctl restart nfs-kernel-server
If you have a firewall, you must adjust the settings to allow the worker node to access the files.
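For example, if your servers use UFW (Ubuntu's default firewall front end), a rule like the following on the manager node would allow NFS traffic from the worker only. This is a sketch assuming the standard NFS port and that UFW is the firewall you run:

```shell
# Allow NFS traffic from the worker node only; replace <worker_ip> with its address
sudo ufw allow from <worker_ip> to any port nfs
# Confirm the rule was added
sudo ufw status
```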
After that, you have to mount this directory on the worker node. First, install the nfs-common package on the worker node using the command:
sudo apt install nfs-common
Then create the mount point and mount the shared directory using the following commands:
sudo mkdir -p /nfs/app
sudo mount <manager_ip>:/home/manager/nfs/app /nfs/app
Replace <manager_ip> with the actual manager IP. Once that is done, run the following command to check that the NFS shared directory has been mounted:
df -h
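Note that a mount created with the mount command does not persist across reboots. If you want the share remounted automatically, one option is an /etc/fstab entry on the worker along these lines (same paths as above; <manager_ip> is the manager's address):

```
<manager_ip>:/home/manager/nfs/app  /nfs/app  nfs  defaults  0  0
```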
Setting up the swarm
Docker Swarm provides a native clustering and orchestration solution for Docker, enabling efficient distribution and scalability of services across multiple nodes.
To initiate Docker Swarm on the manager node, execute the following command:
sudo docker swarm init --advertise-addr <manager_ip>
By running this command specifically on the manager node, Docker Swarm is activated, and a join token is generated. This token allows for creating and managing a swarm of Docker worker nodes, empowering you to scale and distribute services seamlessly.
Copy the command and run it on the worker node to join the worker node to the swarm.
Once this is successful, you can check whether the node has been added to the swarm on the manager node by running the command:
sudo docker node ls
Creating and deploying the Appwrite services on the swarm
To do this, you’ll be using the docker stack command, which allows you to set up multiple Docker services from a Compose-format YAML file.
First, you must download the appwrite docker-compose.yml file from Appwrite’s documentation and modify its content to be Swarm-compatible.
After downloading the file, you can open it using any text editor you prefer.
Preparing the Appwrite swarm file
To proceed, you'll use the shared NFS directories as volumes, since Docker Swarm does not share named volumes between nodes. The NFS directory created earlier lets you share these directories across all nodes.
To make it happen, you must take the following steps:
On your manager node, run the following command.
cd ./nfs/app
This step is crucial because the compose file relies on specific files from this repository. Afterwards, you can create the necessary volume directories by executing the following command:
mkdir -p ./appwrite/{mariadb,redis,cache,uploads,certificates,functions,influxdb,config,builds,app,src,dev,docs,tests,public,appwrite}
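If the brace syntax in the command above is unfamiliar: shell brace expansion turns that single mkdir call into one invocation with every listed subdirectory as an argument. A throwaway sketch of the effect:

```shell
# Disposable demonstration of the brace expansion used above:
# one mkdir -p call creates every listed subdirectory at once.
tmp=$(mktemp -d)
mkdir -p "$tmp"/appwrite/{mariadb,redis,cache}
ls "$tmp"/appwrite    # cache  mariadb  redis
rm -r "$tmp"
```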
After creating the volume directories, proceed to update the volumes in the downloaded compose file by deleting the volume specifications in the file.
Next, replace the volume specification of each service with their corresponding shared directory volume as indicated in the table below:
Old | New |
---|---|
appwrite-config:/storage/config:ro | ./appwrite/config:/storage/config:ro |
appwrite-certificates:/storage/certificates:ro | ./appwrite/certificates:/storage/certificates:ro |
appwrite-uploads:/storage/uploads:rw | ./appwrite/uploads:/storage/uploads:rw |
appwrite-cache:/storage/cache:rw | ./appwrite/cache:/storage/cache:rw |
appwrite-certificates:/storage/certificates:rw | ./appwrite/certificates:/storage/certificates:rw |
appwrite-functions:/storage/functions:rw | ./appwrite/functions:/storage/functions:rw |
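As a sketch of the result, the volumes section of the main appwrite service should end up looking roughly like this after the substitution (only the volumes list is shown here; the image tag and other keys come from your downloaded file):

```yaml
appwrite:
  image: appwrite/appwrite
  volumes:
    - ./appwrite/uploads:/storage/uploads:rw
    - ./appwrite/cache:/storage/cache:rw
    - ./appwrite/config:/storage/config:rw
    - ./appwrite/certificates:/storage/certificates:rw
    - ./appwrite/functions:/storage/functions:rw
```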
Next, to optimize request handling and resource allocation in a Docker Swarm cluster, restrict the Traefik service to the manager node. This centralizes routing and load balancing, avoiding conflicts and simplifying management. Add the following configuration below the Traefik service's port specification to enforce this restriction:
deploy:
  replicas: 1
  placement:
    constraints:
      - "node.role==manager"
The deploy option specifies that only one replica be created and that the service be deployed on the manager node.
Afterward, it is necessary to include the environment variables. Typically, you need to specify the values for each service's environment variable individually. However, in this case, you can simplify the process by using an environment file containing all the required services' secrets. Simply add this file to each service requiring environment variables by adding the following specification.
env_file:
- .env.appwrite
Next, eliminate any unnecessary specifications in the file to ensure Swarm compatibility. Remove specifications such as x-logging and container_name, as they are incompatible with Swarm. Additionally, remove all occurrences of <<: *x-logging from the compose file.
The docker-compose file should now look like this
Creating the swarm file on the manager node
Once completed, you must copy the compose file to the manager node for deployment. To accomplish this, follow these steps.
First, on the manager node, run the command:
sudo nano /home/manager/nfs/app/appwrite.swarm.yml
Then copy the already modified file content from your text editor and paste it into the manager node’s nano editor interface.
Save the file.
Next, run the following command.
sudo nano /home/manager/nfs/app/.env.appwrite
After doing so, add this content.
Once you have completed all the previous steps, execute the following command on the manager's node to deploy the services:
sudo docker stack deploy -c /home/manager/nfs/app/appwrite.swarm.yml appwrite
This command initiates deploying the services.
The deployment takes a while, as the docker stack command pulls the images from the Docker Hub before running the containers.
To list all services, execute:
sudo docker service ls
For more detailed information about a specific service, use:
sudo docker service ps --no-trunc <SERVICE_NAME>
These commands provide valuable insights into the status and details of the services running within the Docker Swarm cluster.
Once the services are up and running, open the web browser on the virtual machine’s host computer and visit http://<manager_ip> to preview the Appwrite app.
Create and deploy the Swarmpit Services on the Swarm
Once Appwrite has been successfully deployed, you can proceed to set up Swarmpit on the swarm.
To deploy Swarmpit, run the following command on the manager node:
sudo git clone https://github.com/swarmpit/swarmpit -b master && \
sudo docker stack deploy -c swarmpit/docker-compose.arm.yml swarmpit
Once the Swarmpit services have been deployed, open the browser on the VM’s host computer and visit http://<manager_ip>:888 (Swarmpit listens on port 888 by default).
Managing the swarm with swarmpit
To proceed, create your first admin account on Swarmpit by entering your username and password, then click Create Admin.
After creating your account, the browser will take you to the dashboard where you can manage the deployed services.
Conclusion
Docker Swarm and Swarmpit provide numerous benefits when deploying Appwrite. These tools enable rapid and efficient infrastructure setup and management. Swarmpit distinguishes itself by offering a user-friendly graphical interface on top of its swarm management capabilities.
Furthermore, you can gain numerous advantages for your Appwrite applications by leveraging the power of Docker Swarm and Swarmpit. Scalability, high availability, simplified updates, service discovery, load balancing, and centralized control are just a few examples.
Resources
You may find the following resources useful: