From Code to Cloud: Deploying Hybrid SSR-SPA App on AWS Using Bash Script Automation

Jeysson Aly Contreras - May 10 - Dev Community

Days ago, I wrote an article presenting an approach to building a hybrid SPA-SSR application with plain JavaScript. That example uses Express to handle requests for different routes, rendering the initial HTML content on the server and sending it to the client. Each route’s rendering function generates the appropriate HTML for that route. On the client side, the JavaScript code takes over after the initial HTML is loaded; I built a client-side router to manage navigation and content updates without full page reloads.

Today I will delve into the deployment process. Before jumping in, it’s worth mentioning that AWS offers several serverless and PaaS tools, such as AWS Amplify, AWS Lambda, and Elastic Beanstalk, which simplify deployment and reduce billing overhead. These tools integrate easily with popular frameworks like React, Angular, and Vue.js, letting developers deploy applications quickly without worrying about server management or infrastructure setup. In this tutorial, however, I will take a different approach and work directly with foundational infrastructure resources: AWS EC2 instances, VPC configurations, route tables, and more. This choice lets us explore how these components work together to host web applications in a cloud environment.

Continuing with the subject, and for learning purposes, this is the third part of a series of articles that delve into the intricacies of coding and deploying hybrid applications, this time with a more practical approach.

The main goal is to show how to deploy the same application on AWS without relying on any GitOps methodology. The idea is to demonstrate an alternative approach to application deployment that is not dependent on any configuration management tool, infrastructure automation tool, or CI/CD pipeline.

You may wonder why this matters when everyone talks about DevOps and automating the software development life cycle; shouldn’t you learn only that? Of course you should, but before jumping into a CI/CD pipeline, you should first learn to take control of the deployment process yourself. Developers often need to test new features or changes in real cloud environments to make sure they work as expected and don’t introduce issues, and configuring CI/CD pipelines or management tools can be complex and time-consuming, especially for small projects or when quick testing is needed.

Now, if you work hand in hand with the operations team, script-driven deployment will bring great benefits for both development and operations. This good practice contributes to time savings for operations so they can focus on higher-level tasks, such as optimizing infrastructure, monitoring system health, and ensuring security, besides providing executable documentation of the deployment process.

Let’s get into it.

I will follow step-by-step the following approach to implement a script-driven deployment:

Implementation of all AWS resources
Nginx Configuration
Self-signed certificates
Docker implementation
Installation and deployment scripts
Implementation of all AWS resources

This is the first step in the script-driven deployment process, where I create and configure all the necessary AWS resources, such as VPCs, EC2 instances, security groups, route tables, NAT gateways, and Internet gateways. This step ensures that the infrastructure is properly set up to support the deployment of our application. However, this article does not delve into that process, because I have dedicated an entire article to it. You can find more information about how to implement the cloud infrastructure that supports the current application here.

The cloud architecture diagram that we have to follow for this application is shown below.

[AWS architecture diagram]

It illustrates the various components and their interactions within the AWS environment, serving as a visual reference for the overall system design.

Nginx Configuration

I will set up a folder called “nginx” on the server and create a configuration file named nginx.conf within that folder. In this configuration file, I define the necessary settings for Nginx, such as server blocks, proxy settings, and SSL certificates if required. This ensures that Nginx is properly configured to handle incoming requests and route them to the appropriate backend services in our cloud infrastructure.

server {
    listen 80;
    listen [::]:80;
    server_name www.example.com;
    return 301 https://$host$request_uri; # Automatically redirect to HTTPS
    # SSL-related settings are omitted on port 80 since it only redirects
}
As you can see in the code above, the first server block automatically redirects all HTTP traffic to HTTPS, ensuring secure communication between the client and the server. The return directive with a 301 status code performs the redirection.
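To make the rewrite concrete, here is a tiny standalone simulation of what `return 301 https://$host$request_uri;` produces (a sketch only; the function name `redirect_target` is mine, not part of Nginx):

```shell
#!/bin/bash
# Simulate nginx's "return 301 https://$host$request_uri".
# Given the Host header and the raw request URI (path plus query string),
# build the Location header value the client would be redirected to.
redirect_target() {
    local host="$1" request_uri="$2"
    echo "https://${host}${request_uri}"
}

redirect_target "www.example.com" "/blog?page=2"
# prints: https://www.example.com/blog?page=2
```

Note that `$request_uri` in Nginx already includes the query string, which is why no extra handling is needed to preserve it across the redirect.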

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name www.example.com;

    ssl_certificate /var/certificates/nginx.crt;
    ssl_certificate_key /var/certificates/nginx.key;
    ssl_dhparam /var/certificates/dhparam.pem;
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers off; # Better security practice
    ssl_ciphers "TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    # Default location for other requests
    location / {
        proxy_pass http://10.0.2.239:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
The second block terminates TLS, presenting the self-signed certificate and key to the client so that all data transmitted between client and server is encrypted. Other SSL parameters, such as cipher suites and protocols, can also be tuned here to harden the server further. The location block sets up a reverse proxy, which forwards client requests to another server and relays the response back, acting as an intermediary. In this application, the reverse proxy forwards the client’s request to the Express server, which handles it and sends the response back through the proxy. This setup helps distribute the workload and adds a layer of security, since clients never talk to the backend server directly.

Self-Signed Certificates

Within the nginx folder, I create a script to generate self-signed certificates, which can be used for testing or development purposes. These certificates are not issued by a trusted certificate authority, but they can still provide encryption for communication between the server and clients.

#!/bin/bash

# Generate a self-signed certificate and key, valid for 1024 days
openssl req -x509 -nodes -new -sha256 -days 1024 -newkey rsa:2048 \
    -keyout nginx.key -out nginx.crt -subj "//x=1/C=US/CN=SPA-webpack-Express"

# Generate Diffie-Hellman parameters for stronger key exchange
openssl dhparam -out dhparam.pem 2048

# Collect the generated files into a certificates directory
rm -rf certificates
mkdir certificates
mv nginx.crt certificates/
mv nginx.key certificates/
mv dhparam.pem certificates/
By generating our own self-signed certificates, we have full control over the certificate creation process and can ensure that our server uses secure encryption. Run bash certificates.sh to generate the self-signed certificates.
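Before wiring the files into Nginx, it can be worth sanity-checking them. This small script is a sketch of such a check: it assumes openssl is on your PATH, works in a throwaway directory so the real certificates/ folder is untouched, and uses a plain /C=... subject (the leading // form in the script above is a Git-Bash-on-Windows path-mangling workaround):

```shell
#!/bin/bash
# Sanity-check a self-signed certificate and its private key.
set -e
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT
cd "$workdir"

# Same flags as certificates.sh, with a shorter validity for the test
openssl req -x509 -nodes -new -sha256 -days 30 -newkey rsa:2048 \
    -keyout nginx.key -out nginx.crt \
    -subj "/C=US/CN=SPA-webpack-Express" 2>/dev/null

# Print the subject and expiry date
openssl x509 -in nginx.crt -noout -subject -enddate

# Confirm that the key and certificate belong together
cert_mod=$(openssl x509 -in nginx.crt -noout -modulus)
key_mod=$(openssl rsa -in nginx.key -noout -modulus)
[ "$cert_mod" = "$key_mod" ] && echo "key and certificate match"
```

Comparing the RSA moduli is a quick way to catch the common mistake of pairing a certificate with the wrong key file.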

Docker Implementation

In the same folder, I also created a Dockerfile to containerize the Nginx server along with the self-signed certificates. This allows for easy deployment and scalability of the server in different environments. Additionally, using Docker ensures that the server and its dependencies are isolated, reducing potential conflicts and improving overall security.

FROM nginx:latest
LABEL maintainer="Jeysson Contreras <alyconr@hotmail.com>"
COPY nginx/nginx.conf /etc/nginx/conf.d/
COPY nginx/certificates /var/certificates
EXPOSE 80 443
ENTRYPOINT [ "nginx" ]
CMD ["-g", "daemon off;"]
The Dockerfile is simple and clear: it copies nginx.conf and the self-signed certificates into the container and exposes the necessary ports for communication. This keeps the server running in a controlled environment, making it easier to manage and maintain. The CMD instruction supplies the arguments passed to the ENTRYPOINT when the container starts; here, -g "daemon off;" keeps Nginx running in the foreground, which is required for the container to stay alive.

Docker Compose

In the Docker Compose configuration file, I define the services that will be run as part of the application stack.

version: '3.9'

services:
  nginx:
    container_name: spa-app-docker
    restart: always
    image: 810129/spa-app:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - mynetwork

networks:
  mynetwork:
Each service is defined with its own configuration, including the image to use, the ports to expose, and the network to connect to. The networking driver defaults to “bridge,” which lets containers on the same network communicate with each other by IP address or service name. You can also specify a different networking mode, such as “host” or “none,” depending on your application’s requirements. Additionally, you can define environment variables, volumes, and other settings for each service in the Docker Compose file to customize its behavior.
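If you prefer to make the default explicit, the network declaration can spell out the driver. This fragment is a sketch that matches the mynetwork name used above:

```yaml
networks:
  mynetwork:
    driver: bridge  # Compose's default driver, shown explicitly for clarity
```

Naming the driver explicitly costs nothing and makes the intent obvious to anyone reading the file later.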

Installations and deployment scripts

Installation Script

The first script to implement is installpackages.sh, which is responsible for installing all the packages and dependencies the application needs to run smoothly. This script ensures that the required software and libraries are installed on both instances.

#!/bin/bash

IP_SERVER="localhost"
IP_PRIVATE="localhost"
USER="root"
SSH_KEYPEM_PATH=~/Desktop/ssh-keys-aws/REDHAT-KEYS.pem
SSH_KEYPUB_PATH=~/.ssh/id_rsa.pub
FLAG_FILE=~/.ssh/key_copied_flag

Display help message

display_help() {
    cat <<EOF
Usage:
    installpackages.sh -m    Display this help message.
    installpackages.sh -u    Name of host's USER
    installpackages.sh -i    IP of the Public Server to install
    installpackages.sh -s    IP of the Private Server to install
EOF
    exit 0
}

Parse command-line options

parse_Options() {
    while getopts ":u:i:s:m" opt; do
        case $opt in
            u) USER="$OPTARG" ;;
            i) IP_SERVER="$OPTARG" ;;
            s) IP_PRIVATE="$OPTARG" ;;
            m) display_help ;;
            \?) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
            :) echo "Option -$OPTARG requires an argument." >&2; exit 1 ;;
        esac
    done
}

Display configuration settings

display_configuration() {
    echo "Your REMOTE PUBLIC SERVER IP is $IP_SERVER"
    echo "Your REMOTE PRIVATE IP is $IP_PRIVATE"
    echo "Your USER is $USER"
}

copy_public_key() {
    if [ ! -e "$FLAG_FILE" ]; then
        if [ -f "$SSH_KEYPEM_PATH" ]; then
            # Check if the public key exists
            if [ -f "$SSH_KEYPUB_PATH" ]; then
                cat "$SSH_KEYPUB_PATH" | ssh -i "$SSH_KEYPEM_PATH" "$USER@$IP_SERVER" "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
                scp -r "$SSH_KEYPEM_PATH" "$USER@$IP_SERVER":~/
                touch "$FLAG_FILE" # Create flag file to indicate key has been copied
            else
                echo "Public key not found"
            fi
        else
            echo "Private key not found"
        fi
    else
        echo "Flag file already exists"
    fi
}

Install packages on the first server

install_packages_server_one() {
    ssh -o ControlMaster=no -T -f "$USER@$IP_SERVER" '
        sudo apt-get update
        sudo apt-get install ca-certificates curl gnupg -y
        sudo install -m 0755 -d /etc/apt/keyrings
        curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        sudo chmod a+r /etc/apt/keyrings/docker.gpg
        echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
            sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
        sudo apt-get update -y
        sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
        sudo usermod -aG docker $USER
        # Group membership takes effect on the next login; avoid "newgrp docker"
        # here, as it would replace the non-interactive shell and halt the script.
        chmod 400 REDHAT-KEYS.pem
        IP_PRIVATE='$IP_PRIVATE'
        ssh -i REDHAT-KEYS.pem $USER@$IP_PRIVATE "
            sudo apt-get update
            sudo apt-get install nodejs npm lsof -y
            echo \"All packages installed\"
        "
    '
}
main() {
    parse_Options "$@"
    display_configuration
    copy_public_key
    install_packages_server_one
}

main "$@"
The installpackages.sh script defines a function, install_packages_server_one(), with all the install and configuration steps needed to run Docker and Docker Compose. Its final lines open an SSH connection to the second instance in order to install the Node.js and npm packages there.

As shown in the code above, the script installs Docker and Docker Compose on the public instance and the Node.js packages on the private instance, reaching it from the public subnet. The private instance is not reachable from outside, so the script relies on the public instance to connect to it via SSH, with its outbound package downloads going through the NAT gateway. Using the public instance as a bridge streamlines the installation and keeps npm package management simple, while enhancing security by limiting external access to the private instance and ensuring that only trusted SSH connections are established.
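As an aside, the same two-hop SSH pattern that the script implements by copying the .pem key to the bastion can also be expressed declaratively with ProxyJump in ~/.ssh/config. This is a sketch; the host aliases and the bastion address below are placeholders, not values from the article:

```
# ~/.ssh/config (sketch; aliases and addresses are placeholders)
Host bastion
    HostName 203.0.113.10          # public instance's Elastic IP
    User admin
    IdentityFile ~/Desktop/ssh-keys-aws/REDHAT-KEYS.pem

Host private-app
    HostName 10.0.2.239            # private instance in the private subnet
    User admin
    IdentityFile ~/Desktop/ssh-keys-aws/REDHAT-KEYS.pem
    ProxyJump bastion              # tunnel through the public instance
```

With this in place, a single `ssh private-app` reaches the private instance through the bastion, without ever copying the private key onto the public server.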

Deployment Script

First of all, I declare all the variables that are needed for the deployment script, and then I build the following functions:

#!/bin/bash

Default configuration

DOCKER_COMPOSE_DIR="WebpackBabelDockerDeploy"
EXPRESS_DIR="dist"
IP_DOCKER="localhost"
IP_EXPRESS="localhost"
USER="root"
BULD_ENV="prod"
DOCKERHUB_NAME_SPACE="810129"
VERSION="1.0.0"
DOCKER_TAG_SNAPSHOT="${VERSION}-$(git rev-parse --short HEAD)-SNAPSHOT"
SSH_KEYPEM_PATH=~/Desktop/ssh-keys-aws/REDHAT-KEYS.pem
SSH_KEYPUB_PATH=~/.ssh/id_rsa.pub
FLAG_FILE=~/.ssh/key_copied_flag

Display usage help

display_help() {
    cat <<EOF
Usage in order:
    deployment.sh -m    Display this help message.
    deployment.sh -u    Name of USER to deploy to the host
    deployment.sh -i    IP of the host to deploy Docker Compose
    deployment.sh -s    IP of the host to deploy Express server
    deployment.sh -d    Name of the Docker Compose Directory
    deployment.sh -f    Name of the Express Directory
    deployment.sh -e    Choose build environment: 'prod' or 'dev' (default is 'prod')
EOF
    exit 0
}
The display_help() function prints a usage message explaining how to use the script and the available command-line options.

parse_options() {
    while getopts ":u:i:s:d:f:e:m" opt; do
        case $opt in
            d) DOCKER_COMPOSE_DIR="$OPTARG" ;;
            f) EXPRESS_DIR="$OPTARG" ;;
            i) IP_DOCKER="$OPTARG" ;;
            s) IP_EXPRESS="$OPTARG" ;;
            u) USER="$OPTARG" ;;
            e) BULD_ENV="$OPTARG" ;;
            m) display_help ;;
            \?) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
            :) echo "Option -$OPTARG requires an argument." >&2; exit 1 ;;
        esac
    done
}
The parse_options function uses the getopts builtin to parse the command-line options provided when running the script. It lets you customize the script’s behavior by specifying options such as the Docker Compose directory, the Express directory, the target IPs, the user, and the build environment.

Copy public key to remote servers

copy_public_key() {
    if [ ! -e "$FLAG_FILE" ]; then
        if [ -f "$SSH_KEYPEM_PATH" ] && [ -f "$SSH_KEYPUB_PATH" ]; then
            cat "$SSH_KEYPUB_PATH" | ssh -i "$SSH_KEYPEM_PATH" "$USER@$IP_DOCKER" "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
            scp -r "$SSH_KEYPEM_PATH" "$USER@$IP_DOCKER":~/
            touch "$FLAG_FILE" # Create flag file to indicate key has been copied
        else
            echo "Public or Private key not found"
        fi
    else
        echo "Flag file already exists"
    fi
}
The copy_public_key() function checks whether a flag file exists and, if not, copies the SSH public key to the public instance. The purpose of copying the public key is to establish a secure, passwordless SSH connection for future interactions with this instance. This way, you can automate tasks like copying files, executing commands, and deploying applications without entering a password each time, enhancing both security and convenience in the deployment process.
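The flag file is a simple idempotency guard: the expensive copy runs only on the first invocation. This standalone sketch shows the behavior with no SSH involved (an echo stands in for the real ssh/scp key copy):

```shell
#!/bin/bash
# Sketch of the flag-file idempotency pattern from copy_public_key.
FLAG_FILE=$(mktemp -d)/key_copied_flag

copy_public_key() {
    if [ ! -e "$FLAG_FILE" ]; then
        echo "copying key..."   # real script: ssh + scp happen here
        touch "$FLAG_FILE"      # record that the copy has already run
    else
        echo "Flag file already exists"
    fi
}

copy_public_key   # first run performs the copy
copy_public_key   # second run is a no-op
# prints:
# copying key...
# Flag file already exists
```

Deleting the flag file (~/.ssh/key_copied_flag in the real script) is how you force the key copy to run again, for example after rebuilding the EC2 instance.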

Delete old versions and clean up Docker containers on Docker Compose host

cleanup_remote_hosts() {
    if [ -d "$DOCKER_COMPOSE_DIR" ]; then
        rm -Rf "$DOCKER_COMPOSE_DIR" ./"$DOCKER_COMPOSE_DIR.tar.gz"
        rm -Rf "$EXPRESS_DIR" ./"$EXPRESS_DIR.tar.gz"

        ssh -T "$USER@$IP_DOCKER" <<EOF
            rm -Rf ~/PROJECTS/
            mkdir ~/PROJECTS/
            chmod -R 777 ~/PROJECTS/
            docker stop \$(docker ps -a -q)
            docker rm -f \$(docker ps -a -q)
            sudo docker rmi "$DOCKERHUB_NAME_SPACE/spa-app:$DOCKER_TAG_SNAPSHOT"
            chmod 400 ~/REDHAT-KEYS.pem
            ssh -i ~/REDHAT-KEYS.pem $USER@$IP_EXPRESS "
                sudo rm -Rf ~/EXPRESS/
                mkdir ~/EXPRESS/
                chmod -R 777 ~/EXPRESS/
            "
EOF
    fi
}
The cleanup_remote_hosts() function is responsible for cleaning up old versions and Docker containers on the Docker Compose host ($IP_DOCKER) and the Express server host ($IP_EXPRESS). It removes old project directories, stops and removes Docker containers, and even removes Docker images to start fresh with each deployment.

Build and bundle the app based on the build environment

build_and_bundle_app() {
    if [ "$BULD_ENV" == "dev" ]; then
        npm run build-dev
    else
        npm run build-prod
    fi

    docker build -t "$DOCKERHUB_NAME_SPACE/spa-app:$DOCKER_TAG_SNAPSHOT" -f nginx/Dockerfile .
    for component in spa-app; do
        docker push "$DOCKERHUB_NAME_SPACE/$component:$DOCKER_TAG_SNAPSHOT"
    done
}
Build and Bundle App: The build_and_bundle_app function is responsible for building and bundling the web application based on the specified build environment ($BULD_ENV). It typically runs npm run build-dev or npm run build-prod and then builds a Docker image for the application.
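The snapshot tag ties each pushed image to the commit it was built from. Here is the tag computation on its own, written out as a corrected, standalone sketch (with a fallback placeholder sha for when the command runs outside a git repository):

```shell
#!/bin/bash
# Build a snapshot tag of the form <version>-<short-sha>-SNAPSHOT.
VERSION="1.0.0"

# Short commit hash of HEAD; fall back to a placeholder outside a repo
GIT_SHA=$(git rev-parse --short HEAD 2>/dev/null || echo "nogit")

DOCKER_TAG_SNAPSHOT="${VERSION}-${GIT_SHA}-SNAPSHOT"
echo "$DOCKER_TAG_SNAPSHOT"   # e.g. 1.0.0-3f2a91c-SNAPSHOT
```

Because the tag embeds the commit hash, every deployment produces a distinct image reference, which is what lets cleanup_remote_hosts remove exactly the previous snapshot and the sed step in deploy_services pin docker-compose.yml to the new one.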

Deploy Docker Compose and Express server

deploy_services() {
    # Create and copy project directories
    mkdir "$DOCKER_COMPOSE_DIR"
    cp -r ./nginx "./$DOCKER_COMPOSE_DIR"
    cp -r ./docker-compose.yml "./$DOCKER_COMPOSE_DIR"
    cp -r ./package.json "./$DOCKER_COMPOSE_DIR"

    mkdir "$EXPRESS_DIR"
    cp -r ./dist "./$EXPRESS_DIR"
    cp -r ./server.js "./$EXPRESS_DIR"
    cp -r ./package.json "./$EXPRESS_DIR"

    # Compress the working directories
    tar -zcvf "$DOCKER_COMPOSE_DIR.tar.gz" "$DOCKER_COMPOSE_DIR"
    tar -zcvf "$EXPRESS_DIR.tar.gz" "$EXPRESS_DIR"

    # Copy compressed directories to hosts
    scp -r ./"$DOCKER_COMPOSE_DIR.tar.gz" "$USER@$IP_DOCKER:~/PROJECTS/"
    scp -r ./"$EXPRESS_DIR.tar.gz" "$USER@$IP_DOCKER:~/"

    # If the copy is successful
    if [ $? -eq 0 ]; then
        echo "Copy Folders successful"
        ssh -T "$USER@$IP_DOCKER" <<EOF
            cd ~/PROJECTS/
            tar -xvzf "$DOCKER_COMPOSE_DIR.tar.gz"
            cd "$DOCKER_COMPOSE_DIR"
            sed -i "s#image: $DOCKERHUB_NAME_SPACE/spa-app:latest#image: $DOCKERHUB_NAME_SPACE/spa-app:$DOCKER_TAG_SNAPSHOT#g" docker-compose.yml
            docker compose up -d
            chmod 400 ~/REDHAT-KEYS.pem
            scp -i ~/REDHAT-KEYS.pem -r ~/"$EXPRESS_DIR.tar.gz" $USER@$IP_EXPRESS:~/EXPRESS/
            ssh -T -i ~/REDHAT-KEYS.pem "$USER@$IP_EXPRESS" '
                cd ~/EXPRESS/
                tar -xvzf "$EXPRESS_DIR.tar.gz"
                cd "$EXPRESS_DIR"
                sudo npm install
                sudo npm install -g pm2
                sudo pm2 start server.js -f
            '
EOF
    else
        echo "Deployment failed"
    fi
}
Deploy Services: The deploy_services function copies project directories and compressed files to the remote hosts, updates the Docker Compose configuration to use the new Docker image, and then starts the Docker Compose services. It also copies the Express application to the Express server host, installs its dependencies, and starts the application with pm2.

Main function

main() {
    parse_options "$@"
    display_configuration   # analogous helper to the one in installpackages.sh
    copy_public_key
    cleanup_remote_hosts
    build_and_bundle_app
    deploy_services
}

Run the script

main "$@"
Main Function: The main function orchestrates the entire deployment process by calling the other functions in the appropriate order.

You can run the script from your terminal:

bash deployment.sh -u $USER -i $IP_DOCKER -s $IP_EXPRESS -d $DOCKER_COMPOSE_DIR -f $EXPRESS_DIR -e $BUILD_ENV

To see how to use the script, which options are available, and the order in which to supply them, use the -m option: bash deployment.sh -m.

Overall, this script automates the deployment of a web application by performing tasks such as copying files, setting up SSH keys, cleaning up old resources, building and bundling the application, and deploying it to specified remote hosts. It offers flexibility through command-line options to customize the deployment configuration.

Conclusion
In this guide, I talked about nginx, OpenSSL, docker, aws, and bash script, which are all important tools and technologies used in the deployment process of a web application. By understanding how these components work together, you can effectively automate the deployment process and save time and effort in managing your web application deployments. With the knowledge gained from this guide, you can confidently deploy your web applications with ease and efficiency, without configuring any pipelines or manually setting up servers. Bash scripts are a great way to automate repetitive tasks and streamline the deployment process for web applications. Furthermore, scripting helps to make executable documentation, making it easier for others to understand and replicate the deployment process. With the ability to customize deployment configurations through command-line options, developers can easily adapt the automation process to fit their specific needs.

If you have enjoyed this article, please give some claps and comment below. I share the repo where you will find the complete example.
