You can use GitLab CI with Docker to create Docker images. To run Docker commands in your GitLab pipeline, you must configure GitLab Runner to support them [1].
There are three ways to enable Docker commands:
- The shell executor
- The Docker executor with the Docker image (Docker-in-Docker)
- Docker socket binding
Executing the runner in privileged mode is required for these approaches. If you want to use docker build without privileged mode, you can instead use:
- Kaniko
- External services like Google Cloud Build or AWS CodeBuild.
With both of these alternatives, you need to store credentials in GitLab CI as secrets or variables. On Amazon Web Services, however, you can avoid storing credentials by binding the Kubernetes service account attached to a specific runner to an IAM role. This can be achieved by enabling IAM Roles for Service Accounts (IRSA) on the EKS cluster.
IRSA lets us start AWS CodeBuild jobs from a GitLab CI pipeline without storing credentials or connecting to a Docker registry: we let AWS CodeBuild do the work for us.
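Under IRSA, the runner pod receives its AWS identity through two injected environment variables rather than stored keys. The values below are illustrative, but the variable names are what the EKS pod identity webhook actually injects:

```shell
# Illustrative values only: inside a pod whose service account carries the
# eks.amazonaws.com/role-arn annotation, the EKS pod identity webhook
# injects these two environment variables automatically.
AWS_ROLE_ARN="arn:aws:iam::123456789012:role/gitlab-runner-role"
AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
# The AWS CLI/SDK reads them and calls sts:AssumeRoleWithWebIdentity under
# the hood, so no access keys ever enter the pipeline.
echo "$AWS_ROLE_ARN"
```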
Architecture
In this blog post, we configure the GitLab runner to start AWS CodeBuild jobs. The process consists of the following steps:
- Creating an EKS cluster using eksctl
- Creating an IAM role to start a build job and access the build information
- Attaching the IAM role to the specific runner
- Creating a build project
- Creating an ECR repository
- Configuring the build specification
If you want to understand how an IAM role can be attached to a GitLab runner, please refer to my previous post on Securing access to AWS IAM Roles from Gitlab CI.
EKS configuration
We start by creating the EKS cluster.
export AWS_PROFILE=<AWS_PROFILE>
export AWS_REGION=eu-west-1
export EKS_CLUSTER_NAME=devops
export EKS_VERSION=1.19
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--version $EKS_VERSION \
--region $AWS_REGION \
--managed \
--node-labels "nodepool=dev"
- Create an IAM OIDC identity provider for the cluster
eksctl utils associate-iam-oidc-provider --cluster=$EKS_CLUSTER_NAME --approve
ISSUER_URL=$(aws eks describe-cluster \
--name $EKS_CLUSTER_NAME \
--query cluster.identity.oidc.issuer \
--output text)
- Configure kubectl to communicate with the cluster:
aws eks --region $AWS_REGION update-kubeconfig --name $EKS_CLUSTER_NAME
- Create the namespace to use for the Kubernetes service account.
kubectl create namespace dev
- Create the Kubernetes service account to use for the specific runner:
kubectl create serviceaccount -n dev app-deployer
- Allow the Kubernetes service account to impersonate the IAM role by creating a trust relationship between the two. This binding allows the Kubernetes service account to assume the IAM role.
ISSUER_HOSTPATH=$(echo $ISSUER_URL | cut -f 3- -d'/')
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
PROVIDER_ARN="arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/$ISSUER_HOSTPATH"
cat > oidc-trust-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "$PROVIDER_ARN"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${ISSUER_HOSTPATH}:sub": "system:serviceaccount:dev:app-deployer",
"${ISSUER_HOSTPATH}:aud": "sts.amazonaws.com"
}
}
}
]
}
EOF
GITLAB_ROLE_NAME=gitlab-runner-role
aws iam create-role \
--role-name $GITLAB_ROLE_NAME \
--assume-role-policy-document file://oidc-trust-policy.json
GITLAB_ROLE_ARN=$(aws iam get-role \
--role-name $GITLAB_ROLE_NAME \
--query Role.Arn --output text)
- Add the eks.amazonaws.com/role-arn=$GITLAB_ROLE_ARN annotation to the Kubernetes service account, using the IAM role ARN:
kubectl annotate serviceAccount app-deployer -n dev eks.amazonaws.com/role-arn=$GITLAB_ROLE_ARN
You could also use eksctl create iamserviceaccount [..] [2]
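A detail worth noting from the snippet above: cut -f 3- -d'/' derives ISSUER_HOSTPATH by stripping the https:// scheme from the issuer URL, which is the form IAM expects in the provider ARN and the trust-policy condition keys. A quick local check, using a made-up issuer ID:

```shell
# Made-up issuer ID; real ones come from `aws eks describe-cluster`.
ISSUER_URL="https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE1234567890"
# Fields are split on '/', so fields 3 onward drop "https:" and the empty
# field between the two slashes, leaving host + path.
ISSUER_HOSTPATH=$(echo "$ISSUER_URL" | cut -f 3- -d'/')
echo "$ISSUER_HOSTPATH"
# oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE1234567890
```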
Configure CodeBuild
Create a service role for CodeBuild
IMAGE_REPO_NAME="app"
cat > codebuild-trust-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
CODE_BUILD_ROLE_NAME="app-code-build-service-role"
aws iam create-role \
--role-name $CODE_BUILD_ROLE_NAME \
--assume-role-policy-document file://codebuild-trust-policy.json
CODEBUILD_ROLE_ARN=$(aws iam get-role \
--role-name $CODE_BUILD_ROLE_NAME \
--query Role.Arn --output text)
cat > code-build-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudWatchLogsPolicy",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"*"
]
},
{
"Sid": "S3GetObjectPolicy",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket/*"
]
},
{
"Sid": "ECRPullPolicy",
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage"
],
"Resource": [
"arn:aws:ecr:$AWS_REGION:$AWS_ACCOUNT_ID:repository/$IMAGE_REPO_NAME"
]
},
{
"Sid": "ECRAuthPolicy",
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken"
],
"Resource": [
"*"
]
},
{
"Sid": "ECRPushPolicy",
"Effect": "Allow",
"Action": [
"ecr:CompleteLayerUpload",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
],
"Resource": [
"arn:aws:ecr:$AWS_REGION:$AWS_ACCOUNT_ID:repository/$IMAGE_REPO_NAME"
]
},
{
"Sid": "S3BucketIdentity",
"Effect": "Allow",
"Action": [
"s3:GetBucketAcl",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket"
}
]
}
EOF
aws iam put-role-policy \
--role-name $CODE_BUILD_ROLE_NAME \
--policy-name app-code-build-policy \
--policy-document file://code-build-policy.json
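Because the policy heredocs interpolate shell variables, it is worth making sure the rendered file is still valid JSON before handing it to IAM. A minimal sanity check with jq (assuming jq is available locally); the trimmed policy below is only an example:

```shell
# Render a trimmed example the same way the heredocs above do, then let
# jq fail the step if the interpolation broke the JSON.
AWS_REGION="eu-west-1"
AWS_ACCOUNT_ID="123456789012"
cat > /tmp/policy-check.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["logs:PutLogEvents"],
      "Resource": ["arn:aws:logs:$AWS_REGION:$AWS_ACCOUNT_ID:*"]
    }
  ]
}
EOF
# -e makes jq exit non-zero on parse errors or null output.
jq -e '.Statement | length' /tmp/policy-check.json
```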
Create the ECR Repository
aws ecr create-repository --repository-name $IMAGE_REPO_NAME
Create the CodeBuild project and the associated bucket to store artifacts
CODEBUILD_BUCKET=codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket
aws s3api create-bucket \
--bucket $CODEBUILD_BUCKET \
--region $AWS_REGION \
--create-bucket-configuration LocationConstraint=$AWS_REGION
aws s3api put-public-access-block \
--bucket $CODEBUILD_BUCKET \
--public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
cat > project.json << EOF
{
"name": "build-app-docker-image",
"source": {
"type": "S3",
"location": "codebuild-<AWS_REGION>-<AWS_ACCOUNT_ID>-input-bucket/app/<IMAGE_TAG>/image.zip"
},
"artifacts": {
"type": "NO_ARTIFACTS"
},
"environment": {
"type": "LINUX_CONTAINER",
"image": "aws/codebuild/standard:4.0",
"computeType": "BUILD_GENERAL1_SMALL",
"environmentVariables": [
{
"name": "AWS_REGION",
"value": "<AWS_REGION>"
},
{
"name": "AWS_ACCOUNT_ID",
"value": "<AWS_ACCOUNT_ID>"
},
{
"name": "IMAGE_REPO_NAME",
"value": "<IMAGE_REPO_NAME>"
},
{
"name": "IMAGE_TAG",
"value": "<IMAGE_TAG>"
}
],
"privilegedMode": true
},
"serviceRole": "<ROLE_ARN>"
}
EOF
IMAGE_TAG="latest"
sed -i "s/<IMAGE_REPO_NAME>/$IMAGE_REPO_NAME/g; s/<IMAGE_TAG>/$IMAGE_TAG/g; s/<AWS_REGION>/$AWS_REGION/g; s/<AWS_ACCOUNT_ID>/$AWS_ACCOUNT_ID/g; s,<ROLE_ARN>,$CODEBUILD_ROLE_ARN,g;" project.json
aws codebuild create-project --cli-input-json file://project.json
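Note that the <ROLE_ARN> substitution in the sed one-liner uses commas as the delimiter, because the ARN itself contains slashes that would otherwise terminate the s/…/…/ expression. The substitution pattern can be reproduced on a throwaway file:

```shell
# Minimal reproduction of the placeholder substitution used for project.json.
IMAGE_REPO_NAME="app"
IMAGE_TAG="latest"
ROLE_ARN="arn:aws:iam::123456789012:role/app-code-build-service-role"
printf '%s\n' '<IMAGE_REPO_NAME>:<IMAGE_TAG> <ROLE_ARN>' > /tmp/placeholders.txt
# Slash delimiters are fine for plain values; commas for the slash-laden ARN.
sed -i "s/<IMAGE_REPO_NAME>/$IMAGE_REPO_NAME/g; s/<IMAGE_TAG>/$IMAGE_TAG/g; s,<ROLE_ARN>,$ROLE_ARN,g" /tmp/placeholders.txt
cat /tmp/placeholders.txt
# app:latest arn:aws:iam::123456789012:role/app-code-build-service-role
```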
Finally, we create the build.json, the buildspec.yml and the Dockerfile for our test.
cat > build.json << EOF
{
"projectName": "app",
"sourceLocationOverride": "codebuild-<AWS_REGION>-<AWS_ACCOUNT_ID>-input-bucket/app/<IMAGE_TAG>/image.zip",
"environmentVariablesOverride": [
{
"name": "IMAGE_TAG",
"value": "<IMAGE_TAG>",
"type": "PLAINTEXT"
}
]
}
EOF
buildspec.yml
version: 0.2
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
- docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker image...
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
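The image URI used in the tag and push commands follows ECR's fixed registry naming pattern, <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>. With illustrative values:

```shell
# Assemble an ECR image URI from its parts; values are illustrative.
AWS_ACCOUNT_ID="123456789012"
AWS_REGION="eu-west-1"
IMAGE_REPO_NAME="app"
IMAGE_TAG="latest"
ECR_IMAGE="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG"
echo "$ECR_IMAGE"
# 123456789012.dkr.ecr.eu-west-1.amazonaws.com/app:latest
```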
cat > Dockerfile << EOF
FROM nginx
ENV AUTHOR=Dev.to
WORKDIR /usr/share/nginx/html
COPY hello_docker.html /usr/share/nginx/html
CMD cd /usr/share/nginx/html && sed -e s/Docker/"$AUTHOR"/ hello_docker.html > index.html ; nginx -g 'daemon off;'
EOF
cat > hello_docker.html << EOF
<!DOCTYPE html><html>
<head>
<meta charset="utf-8">
</head>
<body>
<h1 id="toc_0">Hello Docker!</h1>
<p>This is being served from a <b>docker</b><br>
container running Nginx.</p>
</body>
</html>
EOF
Assign the KSA to the GitLab runner
The next step is to assign the Kubernetes service account (KSA) to our GitLab runner.
- Start by installing Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Add the GitLab Helm repository:
helm repo add gitlab https://charts.gitlab.io
- Configure the runner by creating the file values.yaml:
cat > values.yaml << EOF
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<REGISTRATION_TOKEN>"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
create: true
metrics:
enabled: true
runners:
image: ubuntu:18.04
locked: true
pollTimeout: 360
protected: true
serviceAccountName: app-deployer
privileged: false
namespace: dev
builds:
cpuRequests: 100m
memoryRequests: 128Mi
services:
cpuRequests: 100m
memoryRequests: 128Mi
helpers:
cpuRequests: 100m
memoryRequests: 128Mi
tags: "k8s-dev-runner"
nodeSelector:
nodepool: dev
EOF
You can find the description of each attribute in the GitLab Runner chart repository [3].
Get the GitLab registration token in Project -> Settings -> CI/CD -> Runners, in the Set up a specific Runner manually section. Then install the runner:
helm install -n dev docker-image-dev-runner -f values.yaml gitlab/gitlab-runner
Using the specific runner in Gitlab CI
Before running our first pipeline in GitLab CI, let's add the permissions required to start CodeBuild jobs to the IAM role we created earlier.
cat > build-docker-image-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CodeBuildStartPolicy",
"Effect": "Allow",
"Action": [
"codebuild:StartBuild",
"codebuild:BatchGet*"
],
"Resource": [
"arn:aws:codebuild:$AWS_REGION:$AWS_ACCOUNT_ID:project/build-app-docker-image"
]
},
{
"Sid": "LogsAccessPolicy",
"Effect": "Allow",
"Action": [
"logs:FilterLogEvents"
],
"Resource": [
"*"
]
},
{
"Sid": "S3ObjectCodeBuildPolicy",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket",
"arn:aws:s3:::codebuild-$AWS_REGION-$AWS_ACCOUNT_ID-input-bucket/app/*"
]
}
]
}
EOF
aws iam put-role-policy \
--role-name $GITLAB_ROLE_NAME \
--policy-name build-docker-image-policy \
--policy-document file://build-docker-image-policy.json
Note: this policy is provided as an example. AWS recommends using fine-grained permissions.
Now we can create our pipeline, .gitlab-ci.yml:
stages:
- dev
before_script:
- yum install -y zip jq
publish image:
stage: dev
image:
name: amazon/aws-cli
script:
- IMAGE_TAG=$CI_COMMIT_TAG-$CI_COMMIT_SHORT_SHA
- sed -i "s/<IMAGE_TAG>/$IMAGE_TAG/g; s/<AWS_REGION>/$AWS_REGION/g; s/<AWS_ACCOUNT_ID>/$AWS_ACCOUNT_ID/g;" build.json
- zip -r image.zip buildspec.yml Dockerfile hello_docker.html
- aws s3api put-object --bucket $CODEBUILD_BUCKET --key app/$IMAGE_TAG/image.zip --body image.zip
- CODEBUILD_ID=$(aws codebuild start-build --project-name "build-app-docker-image" --cli-input-json file://build.json | jq -r '.build.id')
- sleep 5
- CODEBUILD_JOB=$(aws codebuild batch-get-builds --ids $CODEBUILD_ID)
- LOG_GROUP_NAME=$(jq -r '.builds[0].logs.groupName' <<< "$CODEBUILD_JOB")
- |
if [[ ${CODEBUILD_ID} != "" ]];
then
while true
do
sleep 10
aws logs tail $LOG_GROUP_NAME --since 10s
CODE_BUILD_STATUS=$(aws codebuild batch-get-builds --ids "$CODEBUILD_ID" | jq '.builds[].phases[] | select (.phaseType=="BUILD") | .phaseStatus' | tr -d '"')
if [[ ${CODE_BUILD_STATUS} = "FAILED" ]];
then
exit 1
elif [[ ${CODE_BUILD_STATUS} = "SUCCEEDED" ]];
then
break
fi
done
else
echo "Build initialization has failed"
exit 1
fi
tags:
- k8s-dev-runner
only:
refs:
- tags
The job will:
- Zip the necessary files to build the image
- Upload the zip file to the S3 bucket
- Start the build and get the generated ID
- Get the log group
- Show the code build logs
- Wait until the build is finished
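The wait loop in the job can be exercised offline by stubbing the batch-get-builds call. In this sketch, a hypothetical poll_status function (not part of the pipeline) reports IN_PROGRESS twice before SUCCEEDED:

```shell
# Stub standing in for `aws codebuild batch-get-builds | jq ...`;
# it flips to SUCCEEDED on the third poll.
ATTEMPT=0
poll_status() {
  ATTEMPT=$((ATTEMPT + 1))
  if [ "$ATTEMPT" -lt 3 ]; then
    CODE_BUILD_STATUS="IN_PROGRESS"
  else
    CODE_BUILD_STATUS="SUCCEEDED"
  fi
}

# Same control flow as the pipeline: keep polling until a terminal state.
while true; do
  poll_status
  if [ "$CODE_BUILD_STATUS" = "FAILED" ]; then
    echo "Build failed"
    exit 1
  elif [ "$CODE_BUILD_STATUS" = "SUCCEEDED" ]; then
    echo "Build finished after $ATTEMPT polls"
    break
  fi
done
```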
Let's start our pipeline.
Push the following files to your GitLab project. Do not forget to tag the commit.
- project.json
- build.json
- buildspec.yml
- Dockerfile
- hello_docker.html
- values.yaml
- .gitlab-ci.yml
Add the following CI/CD variables:
AWS_ACCOUNT_ID=$AWS_ACCOUNT_ID
AWS_REGION=$AWS_REGION
CODEBUILD_BUCKET=$CODEBUILD_BUCKET
Now you can run the pipeline.
GitLab job output:
Running with gitlab-runner 13.9.0 (2ebc4dc4)
on docker-image-dev-runner-gitlab-runner-5d98965dc9-2tr44 LEPNxeEr
Resolving secrets
00:00
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: dev
WARNING: Pulling GitLab Runner helper image from Docker Hub. Helper image is migrating to registry.gitlab.com, for more information see https://docs.gitlab.com/runner/configuration/advanced-configuration.html#migrating-helper-image-to-registrygitlabcom
Using Kubernetes executor with image amazon/aws-cli ...
Preparing environment
00:06
Waiting for pod dev/runner-lepnxeer-project-25941042-concurrent-0lj8mk to be running, status is Pending
Waiting for pod dev/runner-lepnxeer-project-25941042-concurrent-0lj8mk to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
Running on runner-lepnxeer-project-25941042-concurrent-0lj8mk via docker-image-dev-runner-gitlab-runner-5d98965dc9-2tr44...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/stack-labs/internal/sandbox/chabanerefes/code-build-gitlab-ci/.git/
Created fresh repository.
Checking out 4b3e0451 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:58
$ yum install -y zip jq
Loaded plugins: ovl, priorities
Resolving Dependencies
--> Running transaction check
---> Package jq.x86_64 0:1.5-1.amzn2.0.2 will be installed
--> Processing Dependency: libonig.so.2()(64bit) for package: jq-1.5-1.amzn2.0.2.x86_64
---> Package zip.x86_64 0:3.0-11.amzn2.0.2 will be installed
--> Running transaction check
---> Package oniguruma.x86_64 0:5.9.6-1.amzn2.0.4 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
jq x86_64 1.5-1.amzn2.0.2 amzn2-core 154 k
zip x86_64 3.0-11.amzn2.0.2 amzn2-core 263 k
Installing for dependencies:
oniguruma x86_64 5.9.6-1.amzn2.0.4 amzn2-core 127 k
Transaction Summary
================================================================================
Install 2 Packages (+1 Dependent package)
Total download size: 543 k
Installed size: 1.6 M
Downloading packages:
--------------------------------------------------------------------------------
Total 2.8 MB/s | 543 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : oniguruma-5.9.6-1.amzn2.0.4.x86_64 1/3
Installing : jq-1.5-1.amzn2.0.2.x86_64 2/3
Installing : zip-3.0-11.amzn2.0.2.x86_64 3/3
Verifying : zip-3.0-11.amzn2.0.2.x86_64 1/3
Verifying : oniguruma-5.9.6-1.amzn2.0.4.x86_64 2/3
Verifying : jq-1.5-1.amzn2.0.2.x86_64 3/3
Installed:
jq.x86_64 0:1.5-1.amzn2.0.2 zip.x86_64 0:3.0-11.amzn2.0.2
Dependency Installed:
oniguruma.x86_64 0:5.9.6-1.amzn2.0.4
Complete!
$ IMAGE_TAG="v0.1.0-$CI_COMMIT_SHORT_SHA"
$ sed -i "s/<IMAGE_TAG>/$IMAGE_TAG/g; s/<AWS_REGION>/$AWS_REGION/g; s/<AWS_ACCOUNT_ID>/$AWS_ACCOUNT_ID/g;" build.json
$ zip -r image.zip buildspec.yml Dockerfile hello_docker.html
updating: Dockerfile (deflated 34%)
updating: hello_docker.html (deflated 22%)
updating: buildspec.yml (deflated 61%)
$ aws s3api put-object --bucket $CODEBUILD_BUCKET --key app/$IMAGE_TAG/image.zip --body image.zip
{
"ETag": "\"adaa387a8c8186972f83cc03ef85c0d9\""
}
$ CODEBUILD_ID=$(aws codebuild start-build --project-name "build-app-docker-image" --cli-input-json file://build.json | jq -r '.build.id')
$ sleep 5
$ CODEBUILD_JOB=$(aws codebuild batch-get-builds --ids $CODEBUILD_ID)
$ LOG_GROUP_NAME=$(jq -r '.builds[0].logs.groupName' <<< "$CODEBUILD_JOB")
$ if [[ ${CODEBUILD_ID} != "" ]]; # collapsed multi-line command
2021/04/16 16:40:28 Waiting for agent ping
2021/04/16 16:40:30 Waiting for DOWNLOAD_SOURCE
2021/04/16 16:40:31 Phase is DOWNLOAD_SOURCE
2021/04/16 16:40:31 CODEBUILD_SRC_DIR=/codebuild/output/src352441152/src
2021/04/16 16:40:31 YAML location is /codebuild/output/src352441152/src/buildspec.yml
2021/04/16 16:40:31 Processing environment variables
2021/04/16 16:40:31 No runtime version selected in buildspec.
2021/04/16 16:40:31 Moving to directory /codebuild/output/src352441152/src
2021/04/16 16:40:31 Registering with agent
2021/04/16 16:40:31 Phases found in YAML: 3
2021/04/16 16:40:31 PRE_BUILD: 2 commands
2021/04/16 16:40:31 BUILD: 4 commands
2021/04/16 16:40:31 POST_BUILD: 3 commands
2021/04/16 16:40:31 Phase complete: DOWNLOAD_SOURCE State: SUCCEEDED
2021/04/16 16:40:31 Phase context status code: Message:
2021/04/16 16:40:31 Entering phase INSTALL
2021/04/16 16:40:31 Phase complete: INSTALL State: SUCCEEDED
2021/04/16 16:40:31 Phase context status code: Message:
2021/04/16 16:40:31 Entering phase PRE_BUILD
2021/04/16 16:40:31 Running command echo Logging in to Amazon ECR...
2021-04-16T16:40:35.757000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c Logging in to Amazon ECR...
2021-04-16T16:40:35.757000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
2021/04/16 16:40:31 Running command aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2021/04/16 16:40:37 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2021/04/16 16:40:37 Phase context status code: Message:
[Container] 2021/04/16 16:40:37 Entering phase BUILD
[Container] 2021/04/16 16:40:37 Running command echo Build started on `date`
Build started on Fri Apr 16 16:40:37 UTC 2021
[Container] 2021/04/16 16:40:37 Running command echo Building the Docker image...
Building the Docker image...
[Container] 2021/04/16 16:40:37 Running command docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
Sending build context to Docker daemon 4.608kB
Step 1/5 : FROM nginx
latest: Pulling from library/nginx
f7ec5a41d630: Pulling fs layer
aa1efa14b3bf: Pulling fs layer
b78b95af9b17: Pulling fs layer
c7d6bca2b8dc: Pulling fs layer
cf16cd8e71e0: Pulling fs layer
0241c68333ef: Pulling fs layer
c7d6bca2b8dc: Waiting
cf16cd8e71e0: Waiting
0241c68333ef: Waiting
b78b95af9b17: Verifying Checksum
b78b95af9b17: Download complete
aa1efa14b3bf: Verifying Checksum
aa1efa14b3bf: Download complete
f7ec5a41d630: Download complete
c7d6bca2b8dc: Verifying Checksum
c7d6bca2b8dc: Download complete
0241c68333ef: Verifying Checksum
0241c68333ef: Download complete
cf16cd8e71e0: Verifying Checksum
cf16cd8e71e0: Download complete
f7ec5a41d630: Pull complete
---> Running in 47fe16263fa9
Removing intermediate container 47fe16263fa9
---> f39182f28f46
Step 3/5 : WORKDIR /usr/share/nginx/html
---> Running in ab16c2902110
Removing intermediate container ab16c2902110
---> 1af4cd082179
Step 4/5 : COPY hello_docker.html /usr/share/nginx/html
---> b198e809d3bd
Step 5/5 : CMD cd /usr/share/nginx/html && sed -e s/Docker/""/ hello_docker.html > index.html ; nginx -g 'daemon off;'
---> Running in 7ab6d9888ce7
Removing intermediate container 7ab6d9888ce7
---> 124aa6c81ee7
Successfully built 124aa6c81ee7
Successfully tagged app:v0.1.0-4b3e0451
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Running command docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Phase complete: BUILD State: SUCCEEDED
[Container] 2021/04/16 16:40:45 Phase context status code: Message:
[Container] 2021/04/16 16:40:45 Entering phase POST_BUILD
[Container] 2021/04/16 16:40:45 Running command echo Build completed on `date`
Build completed on Fri Apr 16 16:40:45 UTC 2021
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Running command echo Pushing the Docker image...
Pushing the Docker image...
2021-04-16T16:40:45.928000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:45 Running command docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
The push refers to repository [[MASKED].dkr.ecr.eu-west-1.amazonaws.com/app]
61504854da99: Preparing
64ee8c6d0de0: Preparing
974e9faf62f1: Preparing
15aac1be5f02: Preparing
23c959acc3d0: Preparing
4dc529e519c4: Preparing
7e718b9c0c8c: Preparing
4dc529e519c4: Waiting
7e718b9c0c8c: Waiting
974e9faf62f1: Layer already exists
15aac1be5f02: Layer already exists
23c959acc3d0: Layer already exists
64ee8c6d0de0: Layer already exists
4dc529e519c4: Layer already exists
7e718b9c0c8c: Layer already exists
61504854da99: Pushed
v0.1.0-4b3e0451: digest: sha256:6d12814984b825931f91f33d43962b0442737557bb1a3b3d8399b3e7ef9b71e0 size: 1777
2021-04-16T16:40:48.366000+00:00 bd057f7c-b326-41a0-a426-7ffd47eefd7c
[Container] 2021/04/16 16:40:46 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2021/04/16 16:40:46 Phase context status code: Message:
Cleaning up file based variables
00:00
Job succeeded
That's it!
Conclusion
Using AWS CodeBuild to publish our Docker image frees us from managing operational and security layers at the pipeline level.
Hope you enjoyed reading this blog post.
If you have any questions or feedback, please feel free to leave a comment.
Thanks for reading!
Documentation
[1] https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
[2] https://eksctl.io/usage/iamserviceaccounts
[3] https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/master/values.yaml