Overview
After developing our RESTful webservices (see post
here) and completing our CI exercises (see post
here), we finally reach the Continuous Deployment (CD) part.
Naturally, we're going to experiment with a Docker-based CD process.
The following components will be engaged to build up our integrated CD solution:
- The Docker image we built and published previously to our private Docker Registry;
- Apache Mesos: the cluster manager (the kernel behind DC/OS) for managing our CPU, memory, storage, and other compute resources within our distributed clusters;
- Apache ZooKeeper: a highly reliable distributed coordination service for the Apache Mesos cluster;
- Mesosphere Marathon: the container orchestration platform for Apache Mesos.
What we're going to focus on in this chapter is highlighted below:
Update docker-compose.yml By Adding CD Components
Having set up the infrastructure for CI previously, it's pretty easy to add a few more components to our Docker Compose configuration file for the CD solution, namely:
- ZooKeeper
- Mesos: Master, Slave
- Marathon
ZooKeeper
~/docker-compose.yml
…
# Zookeeper
zookeeper:
image: jplock/zookeeper:3.4.5
ports:
- "2181"
Mesos: mesos-master, mesos-slave
~/docker-compose.yml
…
# Mesos Master
mesos-master:
image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
hostname: mesos-master
links:
- zookeeper:zookeeper
environment:
- MESOS_ZK=zk://zookeeper:2181/mesos
- MESOS_QUORUM=1
- MESOS_WORK_DIR=/var/lib/mesos
- MESOS_LOG_DIR=/var/log
ports:
- "5050:5050"
# Mesos Slave
mesos-slave:
image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
links:
- zookeeper:zookeeper
- mesos-master:mesos-master
environment:
- MESOS_MASTER=zk://zookeeper:2181/mesos
- MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
- MESOS_CONTAINERIZERS=docker,mesos
- MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
- MESOS_LOG_DIR=/var/log
volumes:
- /var/run/docker.sock:/run/docker.sock
- /usr/bin/docker:/usr/bin/docker
- /sys:/sys:ro
- mesosslave-stuff:/var/log
- /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
- /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
- /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
- /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
- /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
- /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
expose:
- "5051"
Marathon
~/docker-compose.yml
…
# Marathon
marathon:
image: mesosphere/marathon
links:
- zookeeper:zookeeper
ports:
- "8080:8080"
command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon
The Updated docker-compose.yml
Now the final, updated ~/docker-compose.yml looks like this:
# Git Server
gogs:
image: 'gogs/gogs:latest'
#restart: always
hostname: '192.168.56.118'
ports:
- '3000:3000'
- '1022:22'
volumes:
- '/data/gogs:/data'
# Jenkins CI Server
jenkins:
image: devops/jenkinsci
# image: jenkinsci/jenkins
# links:
# - marathon:marathon
volumes:
- /data/jenkins:/var/jenkins_home
- /opt/jdk/java_home:/opt/jdk/java_home
- /opt/maven:/opt/maven
- /data/mvn_repo:/data/mvn_repo
- /var/run/docker.sock:/var/run/docker.sock
- /usr/bin/docker:/usr/bin/docker
- /etc/default/docker:/etc/default/docker
- /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
- /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
- /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
- /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
- /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
- /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
ports:
- "8081:8080"
# Private Docker Registry
docker-registry:
image: registry
volumes:
- /data/registry:/var/lib/registry
ports:
- "5000:5000"
# Zookeeper
zookeeper:
image: jplock/zookeeper:3.4.5
ports:
- "2181"
# Mesos Master
mesos-master:
image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
hostname: mesos-master
links:
- zookeeper:zookeeper
environment:
- MESOS_ZK=zk://zookeeper:2181/mesos
- MESOS_QUORUM=1
- MESOS_WORK_DIR=/var/lib/mesos
- MESOS_LOG_DIR=/var/log
ports:
- "5050:5050"
# Mesos Slave
mesos-slave:
image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
links:
- zookeeper:zookeeper
- mesos-master:mesos-master
environment:
- MESOS_MASTER=zk://zookeeper:2181/mesos
- MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
- MESOS_CONTAINERIZERS=docker,mesos
- MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
- MESOS_LOG_DIR=/var/log
volumes:
- /var/run/docker.sock:/run/docker.sock
- /usr/bin/docker:/usr/bin/docker
- /sys:/sys:ro
- mesosslave-stuff:/var/log
- /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
- /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
- /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
- /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
- /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
- /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
expose:
- "5051"
# Marathon
marathon:
image: mesosphere/marathon
links:
- zookeeper:zookeeper
ports:
- "8080:8080"
command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon
Spin Up the Docker Containers
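As in the previous chapters, we bring the whole stack up with Docker Compose; a typical sequence, assuming the file lives at ~/docker-compose.yml as above, would be:
$ cd ~
# (Re)create and start all containers in the background
$ docker-compose up -d
# Check that all services are up and running
$ docker-compose ps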
Verify Our Components
Jenkins: http://192.168.56.118:8081/
Marathon: http://192.168.56.118:8080/
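If you prefer the command line, a quick sanity check that the endpoints respond (the Mesos master UI is also available on port 5050, as mapped in our compose file):
# Jenkins, Marathon and the Mesos master should each answer with an HTTP status line
$ curl -sI http://192.168.56.118:8081/ | head -n 1
$ curl -sI http://192.168.56.118:8080/ | head -n 1
$ curl -sI http://192.168.56.118:5050/ | head -n 1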
Deployment By Using Mesos + Marathon
Marathon provides RESTful APIs for managing the applications we're going to run on the platform, such as deploying an application, deleting an application, etc.
So the idea we're going to illustrate is to:
- Prepare the payload, or the Marathon deployment file;
- Wrap the interactions with Marathon in a simple deployment script which can eventually be "plugged" into Jenkins as one more build step to kick off the CD pipeline.
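For reference, the Marathon endpoints our script will talk to look like this (assuming Marathon is reachable at 192.168.56.118:8080, as configured above):
# List all applications currently known to Marathon
$ curl -s http://192.168.56.118:8080/v2/apps
# Inspect a single application by its id (once it has been deployed)
$ curl -s http://192.168.56.118:8080/v2/apps/springboot-jersey-swagger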
Marathon Deployment File
Firstly, we need to create the Marathon deployment file, in JSON format, for scheduling our application on Marathon.
Let's name it marathon-app-springboot-jersey-swagger.json and put it under /data/jenkins/workspace/springboot-jersey-swagger (which will eventually be mapped to the Jenkins container's path "/var/jenkins_home/workspace/springboot-jersey-swagger", or "${WORKSPACE}/", so that it's reachable in Jenkins).
$ sudo touch /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
$ sudo vim /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
/data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
{
"id": "springboot-jersey-swagger",
"container": {
"docker": {
"image": "192.168.56.118:5000/devops/springboot-jersey-swagger:latest",
"network": "BRIDGE",
"portMappings": [
{"containerPort": 8000, "servicePort": 8000}
]
}
},
"cpus": 0.2,
"mem": 256.0,
"instances": 1
}
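Before wiring the file into the pipeline, it's worth making sure it is valid JSON; one quick way to do that, assuming Python is available on the host:
$ python -m json.tool < /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json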
Deployment Scripts
Once we have the Marathon deployment file, we can write the deployment script that removes the currently running application and redeploys it using the new image.
Marathon offers better upgrade strategies out of the box, but I won't discuss them here.
Let's put the script under /data/jenkins/workspace/springboot-jersey-swagger as well (which will eventually be mapped to the Jenkins container's workspace path so that it's reachable in Jenkins), and don't forget to grant it execute permission.
$ sudo touch /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
$ sudo chmod +x /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
$ sudo vim /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
#!/bin/bash
# We can make this much more flexible by exposing some of the values below as parameters
APP="springboot-jersey-swagger"
MARATHON="192.168.56.118:8080"
PAYLOAD="${WORKSPACE}/marathon-app-springboot-jersey-swagger.json"
CONTENT_TYPE="Content-Type: application/json"
# Delete the old application
# Note: again, this can be enhanced for a better upgrade strategy
curl -X DELETE -H "${CONTENT_TYPE}" http://${MARATHON}/v2/apps/${APP}
# Wait for a while
sleep 2
# Post the application to Marathon
curl -X POST -H "${CONTENT_TYPE}" http://${MARATHON}/v2/apps -d@${PAYLOAD}
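The script can also be exercised outside Jenkins; since ${WORKSPACE} is normally injected by Jenkins, we just set it manually for a quick dry run:
# WORKSPACE is normally provided by Jenkins; set it by hand for a local test
$ export WORKSPACE=/data/jenkins/workspace/springboot-jersey-swagger
$ ${WORKSPACE}/deploy-app-springboot-jersey-swagger.sh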
Plug It In By Adding One More Build Step In Jenkins
Simply add one more build step after the Maven step.
It's an "Execute shell" build step and the commands are really simple:
cd ${WORKSPACE}
./deploy-app-springboot-jersey-swagger.sh
Save it.
Trigger the Process Again
By default, any code check-in will automatically trigger the build and deployment process.
But for testing purposes, we can click "Build Now" to trigger it manually.
The Console Output is really verbose but it can help you understand the whole process. Check it out
here.
Once the process is done in Jenkins (we can monitor the progress by viewing the Console Output), we can see our application appear briefly in Marathon's "Deployments" tab and then move to the "Applications" tab, which means the deployment is done.
Note: if an application stays pending in deployment because it is "waiting for resource offers", it's most likely because there are insufficient resources (e.g. memory) available in Mesos. In this case, we may need to add more Mesos Slaves or fine-tune the JSON deployment file to request fewer resources, as sketched below.
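For instance, in this single-host setup, one rough way to add capacity is to scale out the slave service with the Compose v1 CLI (a sketch, not a production approach):
$ docker-compose scale mesos-slave=2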
One may notice that our application shows up as "Unknown". Why? It's because we haven't set up a health check mechanism yet, so Marathon doesn't really know whether the application is healthy or not.
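As a preview of that improvement, a minimal health check could be added to the app definition through Marathon's REST API; the sketch below assumes our application exposes an HTTP endpoint such as /health (e.g. via Spring Boot Actuator, which is not necessarily part of our current build):
# Partially update the app definition with an HTTP health check (hypothetical /health path)
$ curl -X PUT -H "Content-Type: application/json" \
    http://192.168.56.118:8080/v2/apps/springboot-jersey-swagger \
    -d '{
          "healthChecks": [
            {
              "protocol": "HTTP",
              "path": "/health",
              "portIndex": 0,
              "gracePeriodSeconds": 300,
              "intervalSeconds": 60,
              "maxConsecutiveFailures": 3
            }
          ]
        }'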
Verify Our App
Once the application is deployed by Mesos + Marathon, new Docker container(s) will be spun up automatically.
But how can we access the application? The port configuration in Marathon may be a bit confusing and we will figure it out later. For now, we simply check which host port our app is exposed on, with the command below:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a020940ae0b5 192.168.56.118:5000/devops/springboot-jersey-swagger:latest "java -jar /springboo" 8 minutes ago Up 8 minutes 0.0.0.0:31489->8000/tcp mesos-e51e55d3-cd64-4369-948f-aaa433f12c1e-S0.b0e0d351-4da0-4af5-be1b-5e7dab876d06
...
As highlighted in the output, the port exposed on the host is 31489, so our application URL will be http://{the docker host ip}:31489, or http://192.168.56.118:31489 in my case.
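A quick check from the command line (replace the port with whatever docker ps reports on your host; the exact path to request depends on our application's API):
$ curl -i http://192.168.56.118:31489/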
Yes, once you reach this point, our application's continuous deployment is part of our integrated CI + CD pipeline. We did it!
Conclusion
Now it's time to recap our DevOps story.
I've updated the diagram a bit to better illustrate the process and the components we have engaged.
The Development As Usual
Firstly, as usual, we develop our RESTful webservices following a Microservices architecture with components like Java, Spring Boot, Jersey, Swagger, Maven and more.
Our source code is hosted in Gogs, our private Git repository.
We develop, do unit test, check in code -- everything is just as usual.
The Continuous Integration (CI) Comes To Help
Jenkins, acting as our CI server, comes to help: it detects changes in our Git repository and triggers our integrated CI + CD pipeline.
For the CI part, Jenkins engages Maven to run through the tests, both unit tests and SIT if any, and to build our typical Spring Boot artifact, an all-in-one fat JAR file. Meanwhile, with the Docker Maven plugin, docker-maven-plugin, Maven can take one step further and build our desired Docker image, in which our application JAR file is hosted.
The Docker image is then published to our private Docker Registry, and our CI part ends.
The Final Show in Continuous Deployment (CD)
The trigger point is still in Jenkins -- by adding one more build step -- once the CI part is done, Jenkins executes our shell script, which wraps the interactions with Marathon.
By posting a predefined JSON payload, the deployment file, to Marathon, we deploy our application to the Mesos platform. As a result, Docker container(s) are spun up automatically with specific resources (e.g. CPU, RAM) allocated to our application.
Accordingly, the web services offered by our application are up and running.
That's our whole story of "from code to online services".
Future Improvements?
Yes, of course. After all, it's still a DevOps prototype.
Along the way, for simplicity's sake, there are some topics and concerns that haven't been properly addressed yet, which include but are not limited to:
- Using a multi-server cluster to make it "real";
- Introducing security and ACL to bring in necessary control;
- Applying an HAProxy setup for service clustering;
- Enabling auto scaling;
- Adding a health check mechanism for our application in Marathon;
- Fine-tuning the application upgrade strategy;
- Engaging Consul for automatic service discovery;
- Having a centralized monitoring mechanism;
- Trying it out with AWS or Azure;
- And more
Final Words
Continuous Delivery (= Continuous Integration + Continuous Deployment) is a big topic and there are various solutions on the market now, be they commercial or open source.
I started working on this set-up mainly to get my hands dirty and also to help developers and administrators learn how to use a series of open source components to build a streamlined CI + CD pipeline, so that we can sense the beauty of continuous delivery.
As mentioned in the "Future Improvements" chapter above, a lot more things are still pending exploration. But that's also why architects like me love the way IT rocks, isn't it?
Enjoy DevOps and appreciate your comments if any!