Thursday, 20 October 2016

Integrated Continuous Deployment (CD) Solution with Mesos, Zookeeper, Marathon on top of CI

Overview

After developing our RESTful web services (see post here) and completing the CI exercises (see post here), we finally reach the Continuous Deployment (CD) part.
Obviously, we’re going to experiment with a Docker-based CD process.

The following components will be engaged to build up our integrated CD solution:

  • The Docker image we built and published previously in our private Docker Registry;
  • Apache Mesos: The distributed systems kernel that manages CPU, memory, storage, and other compute resources within our distributed clusters;
  • Apache ZooKeeper: A highly reliable distributed coordination service for the Apache Mesos cluster;
  • Mesosphere Marathon: The container orchestration platform for Apache Mesos.

What we’re going to focus on in this chapter is highlighted below:



Update docker-compose.yml By Adding CD Components

Having set up the infrastructure for CI previously, it’s pretty easy to add the components our CD solution needs to our Docker Compose configuration file, namely:
  • ZooKeeper
  • Mesos: Master, Slave
  • Marathon

ZooKeeper

~/docker-compose.yml
…
# Zookeeper
zookeeper:
  image: jplock/zookeeper:3.4.5
  ports:
    - "2181"

Mesos: mesos-master, mesos-slave

~/docker-compose.yml
…
# Mesos Master
mesos-master:
  image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
  hostname: mesos-master
  links:
    - zookeeper:zookeeper
  environment:
    - MESOS_ZK=zk://zookeeper:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_WORK_DIR=/var/lib/mesos
    - MESOS_LOG_DIR=/var/log
  ports:
    - "5050:5050"

# Mesos Slave
mesos-slave:
  image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
  links:
    - zookeeper:zookeeper
    - mesos-master:mesos-master
  environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
    - MESOS_LOG_DIR=/var/log
  volumes:
    - /var/run/docker.sock:/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker

    - /sys:/sys:ro
    - mesosslave-stuff:/var/log

    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  expose:
    - "5051"

Marathon

~/docker-compose.yml
…
# Marathon
marathon:
  image: mesosphere/marathon
  links:
    - zookeeper:zookeeper
  ports:
    - "8080:8080"
  command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon
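
A quick sanity check once Marathon is up, using its /v2/info REST endpoint:

$ curl http://192.168.56.118:8080/v2/info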

The Updated docker-compose.yml

Now the final updated ~/docker-compose.yml is as below:
# Git Server
gogs:
  image: 'gogs/gogs:latest'
  #restart: always
  hostname: '192.168.56.118'
  ports:
    - '3000:3000'
    - '1022:22'
  volumes:
    - '/data/gogs:/data'

# Jenkins CI Server
jenkins:
  image: devops/jenkinsci
#  image: jenkinsci/jenkins
#  links:
#    - marathon:marathon
  volumes:
    - /data/jenkins:/var/jenkins_home

    - /opt/jdk/java_home:/opt/jdk/java_home
    - /opt/maven:/opt/maven
    - /data/mvn_repo:/data/mvn_repo

    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /etc/default/docker:/etc/default/docker

    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  ports:
    - "8081:8080"

# Private Docker Registry
docker-registry:
  image: registry
  volumes:
    - /data/registry:/var/lib/registry
  ports:
    - "5000:5000"

# Zookeeper
zookeeper:
  image: jplock/zookeeper:3.4.5
  ports:
    - "2181"

# Mesos Master
mesos-master:
  image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
  hostname: mesos-master
  links:
    - zookeeper:zookeeper
  environment:
    - MESOS_ZK=zk://zookeeper:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_WORK_DIR=/var/lib/mesos
    - MESOS_LOG_DIR=/var/log
  ports:
    - "5050:5050"

# Mesos Slave
mesos-slave:
  image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
  links:
    - zookeeper:zookeeper
    - mesos-master:mesos-master
  environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
    - MESOS_LOG_DIR=/var/log
  volumes:
    - /var/run/docker.sock:/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker

    - /sys:/sys:ro
    - mesosslave-stuff:/var/log

    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  expose:
    - "5051"

# Marathon
marathon:
  image: mesosphere/marathon
  links:
    - zookeeper:zookeeper
  ports:
    - "8080:8080"
  command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon

Spin Up the Docker Containers

$ docker-compose up
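
To keep everything running in the background and check container status, the standard Compose flags can be used:

$ docker-compose up -d
$ docker-compose ps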

Verify Our Components

Jenkins: http://192.168.56.118:8081/
Marathon: http://192.168.56.118:8080/

Deployment By Using Mesos + Marathon

Marathon provides RESTful APIs for managing applications on the platform, such as deploying or deleting an application.
So the idea we're going to illustrate is to:
  1. Prepare the payload, i.e. the Marathon deployment file;
  2. Wrap the interactions with Marathon in simple deployment scripts which can eventually be "plugged" into Jenkins as one more build step to kick off the CD pipeline; the same API is easy to poke by hand first, as shown below.
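
For example, listing the currently deployed applications is a one-liner against the same API the scripts will use:

$ curl http://192.168.56.118:8080/v2/apps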

Marathon Deployment File

First we need to create the Marathon deployment file, in JSON format, for scheduling our application on Marathon.
Let’s name it marathon-app-springboot-jersey-swagger.json and put it under /data/jenkins/workspace/springboot-jersey-swagger (which is mapped to the Jenkins container’s path “/var/jenkins_home/workspace/springboot-jersey-swagger”, i.e. “${WORKSPACE}/”, so that it’s reachable in Jenkins).
$ sudo touch /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
$ sudo vim /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json

/data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
{
    "id": "springboot-jersey-swagger", 
    "container": {
      "docker": {
        "image": "192.168.56.118:5000/devops/springboot-jersey-swagger:latest",
        "network": "BRIDGE",
        "portMappings": [
          {"containerPort": 8000, "servicePort": 8000}
        ]
      }
    },
    "cpus": 0.2,
    "mem": 256.0,
    "instances": 1
}

Deployment Scripts

Once we have the Marathon deployment file, we can write the deployment script that removes the currently running application and redeploys it using the new image.
There are better upgrade strategies available out of the box, but I won’t discuss them here.
Let’s put it under /data/jenkins/workspace/springboot-jersey-swagger (again mapped into the Jenkins container so that it’s reachable in Jenkins) and don’t forget to grant it execution permission.
$ sudo touch /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
$ sudo chmod +x /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
$ sudo vim /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh

#!/bin/bash

# We can make it much more flexible if we expose some of below as parameters
APP="springboot-jersey-swagger"
MARATHON="192.168.56.118:8080"
PAYLOAD="${WORKSPACE}/marathon-app-springboot-jersey-swagger.json"
CONTENT_TYPE="Content-Type: application/json"

# Delete the old application
# Note: again, this can be enhanced for a better upgrade strategy
curl -X DELETE -H "${CONTENT_TYPE}" http://${MARATHON}/v2/apps/${APP}

# Wait for a while
sleep 2

# Post the application to Marathon
curl -X POST -H "${CONTENT_TYPE}" http://${MARATHON}/v2/apps -d@${PAYLOAD}
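
The script can also be exercised by hand before wiring it into Jenkins; WORKSPACE is normally injected by Jenkins, so we simply set it ourselves (the path below matches the workspace used above):

$ cd /data/jenkins/workspace/springboot-jersey-swagger
$ WORKSPACE=$(pwd) ./deploy-app-springboot-jersey-swagger.sh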

Plug It In By Adding One More Build Step In Jenkins

Simply add one more build step after the Maven step.
It's an "Execute Shell" build step and the commands are really simple:
cd ${WORKSPACE}
./deploy-app-springboot-jersey-swagger.sh
Save it.

Trigger the Process Again

By default, any code check-in will automatically trigger the build and deployment process.
But for testing purposes, we can click “Build Now” to trigger it manually.
The Console Output is really verbose but it can help you understand the whole process. Check it out here.

Once the process is done in Jenkins (we can monitor the progress via the Console Output), we can see our application appear in Marathon’s “Deployments” tab and then quickly move to the “Applications” tab, which means the deployment is done.
Note: if an application stays pending in deployment, “waiting for resource offer”, it’s most likely due to insufficient resources (e.g. memory) in Mesos. In that case, we may need to add more Mesos slaves or fine-tune the JSON deployment file to request fewer resources.



One may notice that our application’s health shows as “Unknown”. Why? It’s because we haven’t set up a health check mechanism yet, so Marathon doesn’t really know whether the application is healthy or not.
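
As a minimal sketch of such a health check (the values below are illustrative assumptions, not tuned settings), we could add a healthChecks section to the app via Marathon's REST API -- or simply merge it into the deployment JSON above:

$ curl -X PUT -H "Content-Type: application/json" \
    http://192.168.56.118:8080/v2/apps/springboot-jersey-swagger \
    -d '{
      "healthChecks": [{
        "protocol": "HTTP",
        "path": "/",
        "portIndex": 0,
        "gracePeriodSeconds": 300,
        "intervalSeconds": 60,
        "maxConsecutiveFailures": 3
      }]
    }'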

Verify Our App

Once the application is deployed by Mesos + Marathon, new Docker container(s) are spun up automatically.
But how can we access the application? The port configuration in Marathon may be a bit confusing and we will figure it out later. At this stage we have to check which port our app is exposed on, using the command below:
$ docker ps
CONTAINER ID        IMAGE                                                         COMMAND                  CREATED             STATUS              PORTS                                          NAMES
a020940ae0b5        192.168.56.118:5000/devops/springboot-jersey-swagger:latest   "java -jar /springboo"   8 minutes ago       Up 8 minutes        0.0.0.0:31489->8000/tcp                        mesos-e51e55d3-cd64-4369-948f-aaa433f12c1e-S0.b0e0d351-4da0-4af5-be1b-5e7dab876d06
...

As highlighted, the port exposed to the host is 31489, so our application URL will be:
http://{the docker host ip}:31489, or http://192.168.56.118:31489 in my case.
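
We can also verify it from the command line by calling the hello endpoint we built earlier (the expected response comes from our HelloResource):

$ curl http://192.168.56.118:31489/api/v1/hello/Bright
{"msg":"Hello Bright - application/json"}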

Yes, once you reach this point, our application's continuous deployment is part of our integrated CI + CD pipeline. We did it!

Conclusion

Now it's time to recap our DevOps story.
I've updated the diagram a bit to better illustrate the process and the components we have engaged.

The Development As Usual

First, as usual, we develop our RESTful web services following a microservices architecture, with components like Java, Spring Boot, Jersey, Swagger, Maven and more.
Our source code is hosted on Gogs, our private Git repository.
We develop, unit test, and check in code -- everything is just as usual.

Related posts:

The Continuous Integration (CI) Comes To Help

Jenkins, acting as our CI server, comes to help: it detects changes in our Git repository and triggers our integrated CI + CD pipeline.
For the CI part, Jenkins engages Maven to run through the tests, both unit tests and SIT if any, and to build our typical Spring Boot artifact, an all-in-one fat jar file. Meanwhile, with the Docker Maven plugin, docker-maven-plugin, Maven takes one step further and builds our desired Docker image, in which our application jar file is hosted.
The Docker image is published to our private Docker Registry, and there our CI part ends.

Related posts:

The Final Show in Continuous Deployment (CD)

Apache Mesos, together with Marathon and ZooKeeper, forms the foundation of the CD world.
The trigger point is still in Jenkins -- by adding one more build step -- once the CI part is done, Jenkins executes our shell script, which wraps the interactions with Marathon.
By posting a predefined JSON payload, the deployment file, to Marathon, we deploy our application to the Mesos platform. As a result, Docker container(s) are spun up automatically with specific resources (e.g. CPU, RAM) allocated to our application.
Accordingly, the web services offered by our application are up and running.
That's our whole story of "from code to online services".

Future Improvements?

Yes, of course. It's still a DevOps prototype, after all.
Along the way, for simplicity, some topics and concerns haven't been properly addressed yet, including but not limited to:
  • Using a multi-server cluster to make it "real";
  • Introducing security and ACLs to bring in the necessary controls;
  • Applying an HAProxy setup for service clustering;
  • Enabling auto scaling;
  • Adding a health check mechanism for our application in Marathon;
  • Fine-tuning the application upgrade strategy;
  • Engaging Consul for automatic service discovery;
  • Having a centralized monitoring mechanism;
  • Trying it out with AWS or Azure;
  • And more

Final Words

Continuous Delivery (= Continuous Integration + Continuous Deployment) is a big topic with various solutions in the market now, be they commercial or open source.
I started working on this setup mainly to get my hands dirty and also to help developers and administrators learn how to use a series of open source components to build a streamlined CI + CD pipeline, so that we can sense the beauty of continuous delivery.
As mentioned in the "Future Improvements" chapter above, a lot more is still pending exploration. But that's also why architects like me love the way IT rocks, isn't it?

Enjoy DevOps and appreciate your comments if any!


Monday, 3 October 2016

Integrated Continuous Integration (CI) Solution With Jenkins, Maven, docker-maven-plugin, Private Docker Registry


In a previous blog, I shared how to run Jenkins + Maven in a Docker container here, and we could already automate the Maven-based build in our Jenkins CI server, which is a great kick-off for the CI process.

But, while reviewing what we're going to achieve in the diagram below, we realize that a Spring Boot-powered jar file is not all we want: we want a Docker image built and registered in our private Docker Registry so that we're fully ready for the coming Continuous Deployment!

Integrated Jenkins + Maven + docker-maven-plugin

So let's focus on the highlighted part of the above diagram and build an integrated Jenkins + Maven + docker-maven-plugin solution, using Docker container technology as well!

Build Up Our Private Docker Registry

Docker provides a great Docker image for building our own private Docker Registry, here.
Let's do it by simply running through the commands below:
$ sudo mkdir /data/registry
$ sudo chown -R devops:docker /data/registry
$ whoami
devops
$ pwd
/home/devops
$ vim docker-compose.yml
-------------------------------
# Private Docker Registry
docker-registry:
  image: registry
  volumes:
    - /data/registry:/var/lib/registry
  ports:
    - "5000:5000"
-------------------------------

It's almost done, let's spin it up:
$ docker-compose up

And try it out:
$ curl http://127.0.0.1:5000/
"\"docker-registry server\""

Well, our private Docker Registry works, but wait… the Docker client can’t push images to it yet.
We need to make some changes to Docker's configuration before pushing images:
$ sudo vim /etc/default/docker
---------------------------
DOCKER_OPTS="--insecure-registry 192.168.56.118:5000"
---------------------------

And then restart the Docker daemon:
$ sudo service docker restart

Now our private Docker Registry is ready to rock.

But how do we verify that our Docker Registry is really working as expected?
It must be: 1) ready for publishing; and 2) ready for searching.

The idea is to use the publicly available "hello-world" image, which at 960 B is lightweight enough to try!

$ docker pull hello-world
$ docker images
...
hello-world    latest                     690ed74de00f        11 months ago       960 B

$ docker tag 690ed74de00f 192.168.56.118:5000/hello-world
$ docker images
...
192.168.56.118:5000/hello-world    latest  690ed74de00f        11 months ago       960 B
hello-world                        latest  690ed74de00f        11 months ago       960 B

$ docker push 192.168.56.118:5000/hello-world
The push refers to a repository [192.168.56.118:5000/hello-world]
5f70bf18a086: Image successfully pushed
b652ec3a27e7: Image successfully pushed
Pushing tag for rev [690ed74de00f] on {http://192.168.56.118:5000/v1/repositories/hello-world/tags/latest}

$ curl http://192.168.56.118:5000/v1/search?q=hello
{"num_results": 1, "query": "hello", "results": 
 [
  {"description": "", "name": "library/hello-world"}
 ]
}

Sure enough, our private Docker Registry is fully ready!

Okay, so far we have a private Docker Registry and can already run a basic Maven build in Jenkins. What should we do next?

Enable the docker-maven-plugin

It's time to enable our glue, the docker-maven-plugin, in our project.
The purposes of using this plugin are to:

  1. Automate the Docker image build process on top of our typical build, whose output would be a jar or war file. Now we want a Docker image equipped with our application as the deliverable!
  2. Publish the built Docker image to our private Docker Registry, which is the starting point of our Continuous Deployment!
(Frankly speaking, I encountered many issues along the way, like here, but to make a long story short, I'll go directly to the point.)


First, please refer to my previous blog "Some Practices to Run Maven + Jenkins In Docker Container" here for the basic setup of the environment.
The successful experiment below builds on top of that environment.

Explicitly Specify Docker Host in /etc/default/docker

Edit the /etc/default/docker as below:
#DOCKER_OPTS="--insecure-registry 192.168.56.118:5000"
DOCKER_OPTS="--insecure-registry 192.168.56.118:5000 -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"

And restart the Docker daemon is required, of course:
$ sudo service docker restart
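
To confirm the daemon is now listening on TCP as well (port 4243 per the configuration above), the Docker Remote API's version endpoint is handy:

$ curl http://127.0.0.1:4243/version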

Enable the docker-maven-plugin In pom.xml

There are many Docker Maven plugins available; I simply chose Spotify's, here.
The configuration of the plugin in pom.xml is as below:
<!-- Docker image building plugin -->
<plugin>
 <groupId>com.spotify</groupId>
 <artifactId>docker-maven-plugin</artifactId>
 <version>0.4.13</version>
 <executions>
  <execution>
   <phase>package</phase>
   <goals>
    <goal>build</goal>
   </goals>
  </execution>
 </executions>
 <configuration>
  <dockerHost>${docker.host}</dockerHost>
  
  <baseImage>frolvlad/alpine-oraclejdk8:slim</baseImage>
  <imageName>${docker.registry}/devops/${project.artifactId}</imageName>
  <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
  
  <resources>
     <resource>
    <targetPath>/</targetPath>
    <directory>${project.build.directory}</directory>
    <include>${project.build.finalName}.jar</include>
     </resource>
  </resources>
  
  <forceTags>true</forceTags>
  <imageTags>
     <imageTag>${project.version}</imageTag>
  </imageTags>
 </configuration>
</plugin>
Note: For complete pom.xml file, please refer to my GitHub repository, here.
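
Before wiring this into Jenkins, we can try the plugin from the command line. A sketch assuming the docker.host and docker.registry properties are defined in the full pom.xml (they could also be overridden with -D); -DpushImage asks the Spotify plugin to push the image after building it:

$ mvn clean package docker:build -DpushImage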

The Jenkins Job

The screenshots below show the major configuration parts of the Jenkins job.

Source Code Management


Build Triggers


Maven Goals


Console Output

Here I also posted a typical Console Output of the Jenkins Job for reference as a Gist here.

Have A Check On What We Have

First we can check whether the image is in our local image cache:
$ docker images
REPOSITORY                                             TAG                        IMAGE ID            CREATED             SIZE
...
192.168.56.118:5000/devops/springboot-jersey-swagger   1.0.0-SNAPSHOT             203fde08df84        44 minutes ago      191.9 MB
...

Great, then check whether the image has been registered in our private Docker Registry:
$ curl http://192.168.56.118:5000/v1/search?q=springboot-jersey-swagger
{"num_results": 1, "query": "springboot-jersey-swagger", "results": 
 [
  {"description": "", "name": "devops/springboot-jersey-swagger"}
 ]
}

Yeah! We made it!
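
Since the registry speaks the v1 protocol here (as the push output showed), we can also list the tags recorded for the image:

$ curl http://192.168.56.118:5000/v1/repositories/devops/springboot-jersey-swagger/tags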

Try Our Docker Image Out Also

To go deeper, we can also check whether our application actually works:
docker run -d -p 8000:8000 192.168.56.118:5000/devops/springboot-jersey-swagger

Open the browser and key in: http://192.168.56.118:8000
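
Or, from the command line, hit the hello endpoint directly:

$ curl http://192.168.56.118:8000/api/v1/hello/Bright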


Hooray, it works perfectly!

Conclusion

From this experiment we have learned that:
  1. By using a customized Docker container, we can put Jenkins, Maven and docker-maven-plugin together to automate our project build process, with our desired Docker image as the deliverable;
  2. The Docker image can be pushed automatically to our private Docker Registry within the pipeline, which will enable us to make Continuous Deployment happen soon.

So the CI part is done and let's look forward to our CD part soon!
Enjoy your CI experiments also!


Wednesday, 28 September 2016

Some Practices to Run Maven + Jenkins In Docker Container

Background

In the Java world, it's very common to use Apache Maven as the build infrastructure to manage the build process.
What if we want to make it more interesting by bringing in Docker and Jenkins?

Issues We Have To Tackle

There is already a pre-built Jenkins image for Docker, here.
But some known permission issues keep making noise, like what somebody mentioned here, when mounting a host folder as the volume for generated data -- which is typically a good practice.

Meanwhile, Maven downloads all necessary artifacts from Maven repositories -- be they Internet-based public repositories or in-house private ones -- to a local folder that serves as a cache to speed up the build process. But this cache can be huge (gigabytes, even), so it's not practical to bake it into the Docker container.
Is it possible to share it from a host folder instead, so that the cached artifacts survive even when we spin up new containers?

So, these major issues/concerns should be fully addressed when designing the solution. As a summary, they are:

  • How can we achieve the goal of having more control and flexibility while using Jenkins?
  • Can we share resources from host so that we can always keep Docker container lightweight?


Tools We're Going to Use

Before we deep dive into detailed solutions, we assume that the tools below have already been installed and configured properly on our host server (e.g. mine is Ubuntu 14.x, 64-bit):
  • Docker (e.g. mine is version 1.11.0, build 4dc5990)
  • Docker Compose (e.g. mine is version 1.6.2, build 4d72027)
  • JDK (e.g. mine is version 1.8.0_102)
  • Apache Maven (e.g. mine is version 3.3.3)


Solutions & Practices

1. Customize Our Jenkins Docker Image To Gain More Control and Flexibility

It's pretty straightforward to think of building our own Docker image for Jenkins so that we gain more control and flexibility. Why not?

$ whoami
devops
$ pwd
/home/devops
$ mkdir jenkins
$ cd jenkins
$ touch Dockerfile
$ vim Dockerfile

~/jenkins/Dockerfile 
FROM jenkins

MAINTAINER Bright Zheng <bright.zheng AT outlook DOT com>

USER root

RUN GOSU_SHA=5ec5d23079e94aea5f7ed92ee8a1a34bbf64c2d4053dadf383992908a2f9dc8a \
        && curl -sSL -o /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.9/gosu-$(dpkg --print-architecture)" \
        && chmod +x /usr/local/bin/gosu && echo "$GOSU_SHA  /usr/local/bin/gosu" | sha256sum -c -

COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

I believe some explanation is required here.
  1. The image we're going to build is based on the official jenkins Docker image;
  2. We set up a simple but powerful tool, gosu, which can be used for running specific scripts as a specific user later;
  3. We use a dedicated entry script, "entrypoint.sh", for our customizable startup logic.

Okay, time to focus on our entry point:
$ touch entrypoint.sh
$ vim entrypoint.sh

~/jenkins/entrypoint.sh
#! /bin/bash
set -e

# chown on the fly
chown -R 1000 "$JENKINS_HOME"
chown -R 1000 "/data/mvn_repo"

# chmod for jdk and maven
chmod +x /opt/maven/bin/mvn
chmod +x /opt/jdk/java_home/bin/java
chmod +x /opt/jdk/java_home/bin/javac

exec gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh

Some more explanation, as the important tricks live here:
  1. Yes, we change the $JENKINS_HOME owner to the jenkins user (uid=1000), which is hardcoded in the official Jenkins Dockerfile, here;
  2. We expose a dedicated folder, "/data/mvn_repo", as the "home" of the cached Maven artifacts, which should be mounted from the host as a Docker volume;
  3. We grant execute permission to some Maven and JDK executables, which are mounted and shared from the host as well!
Now it's ready; let's build our customized Docker image and tag it as devops/jenkinsci.

docker build -t devops/jenkinsci .

BTW, if we run behind a proxy, some build-args need to be part of our command, as in the example below:

docker build \
 --build-arg https_proxy=http://192.168.56.1:3128 \
 --build-arg http_proxy=http://192.168.56.1:3128 \
 -t devops/jenkinsci .

Once it succeeds, we can use the command below to verify whether the newly built Docker image is in the local image cache:

$ docker images

Now we have successfully got our customized Docker image for Jenkins.

2. Engage Docker Compose

What's next? Well, it's time to engage Docker Compose to spin up our container(s).
(BTW, if there is just one container to spin up, a plain docker run command works as well. But considering that more types of components will be involved, Docker Compose is definitely the better choice...)

Let's prepare the docker-compose.yml:
$ pwd
/home/devops
$ touch docker-compose.yml
$ vim docker-compose.yml

~/docker-compose.yml
# Jenkins CI Server
jenkins:
  # 1 - Yeah, we use our customized Jenkins image
  image: devops/jenkinsci
  volumes:
    # 2 - JENKINS_HOME
    - /data/jenkins:/var/jenkins_home
    # 3 - Share host's resources to container
    - /opt/jdk/java_home:/opt/jdk/java_home
    - /opt/maven:/opt/maven
    - /data/mvn_repo:/data/mvn_repo
    # 4 - see explanation
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker

    # 5 - see explanation
    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  ports:
    - "8081:8080"

Again, the explanation is as below:
  1. We employ our own customized Jenkins image;
  2. We mount the host's /data/jenkins folder to /var/jenkins_home, the default JENKINS_HOME location used by Jenkins;
  3. We share the host's JDK, Maven and local Maven repository with the container by mounting them as volumes;
  4. That's from the instructions of the great Docker sharing solution (I'd like to call it "Docker with Docker", haha), see here;
  5. Some libraries have to be shared for the above "Docker with Docker" solution to work.
To start it up, simply key in:
$ docker-compose up

After a few seconds, our Jenkins container will be ready for use; simply open the browser of your choice and access a URL like http://{IP}:8081/ -- a brand new and fully customized Jenkins will show up in front of you.
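
Note: if the image runs the Jenkins 2.x setup wizard, it will ask for an initial admin password on first start; since JENKINS_HOME is mounted from the host, it can be read there (path per the mount above; adjust if your image differs):

$ cat /data/jenkins/secrets/initialAdminPassword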

What's next? The Jenkins console is where you'll create and configure build projects to get CI working.
You may refer to the Jenkins documentation to get started.

Conclusion

From these practices, the useful points we've learned and gained are:
  1. We gain more benefits (e.g. control, flexibility) when we customize our Jenkins image, and the customization is pretty easy;
  2. We can share a lot of resources from the host -- libs, JDK, Maven and any other potentially heavy/big stuff -- so that we can always keep our Docker container as lightweight as possible.

Interesting right? Let me know if you have any comments.

Tuesday, 23 August 2016

Consul - An Excellent Framework for Service Discovery and More: Get Started



1. What is Consul?

From Consul's introduction:
Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure.
It provides several key features:

  • Service Discovery: Clients of Consul can provide a service, such as api or mysql, and other clients can use Consul to discover providers of a given service. Using either DNS or HTTP, applications can easily find the services they depend upon.
  • Health Checking: Consul clients can provide any number of health checks, either associated with a given service ("is the webserver returning 200 OK"), or with the local node ("is memory utilization below 90%"). This information can be used by an operator to monitor cluster health, and it is used by the service discovery components to route traffic away from unhealthy hosts.
  • Key/Value Store: Applications can make use of Consul's hierarchical key/value store for any number of purposes, including dynamic configuration, feature flagging, coordination, leader election, and more. The simple HTTP API makes it easy to use.
  • Multi Datacenter: Consul supports multiple datacenters out of the box. This means users of Consul do not have to worry about building additional layers of abstraction to grow to multiple regions.

Consul is designed to be friendly to both the DevOps community and application developers, making it perfect for modern, elastic infrastructures.

2.  Latest Version

As of writing, the latest version is v0.6.4.

3. Installation

Assume that we have two servers (Ubuntu 14 and 16, 64bit) in this practice:
  • 192.168.56.118
  • 192.168.56.119
We must install Consul on both servers before exploring further.
$ mkdir ~/consul
$ cd ~/consul
$ wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
$ unzip consul_0.6.4_linux_amd64.zip
$ mv consul /usr/local/bin
$ consul -v
Consul v0.6.4
Consul Protocol: 3 (Understands back to: 1)

4. Start Up

There are several types of roles in Consul cluster:
  • Server
  • Client
A server can run in bootstrap mode or as a "normal" server. I won't deep dive too much here, as all the details can be found in Consul's Documentation.

4.1 As Bootstrap Instance

$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul

But we may encounter issues like “Failed to get advertise address: Multiple private IPs found. Please configure one.”, which happens when multiple network adapters are configured.
In this case, start it up this way:
$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -bind 192.168.56.119

4.2 As Other Server/Client Instance

On the other servers, we should start it up as a normal server (without -bootstrap):
$ consul agent -server -data-dir /tmp/consul -bind 192.168.56.118

Or client:
$ consul agent -data-dir /tmp/consul -bind 192.168.56.118


Note: “No cluster leader” errors will appear after this command because the agent cannot find a cluster leader and is not enabled to become the leader itself. This happens because our servers are up, but none of them are connected with each other yet.

5. consul join

To connect them to each other, we need to join these servers to one another. This can be done in either direction.
So we can do it on any server; let's say on 192.168.56.119 we join the other one in:
$ consul join 192.168.56.118
Successfully joined cluster by contacting 1 nodes.

6. consul members

We can check our members and statuses:
$ consul members
Node       Address              Status  Type    Build  Protocol  DC
ubuntu-14  192.168.56.118:8301  alive   server  0.6.4  2         dc1
ubuntu16   192.168.56.119:8301  alive   server  0.6.4  2         dc1

7. Configuration

As a best practice, we should create a dedicated folder for further configuration, like services and checks.
$ sudo mkdir /etc/consul.d

And restart the agents with "-config-dir" specified:

  • For Bootstrap agent/server:
$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -bind 192.168.56.119 -config-dir /etc/consul.d


  • For normal agent/server:
$ consul agent -server -data-dir /tmp/consul -bind 192.168.56.118 -config-dir /etc/consul.d

7.1 Add Service & Health Check

That's the sexiest part of Consul.

Let’s use nginx, one of the most popular HTTP servers, as an example.

On both nodes, install nginx:
$ sudo apt-get install nginx

And create a service file in our configuration folder:
$ sudo touch /etc/consul.d/web.json
$ sudo vim /etc/consul.d/web.json

Verify whether nginx is working:
$ curl --noproxy '*' 127.0.0.1

The web.json content is as below:
{
    "service": {
        "name": "web",
        "port": 80,
        "tags": ["nginx", "dev"],
        "check": {
            "script": "curl --noproxy '*' 127.0.0.1 > /dev/null 2>&1",
            "interval": "10s"
        }
    }
}

We can enable the service without restarting Consul by sending the agent a SIGHUP (1) signal:
$ ps -ef | grep consul
bright    4583  2415  1 09:21 pts/0    00:00:06 consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -bind 192.168.56.119 -config-dir /etc/consul.d
$ sudo kill -1 4583
==> Caught signal: hangup
==> Reloading configuration...
…
    2016/08/20 09:33:05 [INFO] agent: Synced service 'web'
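
(Alternatively, the consul reload subcommand triggers the same configuration reload.)

Once the service is synced, it can be discovered through Consul's DNS interface (default port 8600) or the HTTP catalog API; both probes below are standard Consul endpoints:

$ dig @127.0.0.1 -p 8600 web.service.consul
$ curl http://127.0.0.1:8500/v1/catalog/service/web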

8. Web UI Console

We can enable the built-in web UI by simply adding the “-ui” parameter at startup:
$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -bind 192.168.56.119 -config-dir /etc/consul.d -node=node1 -ui -client=192.168.56.119

Caution: we also need to specify “-client” here to “expose” the address.

Now open browser and key in: http://192.168.56.119:8500/ui/


We can always see a default “consul” service, which is Consul itself:


Click the “web” tab and we can see our web service is “passing”, which means healthy.



Of course, there are more features in the web UI console awaiting further exploration.

Well, we have done our "get started" part for Consul.

Tuesday, 12 July 2016

Travis-CI for Github public projects - the best friends to work together


For GitHub public projects, it's better to clearly display the build status so everyone has confidence in using your code.
It's absolutely necessary to have a Continuous Integration (CI) service integrated with our projects.

There is a range of such online CI services, and so far I love Travis-CI the most as, IMHO, they're the best friends to work together.

In short, Travis-CI provides online CI services. Most importantly, it's free of charge for open source projects and fully integrated with GitHub.

To make it happen, simply follow the steps below:

  1. Log in with an existing GitHub account and it will automatically sync our public GitHub repositories, which may take a couple of minutes (depending on how many repositories you have on GitHub);
  2. Enable one specific (or all) repository for CI with a single click of the switch;
  3. Add a dedicated .travis.yml file under our project (see the attachment below for my project as an example);
  4. Git add, commit, push to GitHub to automatically trigger the Travis-CI service;
  5. To display the build status in the GitHub project, simply do this:
    • Go to the specific repository (e.g. https://travis-ci.org/{username or org}/{repository})
    • Click the build badge, select Markdown from the dropdown list and copy the text;
    • Paste it into the README.md of our GitHub project.
      For example, below is the text for my project:

      [![Build Status](https://travis-ci.org/itstarting/springboot-jersey-swagger.svg?branch=master)](https://travis-ci.org/itstarting/springboot-jersey-swagger)
  6. Then git add, commit, push to GitHub again and the build status will be displayed in our GitHub project (see my project as an example here).

Attachment - My .travis.yml

## We develop the project by Java
language: java

## Do the tests
script: mvn clean test

## We have specified the JDK as v8 in Maven pom.xml
jdk:
  - oraclejdk8

Monday, 20 June 2016

From Code To Online Services: My experiments of DevOps - Development of RESTful Web Services by Spring Boot, Jersey, Swagger


1. The Stack of Frameworks / Specs


Well, any application could be built and used for this experiment, but why not build an interesting one: a real-world RESTful service that says hello to someone? Alright, that’s just another way of saying “hello world” application ;)

Here we go:

  • Spring Boot: The fundamental framework for the services.
  • Jersey: The reference implementation of the JAX-RS spec.
  • Swagger: The API documentation spec and tooling.
  • For testing, we’re going to use:
    • Spring Boot Test Framework (spring-boot-starter-test) with JUnit and Hamcrest
    • REST Assured

2. Git Server: Gogs


Even though we could set up the Git server later, version control is important, so please do it as early as possible.
Note: I did try GitLab, but: 1) it was too slow running on my laptop; and 2) I kept facing permission issues thrown from the embedded Nginx… so I changed my mind and chose Gogs instead, as it’s really “a painless self-hosted Git service”, as advertised.

2.1 Spin Up the Docker Container

Log in to the Ubuntu VM, our logical host, and execute the commands below:

$ cd ~
$ mkdir cd_demo
$ cd cd_demo
$ touch docker-compose.yml
$ vim docker-compose.yml

Gogs:
  image: 'gogs/gogs:latest'
  ports:
    - '3000:3000'
    - '1022:22'
  volumes:
    - '/data/gogs:/data'

Note: it’s a good practice to mount a local path (here /data) into the container so that all valuable data, like settings and the database, won’t be lost even when we upgrade the container.

$ docker-compose up

Now, we have spun up our first Docker container for Gogs, our Git server.

2.2 Configuration

In just a few seconds, the Gogs container should be up and running and we can access the app at: http://192.168.56.118:3000

It will guide us through the “Install Steps For First-time Run”.
Below were my inputs, for your reference:
  • Database Settings
    • Database Type: SQLite3
    • Path: /data/db/gogs.db
  • Application General Settings
    • Application Name: Gogs: Go Git Service
    • Repository Root Path: /data/git/gogs-repositories
    • Run User: git
    • Domain: 192.168.56.118
    • SSH Port: 22
    • HTTP Port: 3000
    • Application URL: http://192.168.56.118:3000/
    • Log Path: data/log
  • Optional Settings
    • Admin Account Settings: omitted
Once done, Gogs will persist the settings and redirect to the home page.

2.3 Create Our First Repository

All of this can be done via the Gogs UI:
Gogs - New Repository
Gogs - New Repository for Spring Boot, Jersey, Swagger

2.4 Clone the Git Repository to Kick Off Development

From the command line, it’s very simple and straightforward.
Open Git Bash and key in:

$ git clone http://192.168.56.118:3000/bright/springboot-jersey-swagger.git

Or we can do it in Eclipse, if you prefer.

3. Create Maven-based Project Structure

Since development is my strongest suit and code says more than words, let’s keep this part short but clear.
As we follow Maven’s best practices to build the project, the folder structure is exactly Maven-driven:
---------------------------------------------
/
   LICENSE
   pom.xml
   README.md
├───src
   ├───main
      ├───java
         └───bright
             └───zheng
                 └───poc
                     └───api
                            ApiApplication.java
                         ├───config
                                JerseyConfig.java
                         ├───model
                                Hello.java
                         └───resources
                                 HelloResource.java
      └───resources
             application.yml
             bootstrap.yml
             log4j.xml
          ├───static
   └───test
       └───java
           └───bright
               └───zheng
                   └───poc
                       └───api
                               ApplicationTests.java
---------------------------------------------

Meanwhile, I’ll highlight some important files with the necessary comments so that we can easily understand how to develop a RESTful application using the abovementioned frameworks.

3.1 pom.xml

/pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>

       <parent>
              <groupId>org.springframework.boot</groupId>
              <artifactId>spring-boot-starter-parent</artifactId>
              <version>1.3.5.RELEASE</version>
              <relativePath />
       </parent>

       <groupId>bright.zheng.poc.api</groupId>
       <artifactId>springboot-jersey-swagger</artifactId>
       <version>1.0.0-SNAPSHOT</version>
       <packaging>jar</packaging>

       <name>springboot-jersey-swagger-docker</name>
       <description>POC - Restful API by Spring Boot, Jersey, Swagger</description>


       <properties>
              <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
              <java.version>1.8</java.version>
             
              <!-- project related properties -->
              <start-class>bright.zheng.poc.api.ApiApplication</start-class>
              <swagger-jersey2-jaxrs>1.5.3</swagger-jersey2-jaxrs>
              <rest-assured>3.0.0</rest-assured>
       </properties>

       <dependencies>
              <dependency>
                     <groupId>org.springframework.boot</groupId>
                     <artifactId>spring-boot-starter-actuator</artifactId>
              </dependency>
              <dependency>
                     <groupId>org.springframework.boot</groupId>
                     <artifactId>spring-boot-starter-jersey</artifactId>
              </dependency>
              <dependency>
                     <groupId>org.springframework.boot</groupId>
                     <artifactId>spring-boot-starter-web</artifactId>
              </dependency>
              <dependency>
                     <groupId>io.swagger</groupId>
                     <artifactId>swagger-jersey2-jaxrs</artifactId>
                     <version>${swagger-jersey2-jaxrs}</version>
              </dependency>

              <!-- testing related dependencies -->
              <dependency>
                     <groupId>org.springframework.boot</groupId>
                     <artifactId>spring-boot-starter-test</artifactId>
                     <scope>test</scope>
              </dependency>
              <dependency>
                     <groupId>io.rest-assured</groupId>
                     <artifactId>rest-assured</artifactId>
                     <version>${rest-assured}</version>
                     <scope>test</scope>
              </dependency>
       </dependencies>

       <build>
              <plugins>
                     <plugin>
                           <groupId>org.springframework.boot</groupId>
                           <artifactId>spring-boot-maven-plugin</artifactId>
                     </plugin>
              </plugins>
       </build>
</project>

3.2 ApiApplication.java


\src\main\java\bright\zheng\poc\api\ApiApplication.java
package bright.zheng.poc.api;


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;

/**
 * This is a typical bootstrap class for a Spring Boot based application
 * 
 * @author bright.zheng
 *
 */
@SpringBootApplication
public class ApiApplication
extends SpringBootServletInitializer {  
       @Override
       protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
            return builder.sources(ApiApplication.class);
       }

       public static void main(String[] args) {
              SpringApplication.run(ApiApplication.class, args);
       }
}

3.3 HelloResource.java

\src\main\java\bright\zheng\poc\api\resources\HelloResource.java
package bright.zheng.poc.api.resources;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import bright.zheng.poc.api.model.Hello;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;

@Component
@Path("/v1/hello")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Api(value = "Hello API - say hello to the world", produces = "application/json")
public class HelloResource {

       private static final Logger LOGGER = LoggerFactory.getLogger(HelloResource.class);

       @GET                                     //JAX-RS Annotation
       @Path("/{name}")     //JAX-RS Annotation
       @ApiOperation(                           //Swagger Annotation
                     value = "Say hello by providing the name by URI, via standard json header",
                     response = Hello.class) 
       @ApiResponses(value = {           //Swagger Annotation
              @ApiResponse(code = 200, message = "Resource found"),
           @ApiResponse(code = 404, message = "Resource not found")
       })
       public Response sayHelloByGet(@ApiParam @PathParam("name") String name) {
              LOGGER.info("v1/hello/{} - {}", name, MediaType.APPLICATION_JSON);
              return this.constructHelloResponse(name, MediaType.APPLICATION_JSON);
       }
      
       ///////////////////////////////////////////////////////////////

       private Response constructHelloResponse(String name, String via) {
              //for testing how we handle 404
              if ("404".equals(name)) {
                     return Response.status(Status.NOT_FOUND).build();
              }
              Hello result = new Hello();
              result.setMsg(String.format("Hello %s - %s", name, via));
              return Response.status(Status.OK).entity(result).build();
       }
}

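Once the application is running (on port 8000, per the application.yml), this resource can be exercised with curl, including the 404 branch handled above:

$ curl http://localhost:8000/api/v1/hello/Bright
{"msg":"Hello Bright - application/json"}
$ curl -i http://localhost:8000/api/v1/hello/404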


3.4 JerseyConfig.java

\src\main\java\bright\zheng\poc\api\config\JerseyConfig.java
package bright.zheng.poc.api.config;

import javax.annotation.PostConstruct;

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.wadl.internal.WadlResource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import bright.zheng.poc.api.resources.HelloResource;
import io.swagger.jaxrs.config.BeanConfig;
import io.swagger.jaxrs.listing.ApiListingResource;
import io.swagger.jaxrs.listing.SwaggerSerializers;

/**
 * Jersey Configuration
 *
 * <p>
 *
 * The endpoint will expose not only the real APIs,
 * but also two important documents:
 * <li> - swagger spec: /swagger.json</li>
 * <li> - WADL spec: /application.wadl</li>
 *
 * </p>
 *
 * @author bright.zheng
 *
 */
@Component
public class JerseyConfig extends ResourceConfig {

       @Value("${spring.jersey.application-path:/api}")
       private String apiPath;

       public JerseyConfig() {
              this.registerEndpoints();
       }
      
       @PostConstruct
       public void init() {
              this.configureSwagger();
       }

       private void registerEndpoints() {
              this.register(HelloResource.class);
              this.register(WadlResource.class);
       }

       private void configureSwagger() {
              this.register(ApiListingResource.class);
              this.register(SwaggerSerializers.class);

              BeanConfig config = new BeanConfig();
              config.setTitle("POC - Restful API by Spring Boot, Jersey, Swagger");
              config.setVersion("v1");
              config.setContact("Bright Zheng");
              config.setSchemes(new String[] { "http", "https" });
              config.setBasePath(this.apiPath);
              config.setResourcePackage("bright.zheng.poc.api.resources");
              config.setPrettyPrint(true);
              config.setScan(true);
       }

}


4. Swagger UI Integration

It would be great to have Swagger UI integrated for automatic API documentation.
Simply follow the steps below:

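A minimal sketch of one common approach, assuming we serve the Swagger UI distribution as static content from the src/main/resources/static folder shown in the project tree (the paths are illustrative):

$ git clone https://github.com/swagger-api/swagger-ui.git
$ cp -r swagger-ui/dist/* src/main/resources/static/
# then edit src/main/resources/static/index.html to point its url at our spec:
#   url: "/api/swagger.json"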

5. Testing

Programmers write unit tests, of course. Here we go:

5.1 ApplicationTests.java

\src\test\java\bright\zheng\poc\api\ApplicationTests.java
package bright.zheng.poc.api;

import static io.restassured.RestAssured.expect;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.boot.test.WebIntegrationTest;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import io.restassured.RestAssured;

/**
 * Application level testing
 *
 * @author bright.zheng
 *
 */
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = ApiApplication.class)
@WebIntegrationTest({"server.port=0"})
public class ApplicationTests {
      
    @Value("${local.server.port}")
    private int port;
      
    @Before
    public void setupURL() {
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = port;
        RestAssured.basePath = "/";
    }
 
    /**
     *
     * {"app":{"name":"springboot-jersey-swagger"},"build":{"version":"1.0.0-SNAPSHOT"}}
     *
     * @throws Exception
     */
    @Test
    public void springroot_info() throws Exception {
       expect().
              body("app.name", equalTo("springboot-jersey-swagger")).
              body("build.version", equalTo("1.0.0-SNAPSHOT")).
           when().
              get("/info");
    }
   
    /**
     * {"msg":"Hello Bright - application/json"}
     *
     * @throws Exception
     */
    @Test
    public void hello_get() {
       expect().
              body("msg", containsString("Hello Bright")).
        when().
              get("/api/v1/hello/Bright");
    }

}



6. Run It


6.1 Start It Up


Using the Maven command line on Windows, it would be:
> cd /d D:\development\workspaces\java\POC\springboot-jersey-swagger
> mvn package
> java -jar target/springboot-jersey-swagger-1.0.0-SNAPSHOT.jar



In Eclipse, simply open ApiApplication.java and click Run -> Run As -> Java Application; it will boot up within seconds.

6.2 Take a Look at the API Documentation UI provided by Swagger

Simply open browser and key in: http://localhost:8000/


We can even try the services out. Simply key in the parameter (name here) and click the “Try it out!” button; we can see everything on the same page, including:

    • The curl command, if we want to reproduce it with curl
    • The URL we just requested
    • The response code
    • The response body
    • The response headers

7. Commit and Push to Git


Commit and push our first draft of the Hello Service to our local Git server.

By Git Bash:
$ git status
$ git add -A
$ git commit -m "first draft of Hello Service"
$ git push


Note: I also synced the code to GitHub, following these steps:
  • Create a GitHub repository: itstarting/springboot-jersey-swagger
  • Execute commands like:
$ cd /D/development/workspaces/java/POC/springboot-jersey-swagger
$ git remote add github https://github.com/brightzheng100/springboot-jersey-swagger.git
$ git remote -v
github  https://github.com/brightzheng100/springboot-jersey-swagger.git (fetch)
github  https://github.com/brightzheng100/springboot-jersey-swagger.git (push)
origin  http://192.168.56.118:3000/bright/springboot-jersey-swagger.git (fetch)
origin  http://192.168.56.118:3000/bright/springboot-jersey-swagger.git (push)
$ git push -u github master

In Eclipse it's very simple:
  • Right click the project -> Team -> Commit…
  • Choose all items in “Unstaged Changes”, right click -> Add to Index, which moves them into “Staged Changes”
  • Key in comments in “Commit Message”, click the “Commit and Push” button, and eventually we have the result below.


Well, the development work is done!


For complete source code, please visit my GitHub. Enjoy coding!