Saturday, 27 October 2018

How-to Series: How to `bosh scp` files into BOSH-managed VMs' Folders


Issue / Problem Statement


If we `bosh scp` directly to folders under /var/vcap/jobs/xxx/config, we will encounter a Permission denied error like:

$ bosh -e lite -d concourse4 scp concourse.yml web:/var/vcap/jobs/uaa/config/

Using environment '192.168.50.6' as client 'admin'

Using deployment 'concourse4'

Task 3687. Done
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | Unauthorized use is strictly prohibited. All access and activity
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | is subject to logging and monitoring.
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | scp: /var/vcap/jobs/uaa/config//concourse.yml: Permission denied

Interestingly, if we `bosh scp` to some folders like /tmp, it's fine:

$ bosh -e lite -d concourse4 scp concourse.yml web:/tmp/
Using environment '192.168.50.6' as client 'admin'

Using deployment 'concourse4'

Task 3689. Done
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | Unauthorized use is strictly prohibited. All access and activity
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | is subject to logging and monitoring.

Let's verify it:

$ bosh -e lite -d concourse4 ssh web -c 'ls -la /tmp/'
Using environment '192.168.50.6' as client 'admin'

Using deployment 'concourse4'

Task 3693. Done
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | Unauthorized use is strictly prohibited. All access and activity
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | is subject to logging and monitoring.
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | total 2128
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | drwxrwx--T  4 root                 vcap                    4096 Oct 27 13:27 .
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | drwxr-xr-x 34 root                 root                    4096 Oct 25 08:55 ..
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | -rw-r--r--  1 root                 root                  233394 Oct 25 08:57 ca-certificates.crt
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | -rw-r--r--  1 bosh_99ef981c59fa482 bosh_99ef981c59fa482    2496 Oct 27 13:27 concourse.yml
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | -rwxr-xr-x  1 root                 root                 1923450 Aug 24 05:46 garden-init
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | drwxr-xr-x  2 root                 root                    4096 Oct 25 08:57 hsperfdata_root
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | drwxr-x---  2 vcap                 vcap                    4096 Oct 25 08:57 hsperfdata_vcap
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | Connection to 10.244.0.104 closed.


Yes, we can see that the file concourse.yml has been successfully copied into our BOSH-managed VM web.
But what if we really want to scp file(s) to the right place under /var/vcap/jobs/xxx/config?

Solution


As we have seen, we can copy files over to /tmp, so let's use it as a staging area and wrap things up with a couple more commands.

$ bosh -e lite -d concourse4 scp concourse.yml web:/tmp/
$ bosh -e lite -d concourse4 ssh web -c 'sudo cp /tmp/concourse.yml /var/vcap/jobs/uaa/config/'
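
If this is something we do often, the two steps can be wrapped into a tiny helper script. Below is a minimal sketch (the environment and deployment names are hardcoded to match the examples above, and the script name push-to-job.sh is just a placeholder):

#!/bin/bash
# push-to-job.sh <file> <instance> <dest-dir>
# Stages the file in /tmp (writable by the bosh ssh user), then moves it
# into the destination folder with sudo and removes the staged copy.
set -euo pipefail

SRC_FILE="$1"     # e.g. concourse.yml
INSTANCE="$2"     # e.g. web
DEST_DIR="$3"     # e.g. /var/vcap/jobs/uaa/config/
STAGED="/tmp/$(basename "${SRC_FILE}")"

bosh -e lite -d concourse4 scp "${SRC_FILE}" "${INSTANCE}:${STAGED}"
bosh -e lite -d concourse4 ssh "${INSTANCE}" \
  -c "sudo cp ${STAGED} ${DEST_DIR} && rm -f ${STAGED}"

For example: ./push-to-job.sh concourse.yml web /var/vcap/jobs/uaa/config/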

Let's verify it again:

$ bosh -e lite -d concourse4 ssh web -c 'sudo ls -al /var/vcap/jobs/uaa/config/'
Using environment '192.168.50.6' as client 'admin'

Using deployment 'concourse4'

Task 3699. Done
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | Unauthorized use is strictly prohibited. All access and activity
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | is subject to logging and monitoring.
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | total 60
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | drwxr-x--- 3 root vcap  4096 Oct 27 13:34 .
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | drwxr-x--- 5 root vcap  4096 Oct 25 08:56 ..
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | -rw-r----- 1 root vcap   420 Oct 25 08:56 bpm.yml
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stdout | -rw-r--r-- 1 root root  2496 Oct 27 13:34 concourse.yml
...
web/4de05ffc-ae08-4dfa-a52a-a751e62b624f: stderr | Connection to 10.244.0.104 closed.

Yes, we made it! The file has been copied to the desired folder, in this case /var/vcap/jobs/uaa/config/.

I also posted it in my gist, here.



References:

1. https://bosh.io/docs/cli-v2/#scp

Tuesday, 23 October 2018

How-to Series: How to Quickly Setup OpenLDAP for Testing Purposes


Issue / Problem Statements


Like it or not, LDAP still plays a heavy role in most organizations as the user store and authentication authority.
Setting up a proper LDAP server may require some LDAP knowledge and experience, so an easy way to set up an LDAP server quickly (say, in under 10 minutes) becomes an obvious requirement.

Solution:


Highlights:


  • Use Docker
  • OpenLDAP
  • With sample data (which can be customized of course) pre-loaded
  • Provide sample `ldapsearch` queries

Steps:


1. Prepare the user and group data, the *.ldif files

$ mkdir testdata

$ cat testdata/1-users.ldif
dn: ou=people,dc=bright,dc=com
ou: people
description: All people in organisation
objectclass: organizationalunit

# admin1
dn: cn=admin1,ou=people,dc=bright,dc=com
objectClass: inetOrgPerson
sn: admin1
cn: admin1
uid: admin1
mail: admin1@bright.com
# secret
userPassword: {SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5

# admin2
dn: cn=admin2,ou=people,dc=bright,dc=com
objectClass: inetOrgPerson
sn: admin2
cn: admin2
uid: admin2
mail: admin2@bright.com
# secret
userPassword: {SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5

# developer
dn: cn=developer1,ou=people,dc=bright,dc=com
objectClass: inetOrgPerson
sn: developer1
cn: developer1
mail: developer1@bright.com
userPassword: {SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5

$ cat testdata/2-groups.ldif
dn: ou=groups,dc=bright,dc=com
objectClass: organizationalUnit
ou: groups

dn: cn=admins,ou=groups,dc=bright,dc=com
objectClass: groupOfNames
cn: admins
member: cn=admin1,ou=people,dc=bright,dc=com
member: cn=admin2,ou=people,dc=bright,dc=com

dn: cn=developers,ou=groups,dc=bright,dc=com
objectClass: groupOfNames
cn: developers
member: cn=admin1,ou=people,dc=bright,dc=com
member: cn=developer1,ou=people,dc=bright,dc=com
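
Note: the userPassword values above are SSHA hashes of the password "secret" (see the "# secret" comments). To use different passwords, one option -- assuming slappasswd from the OpenLDAP server tools is available, either locally or inside the container started in step 2 -- is to generate a hash and paste it into the LDIF:

# Locally, if the slapd package is installed:
$ slappasswd -s MyNewPassword
{SSHA}...

# Or inside the running container from step 2:
$ docker exec my-openldap-container slappasswd -s MyNewPassword
{SSHA}...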


2. Start It Up

$ docker run --name my-openldap-container \
    --env LDAP_ORGANISATION="Bright Inc" \
    --env LDAP_DOMAIN="bright.com" \
    --env LDAP_ADMIN_PASSWORD="secret" \
    --volume "$(pwd)"/testdata:/container/service/slapd/assets/config/bootstrap/ldif/custom \
    -p 10389:389 \
    --detach \
    osixia/openldap:1.2.2 --copy-service --loglevel debug

Now the LDAP service is exposed on local port 10389 (mapped from container port 389).

Note: to clean it up, use the command below:
$ docker stop $(docker ps -aqf "name=my-openldap-container") && \
  docker rm $(docker ps -aqf "name=my-openldap-container")

3. Test It Out

$ ldapsearch -LLL -x \
    -H ldap://localhost:10389 \
    -D "cn=admin,dc=bright,dc=com" -w secret \
    -b 'dc=bright,dc=com' \
    dn
dn: dc=bright,dc=com
dn: cn=admin,dc=bright,dc=com
dn: ou=people,dc=bright,dc=com
dn: cn=admin1,ou=people,dc=bright,dc=com
dn: cn=admin2,ou=people,dc=bright,dc=com
dn: cn=developer1,ou=people,dc=bright,dc=com
dn: ou=groups,dc=bright,dc=com
dn: cn=admins,ou=groups,dc=bright,dc=com
dn: cn=developers,ou=groups,dc=bright,dc=com

$ ldapsearch -LLL -x \
    -H ldap://localhost:10389 \
    -D "cn=admin,dc=bright,dc=com" -w secret \
    -b 'dc=bright,dc=com' \
    '(&(objectClass=groupOfNames)(member=cn=admin2,ou=people,dc=bright,dc=com))' \
    cn
dn: cn=admins,ou=groups,dc=bright,dc=com
cn: admins
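
As one more sanity check, we can bind as one of the pre-loaded test users (instead of the directory admin) to confirm that the sample password works; it should return the user's entry:

$ ldapsearch -LLL -x \
    -H ldap://localhost:10389 \
    -D "cn=admin1,ou=people,dc=bright,dc=com" -w secret \
    -b 'ou=people,dc=bright,dc=com' \
    '(uid=admin1)' mail
dn: cn=admin1,ou=people,dc=bright,dc=com
mail: admin1@bright.com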

That's it. Hope it helps!

Thursday, 11 October 2018

How-to Series: How To Correctly Enable Self-signed Cert for PCF - NSX-T Integration


Issue Description:

While trying to enable NSX-T for Pivotal Cloud Foundry (PCF), OpsMan complains about:

Error connecting to NSX IP: The NSX server's certificate is not signed with the provided NSX CA cert. Please provide the correct NSX CA cert', type: IaasConfigurationVerifier




Issue Analysis:

As the connectivity between PCF and the NSX-T Manager is secured by SSL, one has to configure, in PEM format, the CA certificate that can authenticate the NSX-T Manager's server certificate.
Refer to the screenshot below for the configuration requirements.


Now the question is how to obtain the right CA cert, and what the right way to handle this situation should be.

Solution:

If you're using a simple self-signed cert, you can probably try the "quick" solution below -- simply use the openssl tool to retrieve the cert, as suggested in the doc here:

openssl s_client -showcerts \
  -connect NSX-MANAGER-ADDRESS:443 < /dev/null 2> /dev/null | openssl x509

That way you can simply grab the cert and paste it into the configuration.

But sometimes that won't work, so I'd always recommend a "better" way; the steps are as below:

1. Generate NSX-T CSR

Log on to the NSX Manager and navigate to System | Trust, click the CSRs tab and then “Generate CSR”, populate the certificate request and click Save.
Select the new CSR and click Actions | Download CSR PEM to save the exported CSR in PEM format.


2. Ask Your CA to Sign This CSR

Submit the CSR file to your CA to get it signed and get back the new certificate.

Some organizations have a CA chain and use an intermediate CA to sign the actual CSR. In this case, please remember to make it a full cert chain by concatenating the certs (simply copy & paste them one after another in a good text editor like Visual Studio Code, or use cat as sketched after this list) in this order:
- Newly signed NSX Certificate
- Intermediate CA
- Root CA
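
For example, a quick way to do the concatenation on the command line (the file names here are hypothetical):

$ cat nsx-signed.crt intermediate-ca.crt root-ca.crt > nsx-cert-chain.crt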

But sometimes there is no official internal CA; in that case, I'd always recommend generating one so that you have better control and an easier trust chain when handling big distributed systems like PCF.
I compiled a simple two-step process for doing that:
- Gist of generate-internal-ca.md
- Gist of sign-certs-with-internal-ca.md


3. Configure NSX Manager

Now let's assume our CSR has been signed and we have the concatenated certs at hand.

In NSX Manager, select the CSRs tab and click Actions | Import Certificate for CSR.
In the window, paste in the concatenated certificates from above and click save.

Now you’ll have a new certificate and CA certs listed under Certificates tab.


4. Apply the Certs

Under the Certificates tab, copy the full ID of the newly added certificate (not the CA).
Note: the UI only shows a portion of the ID by default; click it to display the full ID and copy it to the clipboard.

Launch RESTClient in Firefox or Advanced REST client in Chrome.
Make sure the request is configured as below:
- URL: https://<NSX Manager IP or FQDN>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate ID copied in previous step>
- Authentication: Basic Authentication, enter the NSX Manager credentials for Username and Password
- Method: POST

Click the SEND button and make sure that the response status code is 200.
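
If you prefer the command line over a REST client, the same call can be made with curl, roughly as sketched below (placeholders left as-is; -k is used because the new certificate has not been applied yet):

$ curl -k -u 'admin:<NSX Manager password>' -X POST \
    "https://<NSX Manager IP or FQDN>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate ID copied in previous step>"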

Refresh browser session to NSX Manager GUI to confirm new certificate is in use.


Ref: 
1. https://brianragazzi.wordpress.com/2018/03/08/replacing-the-self-signed-certificate-on-nsx-t/

Monday, 18 September 2017

Deploy Vault, the great credential management tool, by BOSH

Yes, We Need Credential Management Tool

Credential management is critical in the microservices world.
Within a microservices architecture, a whole series of services/apps are deployed into our environment. To provide a service -- let's use MySQL as an example -- to identified applications, creating credentials is mandatory.
Now, how do we hand those credentials over to the identified applications? And how do we make sure the credentials won't be leaked to developers/operators?

Yes, in short, we need credential management tools like Vault.

Since BOSH is a great platform for deploying and managing software clusters, it's a good idea to let Vault work with BOSH.

Preparation & Requisites

1. Prepare Infra

The process below is based on Google Cloud Platform (GCP).
But, as BOSH is a great platform for abstracting IaaSes, changing to another IaaS, for example AWS or Azure, is a trivial thing to handle.

What we need to prepare are:

  • VPC with subnet(s). In my example, my VPC is as below
    • VPC Network: bosh-sandbox
    • Subnet: bosh-releases
    • Region: us-central1
    • IP address range: 10.0.100.0/22
  • A Service Account with necessary Roles, for example:
    • App Engine Admin
    • Compute Instance Admin (v1)
    • Compute Network Admin
    • Compute Storage Admin
  • A Ubuntu Jumpbox (within the same VPC)

2. Create BOSH Director

$ git clone https://github.com/cloudfoundry/bosh-deployment
$ bosh create-env bosh-deployment/bosh.yml \
    --state=state.json \
    --vars-store=creds.yml \
    -o bosh-deployment/gcp/cpi.yml \
    -v director_name=bosh-gcp \
    -v internal_cidr=10.0.100.0/22 \
    -v internal_gw=10.0.100.1 \
    -v internal_ip=10.0.100.6 \
    --var-file gcp_credentials_json=<CREDENTIAL JSON FILE> \
    -v project_id=<PROJECT ID> \
    -v zone=us-central1-a \
    -v tags=[internal,bosh] \
    -v network=bosh-sandbox \
    -v subnetwork=bosh-releases

3. Alias BOSH Env

$ bosh alias-env sandbox -e 10.0.100.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)

4. Export & Login to the Director

$ export BOSH_CLIENT=admin && export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`

5. Prepare & Update Cloud Config

$ bosh -e sandbox update-cloud-config cloud-config-gcp.yml
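
The content of cloud-config-gcp.yml is not listed in this post. For reference, a minimal sketch that would satisfy the network-z1-only / small / z1 names used by the Vault deployment below might look like this (the machine type, zone and reserved range are assumptions based on the VPC described above):

azs:
- name: z1
  cloud_properties: {zone: us-central1-a}

vm_types:
- name: small
  cloud_properties:
    machine_type: n1-standard-1
    root_disk_size_gb: 20
    root_disk_type: pd-standard

networks:
- name: network-z1-only
  type: manual
  subnets:
  - range: 10.0.100.0/22
    gateway: 10.0.100.1
    azs: [z1]
    static: [10.0.100.10]
    reserved: [10.0.100.1-10.0.100.9]
    cloud_properties:
      network_name: bosh-sandbox
      subnetwork_name: bosh-releases
      tags: [internal]

compilation:
  workers: 2
  reuse_compilation_vms: true
  az: z1
  vm_type: small
  network: network-z1-only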

6. Upload Stemcells

$ bosh -e sandbox upload-stemcell stemcells/light-bosh-stemcell-3421.11-google-kvm-ubuntu-trusty-go_agent.tgz

Prepare Manifest File

$ vi vault.yml

With the content below:
---
name: vault

instance_groups:
- instances: 1
  name: vault
  networks: [
    {
      name: ((VAULT_NW_NAME)), 
      static_ips: ((VAULT_STATIC_IP))
    }
  ]
  persistent_disk: 4096
  properties:
    vault:
      backend:
        use_file: true
      ha:
        redirect: null
  vm_type: ((VAULT_VM_TYPE))
  stemcell: trusty
  azs: [((VAULT_AZ_NAME))]
  jobs:
  - name: vault
    release: vault

releases:
- name: vault
  version: 0.6.2
  url: https://bosh.io/d/github.com/cloudfoundry-community/vault-boshrelease?v=0.6.2
  sha1: 36fd3294f756372ff9fbbd6dfac11fe6030d02f9

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

update:
  canaries: 1
  canary_watch_time: 1000-30000
  max_in_flight: 50
  serial: false
  update_watch_time: 1000-30000

Deploy Vault

$ bosh -e sandbox -d vault deploy vault.yml \
    -v VAULT_NW_NAME=network-z1-only \
    -v VAULT_STATIC_IP=10.0.100.10 \
    -v VAULT_VM_TYPE=small \
    -v VAULT_AZ_NAME=z1

Once it's done, we can verify:
$ bosh -e sandbox -d vault vms
Using environment '10.0.100.6' as client 'admin'

Task 118. Done

Deployment 'vault'

Instance                                    Process State  AZ  IPs          VM CID                                   VM Type
vault/8753e1bb-e486-44a2-8bba-25fa4c2278b4  running        z1  10.0.100.10  vm-86c20491-ebdc-4948-631f-e0ac733d0690  small

1 vms

Succeeded

Post Actions

After the deployment is done, enable Vault with the following steps.

1. Prepare Tools

$ wget https://releases.hashicorp.com/vault/0.8.2/vault_0.8.2_linux_amd64.zip
$ unzip vault_0.8.2_linux_amd64.zip
$ chmod +x vault && sudo mv vault /usr/local/bin/
$ vault -v
    Vault v0.8.2 ('9afe7330e06e486ee326621624f2077d88bc9511')

2. Unseal Vault

$ export VAULT_ADDR=http://10.0.100.10:8200
$ vault init
    Unseal Key 1: xxx
    Unseal Key 2: xxx
    Unseal Key 3: xxx
    Unseal Key 4: xxx
    Unseal Key 5: xxx
    Initial Root Token: xxx

    Vault initialized with 5 keys and a key threshold of 3. Please
    securely distribute the above keys. When the vault is re-sealed,
    restarted, or stopped, you must provide at least 3 of these keys
    to unseal it again.

    Vault does not store the master key. Without at least 3 keys,
    your vault will remain permanently sealed.
$ vault unseal
    Key (will be hidden):
    Sealed: true
    Key Shares: 5
    Key Threshold: 3
    Unseal Progress: 1
    Unseal Nonce: 1ffc83e5-2e5f-1038-4e68-0223f1544746
$ vault unseal
    Key (will be hidden):
    Sealed: true
    Key Shares: 5
    Key Threshold: 3
    Unseal Progress: 2
    Unseal Nonce: 1ffc83e5-2e5f-1038-4e68-0223f1544746
$ vault unseal
    Key (will be hidden):
    Sealed: false
    Key Shares: 5
    Key Threshold: 3
    Unseal Progress: 0
    Unseal Nonce:
$ vault auth
    Token (will be hidden):
    Successfully authenticated! You are now logged in.
    token: 5e3f7eba-2e27-fc74-7c55-f6084bad4b00
    token_duration: 0
    token_policies: [root]

3. Try Putting Values

Now, you can put secrets in the vault, and read them back out. For example:

$ vault write secret/test mykey=myvalue
    Success! Data written to: secret/test
$ vault read secret/test
    Key              Value
    ---              -----
    refresh_interval 768h0m0s
    mykey            myvalue
$ vault delete secret/test
    Success! Deleted 'secret/test' if it existed.

Yeah! Vault, managed by BOSH, is ready to rock.

Note: this is just the basic setup; please refer to vault-boshrelease for more advanced topics like HA, zero-downtime upgrades, etc.




Thursday, 20 October 2016

Integrated Continuous Deployment (CD) Solution with Mesos, Zookeeper, Marathon on top of CI

Overview

After developing our RESTful webservices (see post here) and the CI exercises (see post here), we eventually reach the Continuous Deployment (CD) part.
Obviously, we’re going to experiment with a Docker-based CD process.

The components below will be engaged to build up our integrated CD solution:

  • The Docker image we built and published previously in our private Docker Registry;
  • Apache Mesos: The distributed systems kernel that manages our CPU, memory, storage, and other compute resources within our distributed clusters;
  • Apache ZooKeeper: A highly reliable distributed coordination tool for the Apache Mesos cluster;
  • Mesosphere Marathon: The container orchestration platform for Apache Mesos.

What we’re going to focus on in this chapter is highlighted below:



Update docker-compose.yml By Adding CD Components

Having set up the infrastructure for CI previously, it’s pretty easy to add more components to our Docker Compose configuration file for our CD solution, which include:
  • ZooKeeper
  • Mesos: Master, Slave
  • Marathon

ZooKeeper

~/docker-compose.yml
…
# Zookeeper
zookeeper:
  image: jplock/zookeeper:3.4.5
  ports:
    - "2181"

Mesos: mesos-master, mesos-slave

~/docker-compose.yml
…
# Mesos Master
mesos-master:
  image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
  hostname: mesos-master
  links:
    - zookeeper:zookeeper
  environment:
    - MESOS_ZK=zk://zookeeper:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_WORK_DIR=/var/lib/mesos
    - MESOS_LOG_DIR=/var/log
  ports:
    - "5050:5050"

# Mesos Slave
mesos-slave:
  image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
  links:
    - zookeeper:zookeeper
    - mesos-master:mesos-master
  environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
    - MESOS_LOG_DIR=/var/log
  volumes:
    - /var/run/docker.sock:/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker

    - /sys:/sys:ro
    - mesosslave-stuff:/var/log

    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  expose:
    - "5051"

Marathon

~/docker-compose.yml
…
# Marathon
marathon:
  image: mesosphere/marathon
  links:
    - zookeeper:zookeeper
  ports:
    - "8080:8080"
  command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon

The Updated docker-compose.yml

Now the final updated ~/docker-compose.yml is as below:
# Git Server
gogs:
  image: 'gogs/gogs:latest'
  #restart: always
  hostname: '192.168.56.118'
  ports:
    - '3000:3000'
    - '1022:22'
  volumes:
    - '/data/gogs:/data'

# Jenkins CI Server
jenkins:
  image: devops/jenkinsci
#  image: jenkinsci/jenkins
#  links:
#    - marathon:marathon
  volumes:
    - /data/jenkins:/var/jenkins_home

    - /opt/jdk/java_home:/opt/jdk/java_home
    - /opt/maven:/opt/maven
    - /data/mvn_repo:/data/mvn_repo

    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /etc/default/docker:/etc/default/docker

    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  ports:
- "8081:8080"

# Private Docker Registry
docker-registry:
  image: registry
  volumes:
    - /data/registry:/var/lib/registry
  ports:
    - "5000:5000"

# Zookeeper
zookeeper:
  image: jplock/zookeeper:3.4.5
  ports:
    - "2181"

# Mesos Master
mesos-master:
  image: mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
  hostname: mesos-master
  links:
    - zookeeper:zookeeper
  environment:
    - MESOS_ZK=zk://zookeeper:2181/mesos
    - MESOS_QUORUM=1
    - MESOS_WORK_DIR=/var/lib/mesos
    - MESOS_LOG_DIR=/var/log
  ports:
    - "5050:5050"

# Mesos Slave
mesos-slave:
  image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
  links:
    - zookeeper:zookeeper
    - mesos-master:mesos-master
  environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
    - MESOS_LOG_DIR=/var/log
  volumes:
    - /var/run/docker.sock:/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker

    - /sys:/sys:ro
    - mesosslave-stuff:/var/log

    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  expose:
    - "5051"

# Marathon
marathon:
  image: mesosphere/marathon
  links:
    - zookeeper:zookeeper
  ports:
    - "8080:8080"
  command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon

Spin Up the Docker Containers

$ docker-compose up

Verify Our Components

Jenkins: http://192.168.56.118:8081/
Marathon: http://192.168.56.118:8080/

Deployment By Using Mesos + Marathon

Marathon provides RESTful APIs for managing the applications we're going to run on the platform, such as deploying or deleting an application.
So the idea we're going to illustrate is to:
  1. Prepare the payload, or the Marathon deployment file;
  2. Wrap the interactions with Marathon as simple deployment scripts which can eventually be "plugged" into Jenkins as one more build step to kick off the CD pipeline.

Marathon Deployment File

Firstly we need to create the Marathon deployment file, which is in JSON format, for scheduling our application on Marathon.
Let’s name it marathon-app-springboot-jersey-swagger.json and put it under /data/jenkins/workspace/springboot-jersey-swagger (which will eventually be mapped to the Jenkins container’s path “/var/jenkins_home/workspace/springboot-jersey-swagger”, i.e. "${WORKSPACE}/", so that it’s reachable in Jenkins).
$ sudo touch /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
$ sudo vim /data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json

/data/jenkins/workspace/springboot-jersey-swagger/marathon-app-springboot-jersey-swagger.json
{
    "id": "springboot-jersey-swagger", 
    "container": {
      "docker": {
        "image": "192.168.56.118:5000/devops/springboot-jersey-swagger:latest",
        "network": "BRIDGE",
        "portMappings": [
          {"containerPort": 8000, "servicePort": 8000}
        ]
      }
    },
    "cpus": 0.2,
    "mem": 256.0,
    "instances": 1
}

Deployment Scripts

Once we have the Marathon deployment file, we can write the deployment script that removes the currently running application and redeploys it using the new image.
There are better upgrade strategies available out of the box, but I won’t discuss them here.
Let’s put it under /data/jenkins/workspace/springboot-jersey-swagger (which will eventually be mapped to the Jenkins container’s path above so that it’s reachable in Jenkins), and don’t forget to grant it execute permission.
$ sudo touch /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
$ sudo chmod +x /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh
$ sudo vim /data/jenkins/workspace/springboot-jersey-swagger/deploy-app-springboot-jersey-swagger.sh

#!/bin/bash

# We can make it much more flexible if we expose some of below as parameters
APP="springboot-jersey-swagger"
MARATHON="192.168.56.118:8080"
PAYLOAD="${WORKSPACE}/marathon-app-springboot-jersey-swagger.json"
CONTENT_TYPE="Content-Type: application/json"

# Delete the old application
# Note: again, this can be enhanced for a better upgrade strategy
curl -X DELETE -H "${CONTENT_TYPE}" http://${MARATHON}/v2/apps/${APP}

# Wait for a while
sleep 2

# Post the application to Marathon
curl -X POST -H "${CONTENT_TYPE}" http://${MARATHON}/v2/apps -d@${PAYLOAD}

Plug It In By Adding One More Build Step In Jenkins

Simply add one more build step after the Maven step.
It's a "Execute Shell" build step and the commands are really simple:
cd ${WORKSPACE}
./deploy-app-springboot-jersey-swagger.sh
Save it.

Trigger the Process Again

By default, any code check-in will automatically trigger the build and deployment process.
But for testing purposes, we can click “Build Now” to trigger it manually.
The Console Output is really verbose but it can help you understand the whole process. Check it out here.

Once the process is done in Jenkins (we can monitor the progress by viewing the Console Output), we can see our application appear in Marathon's “Deployments” tab and then quickly move to the “Applications” tab, which means that the deployment is done.
Note: if an application keeps pending in deployment because it is “waiting for resource offer”, it’s most likely because there are insufficient resources (e.g. memory) in Mesos. In this case, we may need to add more Mesos slaves or fine-tune the JSON deployment file to request fewer resources.



One may notice that our application shows up under “Unknown”. Why? It’s because we haven’t set up a health check mechanism yet, so Marathon doesn't really know whether the application is healthy or not.
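
A minimal sketch of such a health check, added as a top-level field in marathon-app-springboot-jersey-swagger.json (the path and timing values below are assumptions -- point it at an endpoint your app actually serves):

"healthChecks": [
  {
    "protocol": "HTTP",
    "path": "/",
    "portIndex": 0,
    "gracePeriodSeconds": 300,
    "intervalSeconds": 30,
    "timeoutSeconds": 20,
    "maxConsecutiveFailures": 3
  }
]

After redeploying with this field, the application should move from “Unknown” to “Healthy” once the endpoint starts responding.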

Verify Our App

Once the application is deployed by Mesos + Marathon, new Docker container(s) will be spun up automatically.
But how can we access our application deployed by Mesos + Marathon? The port configuration part in Marathon may be a bit confusing and we will figure it out later. For now, we can find out which port our app is exposed on with the command below:
$ docker ps
CONTAINER ID        IMAGE                                                         COMMAND                  CREATED             STATUS              PORTS                                          NAMES
a020940ae0b5        192.168.56.118:5000/devops/springboot-jersey-swagger:latest   "java -jar /springboo"   8 minutes ago       Up 8 minutes        0.0.0.0:31489->8000/tcp                        mesos-e51e55d3-cd64-4369-948f-aaa433f12c1e-S0.b0e0d351-4da0-4af5-be1b-5e7dab876d06
...

As highlighted in the PORTS column, the port exposed to the host is 31489, so our application URL will be:
http://{the docker host ip}:31489, or http://192.168.56.118:31489 in my case.
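
Alternatively, instead of running docker ps on the host, we can ask Marathon itself which host and port were assigned to the task -- a sketch using Marathon's REST API:

$ curl -s http://192.168.56.118:8080/v2/apps/springboot-jersey-swagger/tasks

The returned JSON lists each task with its "host" and "ports" fields, e.g. port 31489 in this run.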

Yes, once you reach here, our application's continuous deployment is now part of our integrated CI + CD pipeline. We did it!

Conclusion

Now it's time to recap our DevOps story.
I've updated the diagram a bit to better illustrate the process and the components we have engaged.

The Development As Usual

Firstly, as usual, we develop our RESTful webservices following a microservices architecture with components like Java, Spring Boot, Jersey, Swagger, Maven and more.
Our source code is hosted by Gogs, which is our private Git Repository.
We develop, run unit tests, check in code -- everything is just as usual.

The Continuous Integration (CI) Comes To Help

Jenkins, acting as our CI server, detects changes in our Git repository and triggers our integrated CI + CD pipeline.
For the CI part, Jenkins engages Maven to run through the tests, both unit tests and SIT if any, and to build our typical Spring Boot artifact, an all-in-one fat Jar file. Meanwhile, with the Docker Maven plugin, docker-maven-plugin, Maven takes one step further and builds our desired Docker image, in which our application Jar file is hosted.
The Docker image is then published to our private Docker Registry, and our CI part ends.

The Final Show in Continuous Deployment (CD)

Apache Mesos, together with Marathon and ZooKeeper, forms the foundation of our CD world.
The trigger point is still in Jenkins: by adding one more build step, once the CI part is done, Jenkins executes our shell script, which wraps the interactions with Marathon.
By posting a predefined JSON payload, the deployment file, to Marathon, we have Marathon deploy our application to the Mesos platform. As a result, Docker container(s) are spun up automatically with specific resources (e.g. CPU, RAM) allocated for our application.
Accordingly, the web services offered by our application are up and running.
That's our whole story of "from code to online services".

Future Improvements?

Yes, of course. It's still a DevOps prototype, after all.
Along the way, for simplicity's sake, some topics and concerns haven't been properly addressed yet; they include, but are not limited to:
  • Using multi-server cluster to make it "real";
  • Introducing security and ACL to bring in necessary control;
  • Applying HAProxy setup for the service clustering;
  • Enabling auto scaling;
  • Adding health check mechanism for our application into Marathon;
  • Fine-tuning the application upgrade strategy;
  • Engaging Consul for automatic service discovery;
  • Having a centralized monitoring mechanism;
  • Trying it out with AWS or Azure;
  • And more

Final Words

Continuous Delivery (= Continuous Integration + Continuous Deployment) is a big topic and has various solutions in the market now, be they commercial or open source.
I started working on this setup mainly to get my hands dirty, and also to help developers and administrators learn how to use a series of open source components to build up a streamlined CI + CD pipeline, so that we can appreciate the beauty of continuous delivery.
As mentioned in the "Future Improvements" chapter above, a lot more is still waiting to be explored. But that's also why architects like me love the way IT rocks, isn't it?

Enjoy DevOps and appreciate your comments if any!


Monday, 3 October 2016

Integrated Continuous Integration (CI) Solution With Jenkins, Maven, docker-maven-plugin, Private Docker Registry


In a previous blog, I shared how to run Jenkins + Maven in a Docker container here, and we could already automate the Maven-based build in our Jenkins CI server, which is a great kick-off for the CI process.

But, while reviewing what we're going to achieve in the diagram below, we realize that having a Spring Boot-powered jar file is not the end goal: we want a Docker image built and registered in our private Docker Registry so that we're fully ready for the coming Continuous Deployment!

Integrated Jenkins + Maven + docker-maven-plugin

So let's focus on the highlighted part of the above diagram and build an integrated Jenkins + Maven + docker-maven-plugin solution, also by using Docker container technology!

Build Up Our Private Docker Registry

Docker has provided a great Docker image for building up our private Docker Registry, here.
Let's build it by simply running through the commands below:
$ sudo mkdir /data/registry
$ sudo chown -R devops:docker /data/registry
$ whoami
devops
$ pwd
/home/devops
$ vim docker-compose.yml
-------------------------------
# Private Docker Registry
docker-registry:
  image: registry
  volumes:
    - /data/registry:/var/lib/registry
  ports:
    - "5000:5000"
-------------------------------

It's almost done, let's spin it up:
$ docker-compose up

And try it out:
$ curl http://127.0.0.1:5000/
"\"docker-registry server\""

Well, our private Docker Registry works, but wait… the Docker client can’t push images to it yet.
We need to make some changes to Docker's configuration before pushing images:
$ sudo vim /etc/default/docker
---------------------------
DOCKER_OPTS="--insecure-registry 192.168.56.118:5000"
---------------------------

And then restart the Docker daemon:
$ sudo service docker restart

Now our private Docker Registry is ready to rock.

But how do we verify whether our Docker Registry is really working as expected?
It must be: 1) ready for publishing; and 2) ready for searching.

The idea is to use the well-known, publicly available "hello-world" image, which at 960 B is more than lightweight enough to try!

$ docker pull hello-world
$ docker images
...
hello-world    latest                     690ed74de00f        11 months ago       960 B

$ docker tag 690ed74de00f 192.168.56.118:5000/hello-world
$ docker images
...
192.168.56.118:5000/hello-world    latest  690ed74de00f        11 months ago       960 B
hello-world                        latest  690ed74de00f        11 months ago       960 B

$ docker push 192.168.56.118:5000/hello-world
The push refers to a repository [192.168.56.118:5000/hello-world]
5f70bf18a086: Image successfully pushed
b652ec3a27e7: Image successfully pushed
Pushing tag for rev [690ed74de00f] on {http://192.168.56.118:5000/v1/repositories/hello-world/tags/latest}

$ curl http://192.168.56.118:5000/v1/search?q=hello
{"num_results": 1, "query": "hello", "results": 
 [
  {"description": "", "name": "library/hello-world"}
 ]
}

Sure enough, our private Docker Registry is fully ready!

Okay, so far we have a private Docker Registry and can already do a basic Maven build in Jenkins. What should we do next?

Enable the docker-maven-plugin

It's time to enable our glue, the docker-maven-plugin, in our project.
The purposes of using this plugin are to:

  1. Automate the Docker image build process on top of our typical build, whose output might be a jar or war. Now we want a Docker image equipped with our application as the deliverable!
  2. Publish the built Docker image to our private Docker Registry, which is the starting point of our Continuous Deployment!
(Frankly speaking, there were many issues encountered along the way, like here, but to make a long story short, I'll go directly to the points.)


Firstly, please refer to my previous blog "Some Practices to Run Maven + Jenkins In Docker Container" here for the basic setup of the environment.
The successful experiment below builds on top of that environment.

Explicitly Specify Docker Host in /etc/default/docker

Edit the /etc/default/docker as below:
#DOCKER_OPTS="--insecure-registry 192.168.56.118:5000"
DOCKER_OPTS="--insecure-registry 192.168.56.118:5000 -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"

And restarting the Docker daemon is required, of course:
$ sudo service docker restart

Enable the docker-maven-plugin In pom.xml

There are many Docker Maven plugins available; I simply chose Spotify's one, here.
The configuration of the plugin in pom.xml is as below:
<!-- Docker image building plugin -->
<plugin>
 <groupId>com.spotify</groupId>
 <artifactId>docker-maven-plugin</artifactId>
 <version>0.4.13</version>
 <executions>
  <execution>
   <phase>package</phase>
   <goals>
    <goal>build</goal>
   </goals>
  </execution>
 </executions>
 <configuration>
  <dockerHost>${docker.host}</dockerHost>
  
  <baseImage>frolvlad/alpine-oraclejdk8:slim</baseImage>
  <imageName>${docker.registry}/devops/${project.artifactId}</imageName>
  <entryPoint>["java", "-jar", "/${project.build.finalName}.jar"]</entryPoint>
  
  <resources>
     <resource>
    <targetPath>/</targetPath>
    <directory>${project.build.directory}</directory>
    <include>${project.build.finalName}.jar</include>
     </resource>
  </resources>
  
  <forceTags>true</forceTags>
  <imageTags>
     <imageTag>${project.version}</imageTag>
  </imageTags>
 </configuration>
</plugin>
Note: for the complete pom.xml file, please refer to my GitHub repository, here.
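
One more note: the plugin configuration above references the ${docker.host} and ${docker.registry} properties, which are not shown here. As an assumption matching the environment used in this series (the Docker daemon exposed on TCP port 4243 and the private registry at 192.168.56.118:5000), they could be declared in the <properties> section of the same pom.xml:

<properties>
 <!-- Docker daemon endpoint exposed via DOCKER_OPTS (-H tcp://0.0.0.0:4243) above -->
 <docker.host>tcp://192.168.56.118:4243</docker.host>
 <!-- Our private Docker Registry -->
 <docker.registry>192.168.56.118:5000</docker.registry>
</properties>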

The Jenkins Job

The screenshots below show the major configuration parts of the Jenkins job.

Source Code Management


Build Triggers


Maven Goals
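
(The screenshot is not reproduced here. Since the plugin's build goal is bound to the package phase above, the Maven goals in this step would typically be something like the line below; -DpushImage is a standard option of Spotify's docker-maven-plugin that pushes the freshly built image to the registry. Treat this as an assumption and check the Console Output gist for the authoritative run.)

clean package docker:build -DpushImage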


Console Output

Here I also posted a typical Console Output of the Jenkins Job for reference as a Gist here.

Have A Check On What We Have

Firstly we can check whether our local repository has the cached image:
$ docker images
REPOSITORY                                             TAG                        IMAGE ID            CREATED             SIZE
...
192.168.56.118:5000/devops/springboot-jersey-swagger   1.0.0-SNAPSHOT             203fde08df84        44 minutes ago      191.9 MB
...

Great. Then check whether this image has been registered in our private Docker Registry:
$ curl http://192.168.56.118:5000/v1/search?q=springboot-jersey-swagger
{"num_results": 1, "query": "springboot-jersey-swagger", "results": 
 [
  {"description": "", "name": "devops/springboot-jersey-swagger"}
 ]
}

Yeah! We made it!

Try Our Docker Image Out Also

To go deeper, we can also check whether our application actually works:
docker run -d -p 8000:8000 192.168.56.118:5000/devops/springboot-jersey-swagger

Open the browser and key in: http://192.168.56.118:8000


Hooray, it works perfectly!

Conclusion

From this experiment we have learned that:
  1. By using a customized Docker container, we can put Jenkins, Maven and docker-maven-plugin together to automate our project build process and the deliverable can be our desired Docker image;
  2. The Docker image can be pushed automatically to our private Docker Registry within the pipeline, which will enable us to make Continuous Deployment happen soon.

So the CI part is done and let's look forward to our CD part soon!
Enjoy your CI experiments also!


Wednesday, 28 September 2016

Some Practices to Run Maven + Jenkins In Docker Container

Background

In the Java world, it's very common to use Apache Maven as the build infrastructure to manage the build process.
What if we want to make it more interesting by bringing Docker and Jenkins into the picture?

Issues We Have To Tackle

There is already a pre-built Jenkins image for Docker, here.
But some known permission issues keep making noise, like what somebody mentioned here, when mounting a host folder as a volume for the generated data -- and typically that's a good practice.

Meanwhile, Maven will try to download all necessary artifacts from Maven repositories -- be it Internet-based public repositories or in-house private repositories -- to a local folder that acts as a cache to speed up the build process. But this cache may be huge (gigabytes), so it's not practical to bake it into the Docker container.
Is it possible to share it from a host folder instead, so that even when we spin up new containers these cached artifacts are still there?

So these major issues/concerns should be fully addressed when thinking about the solution. In summary, they are:

  • How can we achieve the goal of having more control and flexibility while using Jenkins?
  • Can we share resources from host so that we can always keep Docker container lightweight?

 

Tools We're Going to Use

Before we deep-dive into detailed solutions, we assume that the tools below have already been installed and configured properly on our host server (e.g. mine is Ubuntu 14.x, 64-bit):
  • Docker (e.g. mine is version 1.11.0, build 4dc5990)
  • Docker Compose (e.g. mine is version 1.6.2, build 4d72027)
  • JDK (e.g. mine is version 1.8.0_102)
  • Apache Maven (e.g. mine is version 3.3.3)

 

Solutions & Practices

1. Customize Our Jenkins Docker Image To Gain More Control and Flexibility

It's pretty straightforward to think about building our own docker image for Jenkins so that we can have more control and flexibility. Why not?

$ whoami
devops
$ pwd
/home/devops
$ mkdir jenkins
$ cd jenkins
$ touch Dockerfile
$ vim Dockerfile

~/jenkins/Dockerfile 
FROM jenkins

MAINTAINER Bright Zheng <bright.zheng AT outlook DOT com>

USER root

RUN GOSU_SHA=5ec5d23079e94aea5f7ed92ee8a1a34bbf64c2d4053dadf383992908a2f9dc8a \
        && curl -sSL -o /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.9/gosu-$(dpkg --print-architecture)" \
        && chmod +x /usr/local/bin/gosu && echo "$GOSU_SHA  /usr/local/bin/gosu" | sha256sum -c -

COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

I believe some explanation is required here:
  1. The image we're going to build is based on the official jenkins Docker image;
  2. We set up a simple but powerful tool, gosu, which can be used later for running specific scripts as a specific user;
  3. We use a dedicated entry script file, namely "entrypoint.sh", for our customizable scripts.

Okay, time to focus on our entry point:
$ touch entrypoint.sh
$ vim entrypoint.sh

~/jenkins/entrypoint.sh
#! /bin/bash
set -e

# chown on the fly
chown -R 1000 "$JENKINS_HOME"
chown -R 1000 "/data/mvn_repo"

# chmod for jdk and maven
chmod +x /opt/maven/bin/mvn
chmod +x /opt/jdk/java_home/bin/java
chmod +x /opt/jdk/java_home/bin/javac

exec gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh

Some more explanation, as the important tricks come from here:
  1. Yes, we change $JENKINS_HOME's owner to the jenkins user (with uid=1000), which has been hardcoded in the official Jenkins Dockerfile, here;
  2. We expose a dedicated folder, "/data/mvn_repo", as the "home" folder for the cached Maven artifacts, which should be mounted from the host as a Docker volume;
  3. We grant execute permission to some Maven and JDK executables, which are mounted and shared from the host as well!
Now it's ready; let's build our customized Docker image -- we tag it as devops/jenkinsci.

docker build -t devops/jenkinsci .

BTW, if we run it behind a proxy, some build-args are required as part of our command, as in the example below:

docker build \
 --build-arg https_proxy=http://192.168.56.1:3128 \
 --build-arg http_proxy=http://192.168.56.1:3128 \
 -t devops/jenkinsci .

Once it succeeds, we can use the command below to verify whether the newly built Docker image is in the local Docker repository:

$ docker images

Now we have successfully got our customized Docker image for Jenkins.

2. Engage Docker Compose

What's next? Well, it's time to engage Docker Compose to spin up our container(s).
(BTW, if there is just one container to spin up, simply using the docker run command works as well. But considering that more types of components will be involved, Docker Compose is definitely a better choice...)

Let's prepare the docker-compose.yml:
$ pwd
/home/devops
$ touch docker-compose.yml
$ vim docker-compose.yml

~/docker-compose.yml
# Jenkins CI Server
jenkins:
  # 1 - Yeah, we use our customized Jenkins image
  image: devops/jenkinsci
  volumes:
    # 2 - JENKINS_HOME
    - /data/jenkins:/var/jenkins_home

    # 3 - Share host's resources to container
    - /opt/jdk/java_home:/opt/jdk/java_home
    - /opt/maven:/opt/maven
    - /data/mvn_repo:/data/mvn_repo

    # 4 - see explanation
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker

    # 5 - see explanation
    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
  ports:
    - "8081:8080"

Again, the explanation is as below:
  1. We employ our own customized Jenkins image;
  2. We mount the host's /data/jenkins folder to /var/jenkins_home, which is Jenkins' default JENKINS_HOME location;
  3. We share the host's JDK, Maven and Maven local repository with the container by mounting them as volumes;
  4. This comes from the instructions of the great Docker sharing solution (I'd like to call it "Docker with Docker", haha), see here;
  5. Some lib components have to be shared for the above "Docker with Docker" solution.
To start it up, simply key in:
$ docker-compose up

After a few seconds, our Jenkins container will be ready for use; simply open the browser of your choice and access a URL like http://{IP}:8081/ -- a brand new, fully customized Jenkins will show up in front of you.

What's next? The Jenkins console will be your workspace, where creating and configuring build projects are the work items that get CI going.
You may refer to Jenkins documentation to get started.

Conclusion

From these practices, some useful points we have learned and gained are:
  1. We can gain more benefits (e.g. control, flexibility) if we customize our Jenkins image, and the customization is pretty easy;
  2. We can share a lot of resources from our host, from libs, JDK, Maven to any other potentially heavy/big stuff, so that we can always make our Docker container as lightweight as possible.

Interesting right? Let me know if you have any comments.