Jenkins is an open source tool for Continuous Integration and Deployment that makes building and testing your applications easier and faster with automation.
This article demonstrates how to set up a build job on your existing Jenkins machine that deploys to a Google Kubernetes Engine (GKE) cluster, without having to set up another machine on GKE.
GKE
Google Kubernetes Engine is a cluster orchestration and management service from Google that helps you deploy and scale your Docker containers and container clusters running on Google Cloud. It is based on Kubernetes, Google's open source container management system, and lets you interact with your cluster, which under the hood consists of a group of Compute Engine instances. A cluster can have multiple node pools, and each node pool contains multiple nodes. Furthermore, a node can serve multiple workloads, and each of these workloads contains multiple pods. A pod contains the container that runs your built image. Here's a handy comic that explains some of the core concepts of GKE and the features it offers.
Prerequisites
- GKE Cluster running on any region
- Jenkins server
- A Docker Image in Google Container Registry (GCR)
Prepare Jenkins Machine
- Log in to GCP and create a service account with the following roles, and download the JSON credentials (a gcloud sketch for this follows below):
– Cloud Build Admin or Cloud Build Service Account
– Kubernetes Engine Admin/Service Account
– Storage Admin
– Cloud Run Admin
- SSH into your Jenkins server and log in as the jenkins user. If the user is not present, you can add it by using:
sudo adduser jenkins #create the jenkins user if it does not exist
sudo passwd jenkins #if no password is set for the jenkins user
sudo -u jenkins /bin/bash #to switch to the jenkins user
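If you prefer the command line, the service account, role bindings, and key can also be created with gcloud. This is only a sketch: the account name jenkins-deployer is a placeholder, and you should repeat the policy binding for each role you need (roles/container.admin, the Kubernetes Engine Admin role, is shown here).
gcloud iam service-accounts create jenkins-deployer --display-name "Jenkins deployer"
# Grant a role to the service account (repeat for each role listed above)
gcloud projects add-iam-policy-binding <project-name> \
    --member "serviceAccount:jenkins-deployer@<project-name>.iam.gserviceaccount.com" \
    --role roles/container.admin
# Download the JSON key used later with 'gcloud auth activate-service-account'
gcloud iam service-accounts keys create key.json \
    --iam-account jenkins-deployer@<project-name>.iam.gserviceaccount.com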
Installation
- Install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
- Google Cloud SDK: https://cloud.google.com/sdk/docs/install
- Docker: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
If you have trouble running these commands without sudo as the jenkins user,
refer to Step 2 of the Docker guide above, which explains how to execute the docker command without sudo.
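As a rough sketch, on an Ubuntu 18.04 Jenkins machine the three tools can be installed as follows; package names and repository details may differ for your setup, so treat the linked guides as authoritative.
# kubectl (Linux x86_64 binary; see the kubectl docs for other platforms)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
# Google Cloud SDK from Google's apt repository
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | \
    sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
    sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install -y google-cloud-sdk
# Docker, plus allowing the jenkins user to run it without sudo
sudo apt-get install -y docker.io
sudo usermod -aG docker jenkins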
GCloud Configuration
- Set project
gcloud config set project <project-name>
- Activate the service account created above:
gcloud auth activate-service-account <service-account-email> --key-file [KEY_FILE]
The key file is the JSON file downloaded when the service account was created.
- To connect to the cluster, ~/.kube should contain a config YAML for the cluster. For this, go to the Kubernetes cluster in the console, click Connect, copy the command, and execute it as the jenkins user:
gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>
Verify that the credentials are working by running any kubectl command, e.g. kubectl get deployment
- To register gcloud as the Docker credential helper:
gcloud auth configure-docker
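You can check that the credential helper is working by pulling an image from your GCR project; the image name and tag here are only placeholders.
docker pull gcr.io/<project-name>/nginx:latest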
Kubernetes Setup
This step can be skipped if you are using an existing cluster with a deployment running.
Cluster Creation
- Go to GCP and search for Kubernetes Engine in the left side panel. In the opened view, click on Clusters in the sidebar and click Create.
- In the create view, choose an appropriate name for your cluster, then select the node pool from the left panel and add its machine configuration. You can enable autoscaling, set the minimum/maximum number of nodes to scale to, and limit the maximum number of pods per node.
- In the Nodes tab, select the machine configuration based on your usage. These node settings act as a template that is used whenever new nodes are created in the same node pool. The maximum boot disk size at the time of writing is 100 GB, and it cannot be changed later.
- Also set the maximum number of pods per node in the Networking tab. By default this is set to 8.
- Hit Create.
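If you prefer the command line, a roughly equivalent cluster can be created with gcloud. This is a sketch; the machine type, node counts, and pod limit below are placeholder values to adjust for your workload.
gcloud container clusters create <cluster-name> \
    --zone <zone-name> \
    --machine-type n1-standard-2 \
    --enable-autoscaling --min-nodes 1 --max-nodes 3 \
    --max-pods-per-node 8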
Creating Deployment
- In the Kubernetes Engine side panel, click on Workloads, just beneath the Clusters option, and click Deploy.
- Add your existing GCR image.
- Configure the deployment/application name (DO NOT use “_” in the deployment name) and click Deploy.
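The same deployment can also be created from the Jenkins machine with kubectl; this is a sketch with placeholder names for the deployment and image.
kubectl create deployment <deployment-name> --image=gcr.io/<project-name>/<image-name>:<tag>
kubectl get pods   # verify that the pods come up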
Deployment Actions
Autoscale: Set autoscaling settings here: the minimum number of pods required, the maximum number of pods to scale to, and custom triggers for scaling.
Scale: Manually scale the number of pods.
Expose: Exposes the service on a port.
Rolling update: Updates an existing deployment with a new image.
For this setup we'll use a rolling update, assuming you already have a deployment in place that needs to be updated with a new image.
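For reference, each of these actions also has a kubectl equivalent that can be run from the Jenkins machine; the replica counts, port, and names below are placeholders.
kubectl autoscale deployment <deployment-name> --min=1 --max=5 --cpu-percent=80    # Autoscale
kubectl scale deployment <deployment-name> --replicas=3                            # Scale
kubectl expose deployment <deployment-name> --type=LoadBalancer --port=80          # Expose
kubectl set image deployment/<deployment-name> nginx=gcr.io/<project-name>/nginx:<tag> --record    # Rolling update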
Jenkins Build Job
- Go to Jenkins, click on New Item on the left side, and create a new freestyle project.
- In the build step, select the Execute Shell option and add:
# cd into the working directory containing the Dockerfile
cd <work-dir>
# Activate credentials for the service account (first run only; can also be done once directly on the Jenkins machine)
gcloud auth activate-service-account <service-account-email> --key-file [KEY_FILE]
# Get credentials for the cluster
gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>
# Build the image with Cloud Build and push it to GCR
gcloud builds submit --tag gcr.io/<project-name>/nginx:${version}
# To create a new deployment
kubectl create deployment <deployment-name> --image=gcr.io/<project-name>/nginx:${version}
# For a rolling update (the image must match the tag built above)
kubectl set image deployment/<app_name> nginx=gcr.io/<project-name>/nginx:${version} --record
Note
- You can use the Jenkins ${BUILD_NUMBER} variable for incremental tagging of deployments.
- The same build tag cannot be used to rebuild an image; doing so will result in a Jenkins build failure.
- Docker needs to be up and running on the Jenkins machine before triggering this job.
- This deployment is a rolling update, so a deployment with that name should exist before triggering the job. For more on rolling updates: https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps
- For a manual rollback, you can parameterize version in the build job and use a previous version (see the rollout sketch after this list).
- To check logs for a running instance, go to GKE > Workloads > the workload name and select a pod. Click on View logs.
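As an alternative to parameterizing the version, Kubernetes can roll a deployment back to its previous revision directly; a minimal sketch with a placeholder deployment name:
kubectl rollout history deployment/<deployment-name>   # list recorded revisions
kubectl rollout undo deployment/<deployment-name>      # roll back to the previous revision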
Conclusion
This demonstrates a simple CI/CD workflow with Jenkins, Docker, and GKE. The main benefit is the flexibility to extend it to more images as your development needs grow, without having to set up a dedicated machine on the GKE cluster.
Thanks for reading! Don’t forget to follow us on Twitter or reach out to us at info@bewgle.com.