Piotr's TechBlog - Java, Spring, Kotlin, microservices, Kubernetes, containers

Gitlab CI/CD on Kubernetes
https://piotrminkowski.com/2020/10/19/gitlab-ci-cd-on-kubernetes/
Mon, 19 Oct 2020

You can use GitLab CI/CD to build and deploy your applications on Kubernetes. It is not hard to integrate GitLab with Kubernetes. You can take advantage of the GUI support to set up a connection with your Kubernetes cluster. Furthermore, GitLab CI provides a built-in container registry to store and share images.

Preface

In this article, I will describe all the steps required to build and deploy your Java application on Kubernetes with GitLab CI/CD. First, we are going to run an instance of the GitLab server on the local Kubernetes cluster. Then, we will use the special GitLab features in order to integrate it with Kubernetes. After that, we will create a pipeline for our Maven application. You will learn how to build it, run automated tests, build a Docker image, and finally run it on Kubernetes with GitLab CI.
In this article, I’m going to focus on simplicity. I will show you how to run GitLab CI on Kubernetes with minimal effort and resources. With this in mind, I hope it will help you form an opinion about GitLab CI/CD. A more advanced, production-grade installation can easily be performed with Helm. I will also describe that approach in the next section.

Run GitLab on Kubernetes

In order to easily start with GitLab CI on Kubernetes, we will use its image from the Docker Hub. We may choose between community and enterprise editions. First, we need to change the default external URL. We will also enable the container registry feature. To override both these settings we need to add the environment variable GITLAB_OMNIBUS_CONFIG. It can contain any of the GitLab configuration properties. Since I’m running the Kubernetes cluster locally, I also need to override the default URL of the Docker registry. You can easily get it by running the command docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' registry. The local image registry is running outside the Kubernetes cluster as a simple Docker container with the name registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-deployment
spec:
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: gitlab/gitlab-ee
        env:
          - name: GITLAB_OMNIBUS_CONFIG
            value: "external_url 'http://gitlab-service.default/';gitlab_rails['registry_enabled'] = true;gitlab_rails['registry_api_url'] = \"http://172.17.0.2:5000\""
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
          - mountPath: /var/opt/gitlab
            name: data
      volumes:
        - name: data
          emptyDir: {}

Then we will also create the Kubernetes Service gitlab-service. The GitLab UI is available on port 80. We will use that service to access the GitLab UI from outside the Kubernetes cluster.

apiVersion: v1
kind: Service
metadata:
  name: gitlab-service
spec:
  type: NodePort
  selector:
    app: gitlab
  ports:
  - port: 80
    targetPort: 80
    name: http

Finally, we can apply the GitLab manifest with Deployment and Service to Kubernetes. To do that you just need to execute the command kubectl apply -f k8s/gitlab.yaml. Let’s check the address of gitlab-service.
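To find the exact address, you can look up the node IP and the NodePort assigned by Kubernetes from the command line. Here is a minimal sketch, assuming a local Minikube cluster (the minikube command is an assumption; gitlab-service is the service defined above):

```shell
# Compose the external GitLab UI address from the node IP and the assigned NodePort.
# Assumptions: Minikube as the local cluster, gitlab-service in the default namespace.
NODE_IP=$(minikube ip)
NODE_PORT=$(kubectl get svc gitlab-service \
  -o jsonpath='{.spec.ports[0].nodePort}')
echo "GitLab UI: http://${NODE_IP}:${NODE_PORT}"
```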

Of course, we may also install GitLab on Kubernetes with Helm. However, you should keep in mind that it will generate a full deployment with some core and optional components. Consequently, you will have to increase the RAM assigned to your cluster to around 15 GB to be able to run it. Here’s a list of the required Helm commands. For more details, you may refer to the GitLab documentation.

$ helm repo add gitlab https://charts.gitlab.io/
$ helm repo update
$ helm upgrade --install gitlab gitlab/gitlab \
  --timeout 600s \
  --set global.hosts.domain=example.com \
  --set global.hosts.externalIP=10.10.10.10 \
  --set certmanager-issuer.email=me@example.com

Clone the source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my repository sample-spring-boot-on-kubernetes. Then you should just follow my instructions 🙂

First, let’s create a new repository on GitLab. Of course, its name is sample-spring-boot-on-kubernetes as shown below.

gitlab-on-kubernetes-create-repo

Then, you should clone my example repository, and move it to your GitLab instance running on Kubernetes. Assuming the address of my local instance of GitLab is http://localhost:30129, I need to execute the following command.

$ git remote add gitlab http://localhost:30129/root/sample-spring-boot-on-kubernetes.git

To clarify, let’s display a list of Git remotes for the current repository.

$ git remote -v
gitlab  http://localhost:30129/root/sample-spring-boot-on-kubernetes.git (fetch)
gitlab  http://localhost:30129/root/sample-spring-boot-on-kubernetes.git (push)
origin  https://github.com/piomin/sample-spring-boot-on-kubernetes.git (fetch)
origin  https://github.com/piomin/sample-spring-boot-on-kubernetes.git (push)

Finally, we can push the source code to the GitLab repository using the gitlab remote.

$ git push gitlab

Configure GitLab integration with Kubernetes

After logging in to the GitLab UI, you should enable local HTTP requests. To do that, you need to go to the admin section. Then click “Settings” -> “Network” -> “Outbound requests”. Finally, you need to check the box “Allow requests to the local network from web hooks and services”. We will use internal communication between GitLab and the Kubernetes API, and between the GitLab CI runner and the GitLab master.

Now, we may configure the connection to the Kubernetes API. To do that you should go to the section “Kubernetes”, then click “Add Kubernetes cluster”, and finally switch to the tab “Connect existing cluster”. We need to provide some basic information about our cluster in the form. The name is required, so I’m setting the same name as my Kubernetes context. You may leave the default value in the “Environment scope” field. In the “API URL” field I’m providing the internal address of the Kubernetes API. It is https://kubernetes.default:443.

We also need to paste the cluster CA certificate. In order to obtain it, you should first find the secret with the prefix default-token-, and decode the certificate with the following command.

$ kubectl get secret default-token-ttswt -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
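The suffix of the default-token secret (ttswt above) is random and differs per cluster, so you can resolve the secret name first instead of looking it up by hand. A sketch, not from the original post:

```shell
# Resolve the randomly suffixed default-token secret, then decode its CA certificate.
SECRET=$(kubectl get secrets -o name | grep default-token | head -n 1)
kubectl get "$SECRET" -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
```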

Finally, we should create a special ServiceAccount for GitLab with the cluster-admin role.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system

Then you should find the secret with the prefix gitlab- in the kube-system namespace, and display its details. After that, we need to copy the value of the token field, and paste it into the form without decoding.

$ kubectl describe secret gitlab-token-5sk2v -n kube-system
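Instead of scanning the describe output, you can extract just the token field with jsonpath. Note that, unlike kubectl describe, the raw secret data is base64-encoded, so it has to be decoded before pasting. A sketch (the secret name suffix is random, as above):

```shell
# Print only the service account token; jsonpath returns secret data base64-encoded,
# so decode it before pasting into the GitLab form.
kubectl get secret gitlab-token-5sk2v -n kube-system \
  -o jsonpath='{.data.token}' | base64 --decode
```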

Here’s the full information about our Kubernetes cluster required by GitLab. Let’s add it by clicking “Add Kubernetes cluster”.

Once you have successfully added the new Kubernetes cluster in the GitLab UI you need to display its details. In the tab “Applications” you should find the section “GitLab runner” and install it.

The GitLab runner is automatically deployed in the namespace gitlab-managed-apps. We can verify that it started successfully.

$ kubectl get pod -n gitlab-managed-apps
NAME                                   READY        STATUS    RESTARTS       AGE
runner-gitlab-runner-5649dbf49-5mnjv   1/1          Running   0              5m56s

The GitLab runner tries to communicate with the GitLab master. To verify that everything works fine, we need to go to the section “Overview” -> “Runners”. If you see the IP address and version number, it means that the runner is able to communicate with the master. In case of any problems, you should take a look at the pod logs.

Create application pipeline

The GitLab CI/CD configuration file is available in the project root directory. Its name is .gitlab-ci.yml, and it is automatically detected by GitLab CI. It consists of five stages and uses the Maven Docker image for executing builds. Let’s take a closer look at it.

First, we run the build stage responsible for building the application from the source code. It just runs the command mvn compile. Then, we run JUnit tests using the mvn test command. If all the tests pass, we may build a Docker image with our application. We use the Jib Maven plugin for it. It is able to build an image in daemonless mode, so we don’t have to run a Docker daemon alongside the build. Jib builds an image and pushes it to the Docker registry. Finally, we can deploy our container on Kubernetes. To do that we are using the bitnami/kubectl image. It allows us to execute kubectl commands. In the first step, we deploy the application in the test namespace. The last stage deploy-prod requires manual approval. Both deploy stages run only for the master branch.

image: maven:latest

stages:
  - build
  - test
  - image-build
  - deploy-tb
  - deploy-prod

build:
  stage: build
  script:
    - mvn compile

test:
  stage: test
  script:
    - mvn test

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml -P jib compile jib:build

deploy-tb:
  image: bitnami/kubectl:latest
  stage: deploy-tb
  only:
    - master
  script:
    - kubectl apply -f k8s/deployment.yaml -n test

deploy-prod:
  image: bitnami/kubectl:latest
  stage: deploy-prod
  only:
    - master
  when: manual
  script:
    - kubectl apply -f k8s/deployment.yaml -n prod

We may push our application image to a remote or a local Docker registry. If you do not pass any address, by default Jib tries to push the image to the docker.io registry.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.4.0</version>
   <configuration>
      <to>piomin/sample-spring-boot-on-kubernetes</to>
   </configuration>
</plugin>

In order to push images to the docker.io registry, we need to provide authentication credentials in the Maven settings.xml file.

<servers>
   <server>
      <id>registry-1.docker.io</id>
      <username>${DOCKER_LOGIN}</username>
      <password>${DOCKER_PASSWORD}</password>
   </server>
</servers>

Here’s a similar configuration, but for the local instance of the registry.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.4.0</version>
   <configuration>
      <allowInsecureRegistries>true</allowInsecureRegistries>
      <to>172.17.0.2:5000/root/sample-spring-boot-on-kubernetes</to>
   </configuration>
</plugin>

Run GitLab CI pipeline on Kubernetes

Finally, we may run our GitLab CI/CD pipeline. We can push the change in the source code or just run the build manually. You can see the result for the master branch in the picture below.

gitlab-on-kubernetes-pipeline-finished

The last stage deploy-prod requires manual approval. We may confirm it by clicking the “Play” button.

gitlab-on-kubernetes-pipeline-approval

If you push changes to a branch other than master, the pipeline will run just three stages. It will build the application, run the tests, and build a Docker image.

gitlab-on-kubernetes-dev-build

You can also take advantage of the integrated container registry. You just need to set the right name for the Docker image. It should contain the GitLab owner name and the image name. In this case, it is root/sample-spring-boot-on-kubernetes. I’m using my local Docker registry available at 172.17.0.2:5000.

Conclusion

GitLab seems to be a very interesting tool for building CI/CD processes on Kubernetes. It provides built-in integration with Kubernetes and a Docker container registry. Its documentation is of very high quality. In this article, I tried to show you that it is relatively easy to build CI/CD pipelines for Maven applications on Kubernetes. If you are interested in building a full CI/CD environment you may refer to the article How to setup continuous delivery environment. Enjoy 🙂

Local Continuous Delivery Environment with Docker and Jenkins
https://piotrminkowski.com/2018/06/12/local-continuous-delivery-environment-with-docker-and-jenkins/
Tue, 12 Jun 2018

In this article I’m going to show you how to set up a continuous delivery environment for building Docker images of our Java applications on the local machine. Our environment will consist of GitLab (optional, otherwise you can use hosted GitHub), a Jenkins master, a Jenkins JNLP slave with Docker, and a private Docker registry. All those tools will run locally using their Docker images. Thanks to that you will be able to easily test the setup on your laptop, and then configure the same environment in production, deployed on multiple servers or VMs. Let’s take a look at the architecture of the proposed solution.

art-docker-1

1. Running Jenkins Master

We use the latest Jenkins LTS image. The Jenkins web dashboard is exposed on port 38080. Slave agents connect to the master on the default JNLP (Java Web Start) port 50000.

$ docker run -d --name jenkins -p 38080:8080 -p 50000:50000 jenkins/jenkins:lts

After starting, you have to execute the command docker logs jenkins in order to obtain the initial admin password. Find the following fragment in the logs, copy the generated password and paste it on the Jenkins start page available at http://192.168.99.100:38080.

art-docker-2

We have to install some Jenkins plugins to be able to check out projects from a Git repository, build applications from source code using Maven, and finally build and push a Docker image to a private registry. Here’s a list of the required plugins:

  • Git Plugin – this plugin allows you to use Git as a build SCM
  • Maven Integration Plugin – this plugin provides advanced integration for Maven 2/3
  • Pipeline Plugin – this is a suite of plugins that allows you to create continuous delivery pipelines as code, and run them in Jenkins
  • Docker Pipeline Plugin – this plugin allows you to build and use Docker containers from pipelines

2. Building Jenkins Slave

Pipelines usually run on different machines than the master node. Moreover, we need a Docker engine installed on that slave machine to be able to build Docker images. Although there are some ready-made Docker images with Docker-in-Docker and the Jenkins client agent, I have never found one with JDK, Maven, Git and Docker installed together. This combination is commonly needed when building images for microservices, so it is definitely worth preparing such an image.

Here’s the Dockerfile for the Jenkins Docker-in-Docker slave with Git, Maven and OpenJDK installed. I used Docker-in-Docker as the base image (1). We can override some properties when running the container. You will probably have to override the default Jenkins master address (2) and the slave secret key (3). The rest of the parameters are optional, and you can even decide to use an external Docker daemon by overriding the DOCKER_HOST environment variable. We also download and install Maven (4) and create a user with special sudo rights for running Docker (5). Finally we run the entrypoint.sh script, which starts the Docker daemon and the Jenkins agent (6).

# (1) Docker-in-Docker base image
FROM docker:18-dind
MAINTAINER Piotr Minkowski
# (2) Default Jenkins master address, referenced by entrypoint.sh -- override at runtime
ENV JENKINS_URL http://localhost:8080
ENV JENKINS_SLAVE_NAME dind-node
# (3) Slave secret key -- override at runtime
ENV JENKINS_SLAVE_SECRET ""
ENV JENKINS_HOME /home/jenkins
ENV JENKINS_REMOTING_VERSION 3.17
ENV DOCKER_HOST tcp://0.0.0.0:2375
RUN apk --update add curl tar git bash openjdk8 sudo

# (4) Maven version to install
ARG MAVEN_VERSION=3.5.2
ARG USER_HOME_DIR="/root"
ARG SHA=707b1f6e390a65bde4af4cdaf2a24d45fc19a6ded00fff02e91626e3e42ceaff
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries

RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
  && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
  && echo "${SHA}  /tmp/apache-maven.tar.gz" | sha256sum -c - \
  && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
  && rm -f /tmp/apache-maven.tar.gz \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# (5)
RUN adduser -D -h $JENKINS_HOME -s /bin/sh jenkins jenkins && chmod a+rwx $JENKINS_HOME
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/dockerd" > /etc/sudoers.d/00jenkins && chmod 440 /etc/sudoers.d/00jenkins
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/docker" > /etc/sudoers.d/01jenkins && chmod 440 /etc/sudoers.d/01jenkins
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar http://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/$JENKINS_REMOTING_VERSION/remoting-$JENKINS_REMOTING_VERSION.jar && chmod 755 /usr/share/jenkins && chmod 644 /usr/share/jenkins/slave.jar

COPY entrypoint.sh /usr/local/bin/entrypoint
VOLUME $JENKINS_HOME
WORKDIR $JENKINS_HOME
USER jenkins
# (6) Start the Docker daemon and the Jenkins agent
ENTRYPOINT ["/usr/local/bin/entrypoint"]

Here’s the script entrypoint.sh.

#!/bin/sh
set -e
echo "starting dockerd..."
sudo dockerd --host=unix:///var/run/docker.sock --host=$DOCKER_HOST --storage-driver=vfs &
echo "starting jnlp slave..."
exec java -jar /usr/share/jenkins/slave.jar \
	-jnlpUrl $JENKINS_URL/computer/$JENKINS_SLAVE_NAME/slave-agent.jnlp \
	-secret $JENKINS_SLAVE_SECRET

The source code with the image definition is available on GitHub. You can clone the repository https://github.com/piomin/jenkins-slave-dind-jnlp.git, build the image and then start the container using the following commands.

$ docker build -t piomin/jenkins-slave-dind-jnlp .
$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=5664fe146104b89a1d2c78920fd9c5eebac3bd7344432e0668e366e2d3432d3e -e JENKINS_SLAVE_NAME=dind-node-1 -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

Building it is just an optional step, because the image is already available on my Docker Hub account.

art-docker-3

3. Enabling Docker-in-Docker Slave

To add a new slave node you need to navigate to the section Manage Jenkins -> Manage Nodes -> New Node. Then define a permanent node with the name parameter filled. The most suitable name is the default name declared inside the Docker image definition – dind-node. You also have to set the remote root directory, which should be equal to the path defined for the JENKINS_HOME environment variable inside the container. In my case it is /home/jenkins. The slave node should be launched via Java Web Start (JNLP).

art-docker-4

The new node is visible in the list of nodes as disabled. You should click it in order to obtain its secret key.

art-docker-5

Finally, you may run your slave container using the following command containing a secret copied from the node’s panel in the Jenkins web dashboard.

$ docker run --privileged -d --name slave -e JENKINS_SLAVE_SECRET=fd14247b44bb9e03e11b7541e34a177bdcfd7b10783fa451d2169c90eb46693d -e JENKINS_URL=http://192.168.99.100:38080 piomin/jenkins-slave-dind-jnlp

If everything went according to plan you should see enabled node dind-node in the node’s list.

art-docker-6

4. Setting up Docker Private Registry

After deploying the Jenkins master and slave, the last required element of the architecture has to be launched: the private Docker registry. Because we will access it remotely (from the Docker-in-Docker container) we have to configure a secure TLS/SSL connection. To achieve that we should first generate a TLS certificate and key. We can use the openssl tool for it. We begin by generating a private key.

$ openssl genrsa -des3 -out registry.key 2048

Then, we should generate a certificate request file (CSR) by executing the following command.

$ openssl req -new -key registry.key -out registry.csr

Finally, we can generate a self-signed SSL certificate valid for one year using the openssl command shown below.

$ openssl x509 -req -days 365 -in registry.csr -signkey registry.key -out registry.crt

Don’t forget to remove the passphrase from your private key.

$ openssl rsa -in registry.key -out registry-nopass.key -passin pass:123456
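As an alternative to the four steps above (an aside, not the method from the original text), openssl can produce an unencrypted key and a self-signed certificate in a single command, skipping the CSR and the passphrase removal; the CN value here is an example:

```shell
# Generate a 2048-bit key without a passphrase and a self-signed certificate
# valid for one year, in a single step. Replace the CN with your registry host name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=registry.local" \
  -keyout registry-nopass.key -out registry.crt
```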

You should copy the generated .key and .crt files to your Docker machine. After that you may run the Docker registry using the following command.

$ docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt -e REGISTRY_HTTP_TLS_KEY=/certs/registry-nopass.key registry:2

If the registry has been successfully started you should be able to access it over HTTPS by opening the address https://192.168.99.100:5000/v2/_catalog in your web browser.

5. Creating application Dockerfile

The sample application’s source code is available on GitHub in the repository sample-spring-microservices-new (https://github.com/piomin/sample-spring-microservices-new.git). There are several modules with microservices. Each of them has a Dockerfile in its root directory. Here’s a typical Dockerfile for our microservice built on top of Spring Boot.

FROM openjdk:8-jre-alpine
ENV APP_FILE employee-service-1.0-SNAPSHOT.jar
ENV APP_HOME /app
EXPOSE 8090
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

6. Building pipeline through Jenkinsfile

This step is the most important phase of our exercise. We will prepare a pipeline definition, which ties together all the tools and solutions discussed so far. This pipeline definition is a part of every sample application’s source code. A change in the Jenkinsfile is treated the same as a change in the source code responsible for implementing business logic.
Every pipeline is divided into stages. Every stage defines a subset of tasks performed through the entire pipeline. We can select the node responsible for executing the pipeline’s steps, or leave it empty to allow random selection of the node. Because we have already prepared a dedicated node with Docker, we force the pipeline to be built by that node. In the first stage, called Checkout, we pull the source code from the Git repository (1). Then we build the application binary using a Maven command (2). Once the fat JAR file has been prepared we may proceed to building the application’s Docker image (3). We use methods provided by the Docker Pipeline Plugin. Finally, we push the Docker image with the fat JAR file to the secured private Docker registry (4). Such an image may be accessed by any machine that has Docker installed and has access to our Docker registry. Here’s the full code of the Jenkinsfile prepared for the module config-service.

node('dind-node') {
    stage('Checkout') { // (1)
      git url: 'https://github.com/piomin/sample-spring-microservices-new.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Build') { // (2)
      dir('config-service') {
        sh 'mvn clean install'
        def pom = readMavenPom file:'pom.xml'
        print pom.version
        env.version = pom.version
        currentBuild.description = "Release: ${env.version}"
      }
    }
    stage('Image') {
      dir ('config-service') {
        docker.withRegistry('https://192.168.99.100:5000') {
          def app = docker.build "piomin/config-service:${env.version}" // (3)
          app.push() // (4)
        }
      }
    }
}

7. Creating Pipeline in Jenkins Web Dashboard

After preparing the application’s source code, Dockerfile and Jenkinsfile the only thing left is to create a pipeline using the Jenkins UI. We need to select New Item -> Pipeline and type the name of our first Jenkins pipeline. Then go to the Configure panel and select Pipeline script from SCM in the Pipeline section. In the following form we should fill in the address of the Git repository, user credentials and the location of the Jenkinsfile.

art-docker-7

8. Configure GitLab WebHook (Optional)

If you run GitLab locally using its Docker image you will be able to configure a webhook, which triggers a run of your pipeline after pushing changes to the Git repository. To run GitLab using Docker execute the following command.

$ docker run -d --name gitlab -p 10443:443 -p 10080:80 -p 10022:22 gitlab/gitlab-ce:latest

Before configuring the webhook in the GitLab dashboard we need to enable this feature for the Jenkins pipeline. To achieve that we should first install the GitLab Plugin.

art-docker-8

Then, you should come back to the pipeline’s configuration panel and enable the GitLab build trigger. After that, the webhook will be available for our sample pipeline called config-service-pipeline under the URL http://192.168.99.100:38080/project/config-service-pipeline as shown in the following picture.

art-docker-9

Before proceeding to the configuration of the webhook in the GitLab dashboard you should retrieve your Jenkins user API token. To do that go to the profile panel, select Configure and click the Show API Token button.

art-docker-10

To add a new webhook for your Git repository, you need to go to the section Settings -> Integrations and then fill the URL field with the webhook address copied from the Jenkins pipeline. Then paste the Jenkins user API token into the Secret Token field. Leave the Push events checkbox selected.

art-docker-11

9. Running pipeline

Now, we may finally run our pipeline. If you use the GitLab Docker container as your Git repository platform you just have to push changes to the source code. Otherwise you have to start the pipeline build manually. The first build will take a few minutes, because Maven has to download the dependencies required for building the application. If everything ends with success you should see the following result on your pipeline dashboard.

art-docker-13

You can check out the list of images stored in your private Docker registry by calling the following HTTP API endpoint in your web browser: https://192.168.99.100:5000/v2/_catalog.

art-docker-12


How to setup Continuous Delivery environment
https://piotrminkowski.com/2017/02/10/how-to-setup-continuous-delivery-environment/
Fri, 10 Feb 2017

I have already read some interesting articles and books about Continuous Delivery, because I had to set it up inside my organization. The last document on this subject I can recommend is the DZone Guide to DevOps. If you are interested in this area of software development it can be really enlightening reading for you. The main purpose of my article is to show the practical side of Continuous Delivery – the tools which can be used to build such an environment. I’m going to show how to build a professional Continuous Delivery environment using:

  • Jenkins – most popular open source automation server
  • GitLab – web-based Git repository manager
  • Artifactory – open source Maven repository manager
  • Ansible – simple open source automation engine
  • SonarQube – open source platform for continuous code quality

Here’s a picture showing our continuous delivery environment.

continuous_delivery

The changes pushed to the Git repository managed by the GitLab server are automatically propagated to Jenkins using a webhook. We enable push and merge request triggers. SSL verification will be disabled. In the URL field we have to put the Jenkins pipeline address with authentication credentials (user and password) and a secret token. This is the API token visible in the Jenkins user profile under the Configure tab.

webhook

Here’s the Jenkins pipeline configuration in the ‘Build triggers’ section. We have to enable the option ‘Build when a change is pushed to GitLab‘. The GitLab CI Service URL is the address we have already set in the GitLab webhook configuration. Push and merge request triggers are enabled for all branches. An additional restriction for branch filtering can also be added: by name or by regex. To support this kind of trigger in Jenkins you need to have the GitLab plugin installed.

jenkins

There are two kinds of events which trigger a Jenkins build:

  • push – a change in the source code is pushed to the Git repository
  • merge request – a change in the source code is pushed to one branch and then the committer creates a merge request to the build branch from the GitLab management console

If you would like to use the first option you have to disable protection of the build branch to enable direct pushes to that branch. If you use merge requests, branch protection needs to be activated.

protection

Creating a merge request from the GitLab console is very intuitive. Under the ‘Merge request’ section we select the source and target branches and confirm the action.

merge

Ok, many words about GitLab and Jenkins integration… Now you know how to configure it. You only have to decide if you prefer push or merge request triggers in your continuous delivery configuration. Merge requests are used for code review in GitLab, so they are a useful additional step in your continuous pipeline. Let’s move on. We have to install some other plugins in Jenkins to integrate it with Artifactory, SonarQube and Ansible. Here’s the full list of Jenkins plugins I used for the continuous delivery process inside my organization:

Here’s the configuration of my Jenkins pipeline for a sample Maven project.

[code]
node {

    withEnv(["PATH+MAVEN=${tool 'Maven3'}/bin"]) {

        stage('Checkout') {
            def branch = env.gitlabBranch
            env.branch = branch
            git url: 'http://172.16.42.157/minkowp/start.git', credentialsId: '5693747c-2f45-4557-ada2-a1da9bbfe0af', branch: branch
        }

        stage('Test') {
            def pom = readMavenPom file: 'pom.xml'
            print "Build: " + pom.version
            env.POM_VERSION = pom.version
            sh 'mvn clean test -Dmaven.test.failure.ignore=true'
            junit '**/target/surefire-reports/TEST-*.xml'
            currentBuild.description = "v${pom.version} (${env.branch})"
        }

        stage('QA') {
            withSonarQubeEnv('sonar') {
                sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.2:sonar'
            }
        }

        stage('Build') {
            def server = Artifactory.server "server1"
            def buildInfo = Artifactory.newBuildInfo()
            def rtMaven = Artifactory.newMavenBuild()
            rtMaven.tool = 'Maven3'
            rtMaven.deployer releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local', server: server
            rtMaven.resolver releaseRepo: 'remote-repos', snapshotRepo: 'remote-repos', server: server
            rtMaven.run pom: 'pom.xml', goals: 'clean install -Dmaven.test.skip=true', buildInfo: buildInfo
            publishBuildInfo server: server, buildInfo: buildInfo
        }

        stage('Deploy') {
            dir('ansible') {
                ansiblePlaybook playbook: 'preprod.yml'
            }
            mail from: 'ci@example.com', to: 'piotr.minkowski@play.pl', subject: "New version of start: '${env.POM_VERSION}'", body: "Deployed new version of start '${env.POM_VERSION}' to the preproduction environment."
        }

    }
}
[/code]

There are five stages in my pipeline:

  1. Checkout – source code checkout from the Git branch. The branch name is sent as a parameter by the GitLab webhook
  2. Test – running JUnit tests, creating a test report visible in Jenkins, and changing the job description
  3. QA – running source code scanning using the SonarQube scanner
  4. Build – building the package, resolving artifacts from Artifactory, and publishing the new application release to Artifactory
  5. Deploy – deploying the application package and configuration on the server using Ansible

According to the Ansible website, it is a simple automation language that can perfectly describe an IT application infrastructure. It’s easy to learn, self-documenting, and doesn’t require a grad-level computer science degree to read. Ansible uses SSH keys to authenticate on the remote host, so you have to put your SSH key into the authorized_keys file on the remote host before running Ansible commands against it. The main idea is to create a playbook with a set of Ansible commands. Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Here is the directory structure with the Ansible configuration for the application deployment.

start_ansible
Here’s my Ansible playbook code. It defines the remote host, the user to connect as, and the role name. This file is used inside the Jenkins pipeline in the ansiblePlaybook step.

[code]
- hosts: pBPreprod
  remote_user: default

  roles:
    - preprod
[/code]
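Ansible resolves the pBPreprod host group above through an inventory file. Here is a hypothetical inventory entry (the IP address is an example, not from the original article):

```ini
# Example inventory (hosts) file; pBPreprod matches the hosts entry in the playbook.
[pBPreprod]
192.168.99.110
```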

Here’s the main.yml file where we define the set of Ansible commands to run on the remote server.

[code]
- block:
    - name: Copy configuration file
      template: src=config.yml.j2 dest=/opt/start/config.yml

    - name: Copy jar file
      copy: src=../target/start.jar dest=/opt/start/start.jar

    - name: Run jar file
      shell: java -jar /opt/start/start.jar
[/code]

You can check the build results in the Jenkins console. There is also a nice pipeline visualization with stage execution times. Each build history record has a link to the Artifactory build information and the SonarQube scanner report.

jenkins
