odo Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/odo/)

Java Development with Odo on Podman, Kubernetes and OpenShift
https://piotrminkowski.com/2024/03/15/java-development-with-odo-on-podman-kubernetes-and-openshift/
Fri, 15 Mar 2024

In this article, you will learn how to develop and deploy Java apps on Podman, Kubernetes, and OpenShift with odo. Odo is a fast and iterative CLI tool for developers who want to write, build, and deploy applications on Kubernetes-native environments. Thanks to odo, you can focus on the most important aspect of programming – code. I already wrote an article about that tool on my blog a few years ago. However, a lot has changed since then.

Today, we will also focus more on Podman, and especially Podman Desktop, as an alternative to Docker Desktop for local development. You will learn how to integrate the odo CLI with Podman. We will also use Podman Desktop for creating Kubernetes clusters and switching between several Kubernetes contexts. Our sample Java app is written in Spring Boot, exposes some REST endpoints over HTTP, and connects to a Postgres database. Let's begin!

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The sample Spring Boot app is located inside the micro-springboot/person-service directory. Once you clone the repo and go to that directory, you should just follow my further instructions.

Create Sample Spring Boot App

The app source code is not the most important thing in our exercise. However, let’s do a quick recap of its main parts. Here’s the Maven pom.xml with a list of dependencies. It includes standard Spring Boot starters for exposing REST endpoints and integrating with the Postgres database through JPA. It also uses additional libraries for generating OpenAPI docs (Springdoc) and creating entity views (Blaze Persistence).

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
  <version>2.4.0</version>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-spring-data-3.1</artifactId>
  <version>${blaze.version}</version>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-hibernate-6.2</artifactId>
  <version>${blaze.version}</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-entity-view-processor</artifactId>
  <version>${blaze.version}</version>
</dependency>

Here’s our @Entity model class:

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String firstName;
    private String lastName;
    private int age;
    @Enumerated(EnumType.STRING)
    private Gender gender;
    private Integer externalId;

    // getters and setters omitted for brevity
}

Here’s the entity view interface used for returning persons in the REST endpoint. We can leverage the Blaze Persistence library to map between the JPA entity and a DTO view object.

@EntityView(Person.class)
public interface PersonView {

   @IdMapping
   Integer getId();
   void setId(Integer id);

   @Mapping("CONCAT(firstName,' ',lastName)")
   String getName();
   void setName(String name);
}

There are two repository interfaces. The first one is used for modifications. It extends the standard Spring Data JPA CrudRepository.

public interface PersonRepository extends CrudRepository<Person, Integer> {}

The second one is dedicated to read operations only. It extends the Blaze Persistence EntityViewRepository interface.

@Transactional(readOnly = true)
public interface PersonViewRepository extends EntityViewRepository<PersonView, Integer> {
   PersonView findByAgeGreaterThan(int age);
}

In the @RestController implementation, we use both repository beans. Depending on the operation type, the API method uses the Spring Data JPA PersonRepository or the Blaze Persistence PersonViewRepository.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private static final Logger LOG = LoggerFactory
      .getLogger(PersonController.class);
   private final PersonRepository repository;
   private final PersonViewRepository viewRepository;

   public PersonController(PersonRepository repository, 
                           PersonViewRepository viewRepository) {
      this.repository = repository;
      this.viewRepository = viewRepository;
   }

   @GetMapping
   public List<PersonView> getAll() {
       LOG.info("Get all persons");
       return (List<PersonView>) viewRepository.findAll();
   }

   @GetMapping("/{id}")
   public PersonView getById(@PathVariable("id") Integer id) {
      LOG.info("Get person by id={}", id);
      return viewRepository.findOne(id);
   }

   @GetMapping("/age/{age}")
   public PersonView getByAgeGreaterThan(@PathVariable("age") int age) {
      LOG.info("Get person by age={}", age);
      return viewRepository.findByAgeGreaterThan(age);
   }

   @DeleteMapping("/{id}")
   public void deleteById(@PathVariable("id") Integer id) {
      LOG.info("Delete person by id={}", id);
      repository.deleteById(id);
   }

   @PostMapping
   public Person addNew(@RequestBody Person person) {
      LOG.info("Add new person: {}", person);
      return repository.save(person);
   }

   @PutMapping
   public void update(@RequestBody Person person) {
      repository.save(person);
   }
}

Here are the app configuration settings in the Spring Boot application.yml file. The app creates the database schema on startup and uses environment variables to establish a connection with the target database.

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://${DATABASE_HOST}:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASS}
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        format_sql: true
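
To see how those placeholders resolve, we can export the same variables locally and print the resulting JDBC URL. The values below are an assumption for illustration; they mirror the devfile environment shown later in this article.

```shell
# Values mirroring the devfile environment (assumed for illustration)
export DATABASE_HOST=localhost
export DATABASE_NAME=sampledb
export DATABASE_USER=springboot
export DATABASE_PASS=springboot123

# The JDBC URL Spring Boot assembles from application.yml
url="jdbc:postgresql://${DATABASE_HOST}:5432/${DATABASE_NAME}"
echo "$url"   # → jdbc:postgresql://localhost:5432/sampledb
```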

Now, our goal is to build the app container image and then run it on the target environment for further development. In our case, it is a Kubernetes or OpenShift cluster. We need a tool that doesn't require a deep understanding of Kubernetes or even Podman. Here comes odo.

Create and Manage Devfiles

In the first step, we need to configure our project. Odo is based on Devfile, an open standard for defining containerized development environments. Once we execute the odo init command inside the app root directory and choose the app type, it generates the devfile.yaml automatically. Of course, we can also keep the devfile.yaml in the Git repo. Thanks to that, we don't need to initialize the odo configuration after cloning the repo. That's the case in my repository.
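
If you start from scratch instead of cloning my repo, initialization is a single command. The non-interactive flags shown here are a sketch based on the odo v3 CLI; running plain odo init walks you through the same choices interactively:

$ odo init --name person-service --devfile java-springboot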

After generating the devfile, we need to change it a little bit. First of all, I switched to the Java 21 base image instead of the Java 17 image generated by the odo init command. We will also add the environment variables used by the Spring Boot app to establish a connection with the Postgres database. Here's the updated fragment of the devfile.yaml responsible for running the app container.

components:
- container:
    command:
    - tail
    - -f
    - /dev/null
    endpoints:
    - name: http-springboot
      targetPort: 8080
    - exposure: none
      name: debug
      targetPort: 5858
    env:
    - name: DEBUG_PORT
      value: "5858"
    - name: DATABASE_HOST
      value: localhost
    - name: DATABASE_USER
      value: springboot
    - name: DATABASE_PASS
      value: springboot123
    - name: DATABASE_NAME
      value: sampledb
    image: registry.access.redhat.com/ubi9/openjdk-21:latest

I also included an additional container with the Postgres database. Thanks to that, odo will not only build and run the app container but also the container with the database required by that app. We use the registry.redhat.io/rhel9/postgresql-15 image from the official Red Hat registry. We can set a default username, password, and database using the POSTGRESQL_* environment variables supported by that image.

- name: postgresql
  container:
    image: registry.redhat.io/rhel9/postgresql-15
    env:
      - name: POSTGRESQL_USER
        value: springboot
      - name: POSTGRESQL_PASSWORD
        value: springboot123
      - name: POSTGRESQL_DATABASE
        value: sampledb
    endpoints:
      - name: postgresql
        exposure: internal
        targetPort: 5432
        attributes:
          discoverable: 'true'
    memoryLimit: 512Mi
    mountSources: true
    volumeMounts:
      - name: postgresql-storage
        path: /var/lib/postgresql/data
- name: postgresql-storage
  volume:
    size: 256Mi

Here’s the whole devfile.yaml after our customizations. Of course, you can find it inside the GitHub repository.

commands:
- exec:
    commandLine: mvn clean -Dmaven.repo.local=/home/user/.m2/repository package -Dmaven.test.skip=true
    component: tools
    group:
      isDefault: true
      kind: build
    workingDir: ${PROJECT_SOURCE}
  id: build
- exec:
    commandLine: mvn -Dmaven.repo.local=/home/user/.m2/repository spring-boot:run
    component: tools
    group:
      isDefault: true
      kind: run
    workingDir: ${PROJECT_SOURCE}
  id: run
- exec:
    commandLine: java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar target/*.jar
    component: tools
    group:
      isDefault: true
      kind: debug
    workingDir: ${PROJECT_SOURCE}
  id: debug
components:
- container:
    command:
    - tail
    - -f
    - /dev/null
    endpoints:
    - name: http-springboot
      targetPort: 8080
    - exposure: none
      name: debug
      targetPort: 5858
    env:
    - name: DEBUG_PORT
      value: "5858"
    - name: DATABASE_HOST
      value: localhost
    - name: DATABASE_USER
      value: springboot
    - name: DATABASE_PASS
      value: springboot123
    - name: DATABASE_NAME
      value: sampledb
    image: registry.access.redhat.com/ubi9/openjdk-21:latest
    memoryLimit: 768Mi
    mountSources: true
    volumeMounts:
    - name: m2
      path: /home/user/.m2
  name: tools
- name: postgresql
  container:
# uncomment for Kubernetes
#    image: postgres:15
#    env:
#      - name: POSTGRES_USER
#        value: springboot
#      - name: POSTGRES_PASSWORD
#        value: springboot123
#      - name: POSTGRES_DB
#        value: sampledb
    image: registry.redhat.io/rhel9/postgresql-15
    env:
      - name: POSTGRESQL_USER
        value: springboot
      - name: POSTGRESQL_PASSWORD
        value: springboot123
      - name: POSTGRESQL_DATABASE
        value: sampledb
    endpoints:
      - name: postgresql
        exposure: internal
        targetPort: 5432
        attributes:
          discoverable: 'true'
    memoryLimit: 512Mi
    mountSources: true
    volumeMounts:
      - name: postgresql-storage
        path: /var/lib/postgresql/data
- name: postgresql-storage
  volume:
    size: 256Mi
- name: m2
  volume:
    size: 3Gi
metadata:
  description: Java application using Spring Boot® and OpenJDK 21
  displayName: Spring Boot®
  globalMemoryLimit: 2674Mi
  icon: https://raw.githubusercontent.com/devfile-samples/devfile-stack-icons/main/spring.svg
  language: Java
  name: person-service
  projectType: springboot
  tags:
  - Java
  - Spring
  version: 1.3.0
schemaVersion: 2.1.0

Prepare Development Environment with Podman

We have everything ready on the application side. Now, it is time to prepare a dev environment. Let's run Podman Desktop. Podman Desktop comes with several useful features that simplify interaction with Kubernetes or OpenShift clusters for developers. It provides plugins that allow us to install OpenShift Local, Minikube, or Kind on the laptop. We can also leverage a remote instance of OpenShift on the Developer Sandbox portal. It is active for 30 days and may be renewed.

odo-podman-kubernetes-desktop


Podman has one important advantage over Docker: it supports the Pod concept. For example, if our app container requires a Postgres container, with Podman we do not need to bind that database to a routable network. Within a pod, each container binds to the localhost address, and all containers in that pod can reach it thanks to the shared network namespace. As a result, switching between Podman and Kubernetes with odo is very simple. In fact, we don't need to change anything in the configuration.
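
The same idea can be illustrated with plain Podman commands. This is only a sketch of what odo does for us under the hood; the pod name, port, and the person-service image are illustrative values, not taken from the article:

$ podman pod create --name person-pod -p 8080:8080
$ podman run -d --pod person-pod -e POSTGRES_PASSWORD=pass postgres:15
$ podman run -d --pod person-pod person-service:latest

Because both containers joined person-pod, the app reaches Postgres on localhost:5432 without any extra network configuration.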

For running the Kubernetes cluster on the local machine, we will use Minikube. You can install and run Minikube by yourself or create it using Podman Desktop. With Podman Desktop, we need to go to the Settings -> Resources section. Then find the Minikube tile as shown below and click the "Create new ..." button.

odo-podman-kubernetes-minikube

Podman Desktop redirects us to the window with the creation form. Then, choose your preferred settings and click Create.

In order to create an OpenShift cluster we use a managed service available on the Red Hat Developer Sandbox website. Then, let’s choose “Start your Sandbox for free”.

You need to create an account on Red Hat Developer or sign in if you already have one. After that, you will be redirected to the Red Hat Hybrid Cloud Console, where you should choose the Launch option on the "Red Hat OpenShift" tile as shown below.

In the OpenShift Console click on your username in the top-right corner and choose “Copy login command”.

odo-podman-kubernetes-ocp

Then, we can go back to Podman Desktop and paste the copied command in the Settings -> Resources -> Developer Sandbox section.

Deploy App with Database on Podman

Finally, let’s move from theory to practice. Assuming we have Podman running on our laptop, we can deploy our app with odo there by executing the following command:

$ odo dev --platform podman

The odo dev command deploys the app on the target environment and waits for any changes in the source files. Whenever a change occurs, it redeploys the app.

The odo command exposes both the app HTTP port and the Postgres port outside Podman with port forwarding. For example, our Spring Boot app is available on local port 20002. You can run the Swagger UI and call some endpoints to test the functionality.
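
Besides the Swagger UI, we can call the endpoints directly with curl. The local port (20002 here) is the one odo dev printed on my machine and may differ on yours; the gender value is an assumption, since the Gender enum constants are not shown above:

$ curl http://localhost:20002/persons
$ curl -X POST http://localhost:20002/persons \
    -H "Content-Type: application/json" \
    -d '{"firstName":"John","lastName":"Smith","age":30,"gender":"MALE"}'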

odo-podman-kubernetes-swagger

Our app container can connect to the database over localhost, because both containers are running inside a single pod.

odo-podman-kubernetes-pods

Deploy App with a Database on Kubernetes

Then, we can switch to the Kubernetes cluster with our app. In this exercise, we will use Minikube. We can easily create and run the Minikube instance using the Podman Desktop plugin.

Once we create such an instance, the Kubernetes context is switched automatically. We can check it out with Podman Desktop as shown below.

Let's deploy our app on Minikube. If we don't set the --platform option, odo deploys to the cluster pointed to by the current Kubernetes context.

$ odo create namespace springboot
$ odo dev

Here's the command result for my Minikube instance. Thanks to automatic port forwarding, we can access the app exactly the same way as before with Podman.

We can display a list of running pods with the following command:

kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
person-service-app-b69bf8f7d-hk555   2/2     Running   0          2m50s
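
The 2/2 value in the READY column confirms that both containers defined in the devfile (the tools container and Postgres) run inside a single pod. As a quick sanity check, we can extract that column from a captured line of the output above:

```shell
# Sample line captured from the `kubectl get po` output above
line="person-service-app-b69bf8f7d-hk555   2/2     Running   0          2m50s"

# The second whitespace-separated field is the READY count
ready=$(echo "$line" | awk '{print $2}')
echo "$ready"   # → 2/2
```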

The full required YAML is generated automatically based on the devfile (e.g. environment variables). The only thing I changed for Kubernetes is the Postgres image used by odo. Instead of the image from the Red Hat registry, we are just using the official Postgres image from Docker Hub.

- name: postgresql
  container:
    image: postgres:15
    env:
      - name: POSTGRES_USER
        value: springboot
      - name: POSTGRES_PASSWORD
        value: springboot123
      - name: POSTGRES_DB
        value: sampledb

Deploy App with a Database on OpenShift

We can deploy the sample Spring Boot app on OpenShift exactly the same way as on Kubernetes. The only thing that changes is the Postgres image. This time, we go back to the configuration used when deploying on Podman. Instead of the image from Docker Hub, we will use the registry.redhat.io/rhel9/postgresql-15 image.

As I mentioned before, we will use the remote OpenShift cluster on the Developer Sandbox. Podman Desktop provides the plugin for Developer Sandbox. With that plugin, we can map the OpenShift context to a specific name like dev-sandbox-context.

odo-podman-kubernetes-developer-sandbox

Then we can switch to the Kubernetes context related to Developer Sandbox using the Podman Desktop.

Finally, let’s run the app on the cluster with the following command:

$ odo dev

Here’s the output after running the odo dev command:

odo-podman-kubernetes-dev

We can verify that a pod is running on the OpenShift cluster. Just go to the Workloads -> Pods section in the OpenShift Console.

odo-podman-kubernetes-openshift-console

Thanks to automatic port-forwarding we can access the app on the local port. However, we can expose the service outside OpenShift with the Route object. Firstly, let’s display a list of Kubernetes services using Podman Desktop.

Then, we need to create the Route object with the following command:

$ oc expose svc/person-service-app 
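
Since oc expose names the Route after the service by default, we can read the generated host back with the following command (assuming the default route name):

$ oc get route person-service-app -o jsonpath='{.spec.host}'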

Here’s our Route visible in the OpenShift Console. To access it we need to open the following address in the web browser:

odo-podman-kubernetes-openshift-route

Finally, let’s access the app Swagger UI using the exposed URL address:

Final Thoughts

With Podman and the odo CLI, we can configure our development space and easily run apps across different containerized and Kubernetes-native environments. Thanks to the Devfile standard, odo can run the app in the same way on Podman, Kubernetes, and OpenShift. You can control the whole process using Podman Desktop.

Development with OpenShift Dev Spaces
https://piotrminkowski.com/2022/11/17/development-with-openshift-dev-spaces/
Thu, 17 Nov 2022

In this article, you will learn how to use OpenShift Dev Spaces to simplify the development of containerized apps. OpenShift Dev Spaces is a Red Hat product based on the open-source Eclipse Che project optimized for running on OpenShift. Eclipse Che allows you to use your favorite IDE directly on Kubernetes. However, it is not just a web-based IDE running in containers. It is also a concept that helps to organize software-defined developer environments inside your Kubernetes cluster.

If you are interested in similar articles about the differences between OpenShift and vanilla Kubernetes, you can read my previous post about GitOps and multi-cluster environments. In the current article, we will also discuss the odo tool. If you need more information about it, go to the following post.

Introduction

Eclipse Che is a Kubernetes-native IDE and developer collaboration platform. OpenShift Dev Spaces is built on top of Eclipse Che and allows you to run it easily on OpenShift. We can install Dev Spaces using an operator. After that, you will get a ready platform that automatically integrates with the OpenShift authorization mechanism.

In this article, I'll show you step-by-step how to install Dev Spaces on the OpenShift platform. However, you can also try a hosted option. By default, OpenShift Dev Spaces runs as part of the Developer Sandbox. The Developer Sandbox gives you immediate access to a cloud-managed OpenShift cluster for 30 days. You don't have to install and configure anything there. Since everything is ready for use, you just access your Dev Spaces dashboard to start development in one of the available IDEs, including Theia, IntelliJ, and Visual Studio.

The picture visible below shows the architecture of our solution. Let's imagine there are many developers working with our instance of OpenShift. First, they need to log in to the OpenShift cluster. Once they do, they can access the Dev Spaces dashboard. Dev Spaces automatically creates a namespace for a developer based on the username. It also automatically starts a pod containing our IDE after we choose a Git repository with the app source code. Then we can use OpenShift developer tools to easily build the app from the source code and deploy it in the current namespace.

openshift-dev-spaces-arch

Prerequisites

In order to do the whole exercise with me, you need to have a running instance of OpenShift. Of course, you can use a developer sandbox, but you may have only a single user there. There are various methods of running an OpenShift instance, including a local instance or cloud-managed instances on AWS or Azure. You can find detailed information about all available installation methods here. For running on a local computer, use OpenShift Local.

Install and Configure Dev Spaces on OpenShift

Once we have a running instance of OpenShift, we may proceed to the Dev Spaces installation. You need to go to the "Operator Hub" in the OpenShift Console and choose the Red Hat OpenShift Dev Spaces operator. You can install it using the default settings. That operator will also automatically install another operator: DevWorkspace. After you install Dev Spaces, you need to create an instance of CheCluster. You can find a link in the "Provided APIs" section.

openshift-dev-spaces-install

Then, you need to click the "Create CheCluster" button. You will be redirected to the creation form, where you can leave the defaults. I created my instance of CheCluster in the spaces namespace.

After creating the CheCluster, we will switch to the spaces namespace for a moment just to verify whether everything works fine.

$ oc project spaces

You should see a similar list of pods to mine:

$ oc get pod
NAME                                   READY   STATUS    RESTARTS   AGE
che-gateway-548fdd95b5-zhczp           4/4     Running   0          1m
devfile-registry-6cbbc6c87b-hzdcb      1/1     Running   0          1m
devspaces-86cfb5b664-bqs7l             1/1     Running   0          1m
devspaces-dashboard-56b68b4649-xlrgc   1/1     Running   0          1m
plugin-registry-89f7d7684-pw9wg        1/1     Running   0          1m
postgres-6cb6cb646f-6dvbq              1/1     Running   0          1m

You can easily access Dev Spaces dashboard through the DNS address using the OpenShift Route object.
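
Assuming the CheCluster was created in the spaces namespace as above, the dashboard host can be read from the Route list (the exact route name depends on your installation):

$ oc get route -n spaces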

Use Dev Spaces on OpenShift

I have already created three users on OpenShift: user1, user2, and user3. We can use simple htpasswd authentication for that. At the beginning, those users do not have access to any project (or namespace) on OpenShift. I also have the admin user for managing the installation and viewing the status across all the namespaces.

Now, we will access the Dev Spaces dashboard as each user, one by one. Let's see how it looks for the first user – user1. You can just put the address of the Git repository with the app source code and create a workspace. There are also some example repositories available, but you can use my repository containing some simple Quarkus apps.

openshift-dev-spaces-empty-workspace

By default, OpenShift Dev Spaces runs Theia as the developer IDE. Since we would like to use IntelliJ, we need to customize the workspace creation command. We could pass the name of our IDE using the devfile.yaml file in the root directory of our repository. But we can also pass it in the URL. The picture visible below illustrates the algorithm used for customizing workspace creation via the URL. We just need to pass the repository URL and set the che-editor parameter to the che-incubator/che-idea/latest value.
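
Putting it together, the workspace-creation URL is the dashboard address, a # separator, the repository URL, and the che-editor parameter. Both hosts below are placeholders for illustration; substitute your own Dev Spaces Route host and repository URL:

```shell
# Placeholder values: your Dev Spaces host and Git repository URL
dashboard="https://devspaces.apps.example.com"
repo="https://github.com/example/sample-quarkus-applications.git"

factory_url="${dashboard}#${repo}?che-editor=che-incubator/che-idea/latest"
echo "$factory_url"
```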

Finally, we will start our new workspace. We need to wait a moment until the pod with our IDE starts.

Once it is ready for use, you will be redirected to the URL of your IDE. Now, you can start development 🙂

If you go back for a moment to the Dev Spaces dashboard, you will see the new workspace on the list of available workspaces for user1. You can perform some actions on workspaces, like restart or deletion.

Now, we will repeat exactly the same steps for user2 and user3. All these users will have their own instances of Dev Spaces in separate <username>-devspaces namespaces. Let's display a list of DevWorkspace objects across all the namespaces. They represent all the existing workspaces inside the whole OpenShift cluster. We can verify the status of each workspace (Running, Starting, Stopped) and its URL.

$ oc get devworkspaces -A
NAMESPACE            NAME                          DEVWORKSPACE ID             PHASE      INFO
user1-devspaces      sample-quarkus-applications   workspace4b02dc6434b54a0e   Running    https://devspaces.apps.cluster-rh520e.gcp.redhatworkshops.io/workspace4b02dc6434b54a0e/idea-rhel8/8887/?backgroundColor=434343&wss
user2-devspaces      sample-quarkus-applications   workspaceadfbc8426d774988   Running    https://devspaces.apps.cluster-rh520e.gcp.redhatworkshops.io/workspaceadfbc8426d774988/idea-rhel8/8887/?backgroundColor=434343&wss
user3-devspaces      sample-quarkus-applications   workspace810c8d6cdb1a4c7d   Starting   Waiting for workspace deployment

Use IntelliJ on OpenShift

After running the IntelliJ instance on OpenShift, we can verify some settings. Of course, you can do everything the same as in standard IntelliJ on your computer. But what is important here, our development environment is preconfigured and ready for work. There are OpenJDK, Maven, and the oc client installed and configured. Moreover, there is also the odo client, which is used to build and deploy the app directly from the local version of the source code. The user is currently logged in to the OpenShift cluster. Since we are inside the cluster, we can interact with it using internal networking. If we still need to install some additional components, we can prepare our own version of the devfile.yaml and put it e.g. in the Git repository root directory.

openshift-dev-spaces-oc

One of the most important things here is that you are interacting with the OpenShift cluster internally. That has a huge impact on deployment time when using inner-development-loop tools like odo. That's because you don't have to upload the source code over the network. Let's just try it. As I mentioned before, odo is installed in your workspace by default. So now, the only thing you need to do is to choose one app from our sample Quarkus Git repository. For me, it is person-service.

$ cd person-service

In the first step, we need to create an app with odo. There are several components available depending on the language or even the framework. You can list all of them by running the following command: odo catalog list components. Since our code is written in Quarkus, we will choose the java-quarkus component.

$ odo create java-quarkus person

In order to build and deploy the app on OpenShift, just run the following command.

$ odo push

Let's analyze what happened. Here is the output from the odo push command. It automatically creates a Route to expose the app outside the cluster. After performing the Maven build, it finally pushes the app with the name person to OpenShift.

To view the status of the cluster we can install the OpenShift Toolkit plugin in IntelliJ.

Let's display a list of deployments in the user1-devspaces namespace. As you can see, our Quarkus app is deployed under the person-app name. I also had to deploy PostgreSQL (person-db) on OpenShift since our app connects to the database.

openshift-dev-spaces-status

Finally, if you want to have an inner-development loop with odo and Dev Spaces, just run the odo watch command in the IntelliJ terminal.

Customize OpenShift Dev Spaces

We can customize the behavior of Dev Spaces by modifying the CheCluster object. Here are the default settings. We can override, for example:

  • The namespace name template (1)
  • The duration after which a workspace is idled if there is no activity (2)
  • The maximum duration a workspace runs (3)
  • The storage strategy, from a per-user PVC to a mode where each workspace has its own individual PVC (4)
apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: spaces
spec:
  components:
    cheServer:
      debug: false
      logLevel: INFO
    database:
      credentialsSecretName: postgres-credentials
      externalDb: false
      postgresDb: dbche
      postgresHostName: postgres
      postgresPort: '5432'
      pvc:
        claimSize: 1Gi
    imagePuller:
      enable: false
    metrics:
      enable: true
  devEnvironments:
    defaultNamespace:
      template: <username>-devspaces # (1)
    secondsOfInactivityBeforeIdling: 1800 # (2)
    secondsOfRunBeforeIdling: -1 # (3)
    storage:
      pvcStrategy: per-user # (4)
  networking:
    auth:
      gateway:
        configLabels:
          app: che
          component: che-gateway-config

So, if there is no activity, Dev Spaces automatically destroys the pod with your IDE after 30 minutes. Of course, you can change the value of that timeout. You can also shut down a workspace manually by modifying the YAML of its DevWorkspace object: just set the spec.started parameter to false.
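
In practice, the manual shutdown comes down to this fragment of the DevWorkspace object (edit it with oc edit devworkspace, using one of the workspace names listed earlier):

```yaml
# Fragment of a DevWorkspace object; setting started to false stops its pod
spec:
  started: false
```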

Let’s verify the status of DevWorkspace objects after disabling all of them manually.

$ oc get devworkspace -A
NAMESPACE            NAME                          DEVWORKSPACE ID             PHASE     INFO
user1-devspaces      sample-quarkus-applications   workspace4b02dc6434b54a0e   Stopped   Stopped
user2-devspaces      sample-quarkus-applications   workspaceadfbc8426d774988   Stopped   Stopped
user3-devspaces      sample-quarkus-applications   workspace810c8d6cdb1a4c7d   Stopped   Stopped

Final Thoughts

OpenShift Dev Spaces helps you standardize the development process across the whole organization on OpenShift. Thanks to that tool, you can accelerate project and developer onboarding. As a zero-install development environment that runs in your browser, it makes it easy for anyone to join your team and contribute to a project. It may be especially useful for enabling a fast inner-development loop with remote Kubernetes or OpenShift clusters.

The post Development with OpenShift Dev Spaces appeared first on Piotr's TechBlog.

Java Development on OpenShift with odo https://piotrminkowski.com/2021/02/05/java-development-on-openshift-with-odo/ Fri, 05 Feb 2021 16:06:58 +0000

The post Java Development on OpenShift with odo appeared first on Piotr's TechBlog.

OpenShift Do (odo) is a CLI tool for running applications on OpenShift. In contrast to the oc client, it is a tool aimed at developers. It automates all the things required to deploy your application on OpenShift. Thanks to odo you can focus on the most important aspect – code. In order to start, you just need to download the latest release from GitHub and add it to your path.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains a simple Spring Boot application that we are going to deploy on OpenShift. Then you should just follow my instructions.

Before we begin with OpenShift odo

It is important to understand some key concepts related to odo before we begin. If you want to deploy a new application, you should create a component. Each component can be run and deployed separately. There are two types of components: Devfile and S2I. The S2I component is based on the Source-To-Image process, so your application is built on the server side with an S2I builder. With the Devfile component, the application is run with the mvn spring-boot:run command. Here’s the list of available components. We will use the java component.
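Assuming odo 2.x, you can print that list yourself with the catalog command:

```shell
$ odo catalog list components
```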

[Screenshot: the list of available odo components]

You can also check out the list of services by executing the command odo catalog list services. A service is a piece of software that your component links to or depends on. However, I won’t focus on that feature, since the Service Catalog is currently deprecated in OpenShift. Instead, you can use operators with odo the same way you would use templates.

The Spring Boot application

Let’s take a moment to discuss our sample Spring Boot application. It is a REST-based application that connects to MongoDB. We will use JDK 11 for compilation.

<groupId>pl.piomin.samples</groupId>
<artifactId>sample-spring-boot-on-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <java.version>11</java.version>
</properties>

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb</artifactId>
   </dependency>
   ...
</dependencies>

The MongoDB connection settings are defined inside application.yml. We will use environment variables to provide the credentials and database name, and we will set these values in the local configuration using odo.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}
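At startup, Spring resolves the ${…} placeholders above from environment variables. To illustrate the mechanism, here is a small standalone sketch — the class and method names are hypothetical, not part of the sample app:

```java
import java.util.Map;

// Hypothetical demo of how ${VAR} placeholders in the Mongo URI get expanded
// from environment variables; Spring performs this resolution internally.
public class MongoUriDemo {

    // Replace every ${KEY} occurrence in the template with its value.
    static String resolve(String template, Map<String, String> env) {
        String result = template;
        for (Map.Entry<String, String> e : env.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String uri = resolve(
                "mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}",
                Map.of("MONGO_USERNAME", "userJ5Q",
                       "MONGO_PASSWORD", "UrfgtUKohNOFVqbQ",
                       "MONGO_DATABASE", "sampledb"));
        // Prints the fully resolved connection string.
        System.out.println(uri);
    }
}
```

With the values we set later via odo, this yields mongodb://userJ5Q:UrfgtUKohNOFVqbQ@mongodb/sampledb.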

Creating configuration with odo

Ok, let’s begin by creating a new component. We need to choose the java S2I type and then set the name of our component.

$ odo create java sample-spring-boot

After that, odo creates a config.yaml file inside the .odo directory in the project root. It’s pretty simple.

kind: LocalConfig
apiversion: odo.dev/v1alpha1
ComponentSettings:
  Type: java
  SourceLocation: ./
  SourceType: local
  Ports:
  - 8443/TCP
  - 8778/TCP
  - 8080/TCP
  Application: app
  Project: pminkows-workshop
  Name: sample-spring-boot

Then, we will add some additional configuration properties. Firstly, we need to expose our application outside the cluster. The command below creates a Route to the Service on OpenShift on port 8080.

$ odo url create --port 8080

In the next step, we add the environment variables responsible for establishing the MongoDB connection to the odo configuration: MONGO_USERNAME, MONGO_PASSWORD, and MONGO_DATABASE. To keep things simple, I created a standalone instance of MongoDB on OpenShift using the template. The credential values are stored in the mongodb secret.

Let’s set the environment variables using the odo config command. Of course, all these settings still exist only in the local configuration.

$ odo config set --env MONGO_USERNAME=userJ5Q
$ odo config set --env MONGO_PASSWORD=UrfgtUKohNOFVqbQ
$ odo config set --env MONGO_DATABASE=sampledb 
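Assuming odo 2.x, you can inspect the accumulated local configuration at any time:

```shell
$ odo config view
```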

Finally, we are ready to deploy our application on OpenShift. To do that, we need to execute the odo push command. If you would like to see the logs from the build, add the --show-log option.

[Screenshot: output of the odo push command]

Instead of using environment variables to provide the MongoDB connection settings, you may take advantage of the odo link command. This feature connects an odo component to a service or to another component. However, to use it you first need to install the Service Binding Operator on OpenShift. Then you may install a database like MongoDB with an operator and link it to your application component.
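Assuming odo 2.x and an installed operator, that flow is roughly as follows — the names in angle brackets are placeholders, and the exact odo service create arguments depend on the operator’s custom resource:

```shell
$ odo service create <operator-backed-service>   # provision the database via an operator
$ odo link <service-name>                        # inject the binding into our component
$ odo push
```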

Verify S2I build on OpenShift

After running odo push, let’s switch to the oc client. Our application is running on OpenShift, and so is MongoDB.

We can also navigate to the OpenShift Management Console. The environment variables set inside the odo configuration have been applied to the DeploymentConfig.

Now, let’s say we need to make some changes in our code. Fortunately, we can execute a command that watches for changes in the current component’s directory. After detecting such a change, the new version of the application is immediately deployed on OpenShift.

$ odo watch
Waiting for something to change in /Users/pminkows/IdeaProjects/sample-spring-boot-on-kubernetes

Using OpenShift odo in Devfile mode

As an alternative to the S2I approach, we may use the Devfile mode. In order to do that, we will choose the java-springboot component.
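Assuming odo 2.x, the component is created the same way as before, just with a different type:

```shell
$ odo create java-springboot sample-spring-boot
```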

As a result, OpenShift odo creates devfile.yaml in the project root directory.

schemaVersion: 2.0.0
metadata:
  name: java-springboot
  version: 1.1.0
starterProjects:
  - name: springbootproject
    git:
      remotes:
        origin: "https://github.com/odo-devfiles/springboot-ex.git"
components:
  - name: tools
    container:
      image: quay.io/eclipse/che-java11-maven:nightly
      memoryLimit: 768Mi
      mountSources: true
      endpoints:
      - name: '8080-tcp'
        targetPort: 8080
      volumeMounts:
        - name: m2
          path: /home/user/.m2
  - name: m2
    volume:
      size: 3Gi
commands:
  - id: build
    exec:
      component: tools
      commandLine: "mvn clean -Dmaven.repo.local=/home/user/.m2/repository package -Dmaven.test.skip=true"
      group:
        kind: build
        isDefault: true
  - id: run
    exec:
      component: tools
      commandLine: "mvn -Dmaven.repo.local=/home/user/.m2/repository spring-boot:run"
      group:
        kind: run
        isDefault: true
  - id: debug
    exec:
      component: tools
      commandLine: "java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar target/*.jar"
      group:
        kind: debug
        isDefault: true

Of course, all the next steps are the same as for the S2I component.

Install OpenShift Intellij Plugin

If you do not like command-line tools, you may install the IntelliJ OpenShift plugin as well. It uses odo to build and deploy. The good news is that the latest version of this plugin supports odo version 2.0.3.

[Screenshot: the OpenShift view in the IntelliJ plugin]

Thanks to that plugin you can, for example, easily create a component with OpenShift odo.

Conclusion

With odo you can easily deploy your application on OpenShift in a few seconds. You may also continuously watch for changes in the code and immediately deploy a new version of the application. Moreover, you don’t need to create any Kubernetes YAML manifests.

OpenShift odo is similar to Skaffold. If you would like to compare these two tools for running a Spring Boot application, you may read the following article about Skaffold.
