Java Development with Odo on Podman, Kubernetes and OpenShift

Piotr's TechBlog, 15 March 2024
In this article, you will learn how to develop and deploy Java apps on Podman, Kubernetes, and OpenShift with odo. Odo is a fast and iterative CLI tool for developers who want to write, build, and deploy applications on Kubernetes-native environments. Thanks to odo you can focus on the most important aspect of programming – code. I already wrote an article about that tool on this blog a few years ago. However, a lot has changed since then.

Today, we will also focus more on Podman, and especially Podman Desktop, as an alternative to Docker Desktop for local development. You will learn how to integrate the odo CLI with Podman. We will also use Podman Desktop for creating Kubernetes clusters and switching between several Kubernetes contexts. Our sample Java app is written in Spring Boot, exposes some REST endpoints over HTTP, and connects to a Postgres database. Let's begin!

Source Code

If you would like to try it out yourself, you can take a look at my source code: just clone my GitHub repository. The sample Spring Boot app is located inside the micro-springboot/person-service directory. Once you clone the repo and go to that directory, you can follow my further instructions.

Create Sample Spring Boot App

The app source code is not the most important thing in our exercise. However, let’s do a quick recap of its main parts. Here’s the Maven pom.xml with a list of dependencies. It includes standard Spring Boot starters for exposing REST endpoints and integrating with the Postgres database through JPA. It also uses additional libraries for generating OpenAPI docs (Springdoc) and creating entity views (Blaze Persistence).

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
  <version>2.4.0</version>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-spring-data-3.1</artifactId>
  <version>${blaze.version}</version>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-hibernate-6.2</artifactId>
  <version>${blaze.version}</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-entity-view-processor</artifactId>
  <version>${blaze.version}</version>
</dependency>

Here’s our @Entity model class:

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String firstName;
    private String lastName;
    private int age;
    @Enumerated(EnumType.STRING)
    private Gender gender;
    private Integer externalId;

    // getters and setters omitted for brevity
}

Here’s the entity view interface used for returning persons in the REST endpoint. We can leverage the Blaze Persistence library to map between the JPA entity and a DTO view object.

@EntityView(Person.class)
public interface PersonView {

   @IdMapping
   Integer getId();
   void setId(Integer id);

   @Mapping("CONCAT(firstName,' ',lastName)")
   String getName();
   void setName(String name);
}

There are two repository interfaces. The first one is used for modifications and extends the standard Spring Data JPA CrudRepository.

public interface PersonRepository extends CrudRepository<Person, Integer> {}

The second one is dedicated to read operations. It extends the Blaze Persistence EntityViewRepository interface.

@Transactional(readOnly = true)
public interface PersonViewRepository extends EntityViewRepository<PersonView, Integer> {
   PersonView findByAgeGreaterThan(int age);
}

In the @RestController implementation, we use both repository beans. Depending on the operation type, the API method uses the Spring Data JPA PersonRepository or the Blaze Persistence PersonViewRepository.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private static final Logger LOG = LoggerFactory
      .getLogger(PersonController.class);
   private final PersonRepository repository;
   private final PersonViewRepository viewRepository;

   public PersonController(PersonRepository repository, 
                           PersonViewRepository viewRepository) {
      this.repository = repository;
      this.viewRepository = viewRepository;
   }

   @GetMapping
   public List<PersonView> getAll() {
       LOG.info("Get all persons");
       return (List<PersonView>) viewRepository.findAll();
   }

   @GetMapping("/{id}")
   public PersonView getById(@PathVariable("id") Integer id) {
      LOG.info("Get person by id={}", id);
      return viewRepository.findOne(id);
   }

   @GetMapping("/age/{age}")
   public PersonView getByAgeGreaterThan(@PathVariable("age") int age) {
      LOG.info("Get person by age={}", age);
      return viewRepository.findByAgeGreaterThan(age);
   }

   @DeleteMapping("/{id}")
   public void deleteById(@PathVariable("id") Integer id) {
      LOG.info("Delete person by id={}", id);
      repository.deleteById(id);
   }

   @PostMapping
   public Person addNew(@RequestBody Person person) {
      LOG.info("Add new person: {}", person);
      return repository.save(person);
   }

   @PutMapping
   public void update(@RequestBody Person person) {
      repository.save(person);
   }
}
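
One detail worth noting: the getAll() method casts the result of findAll() to a List. Repository findAll() methods following the Spring Data conventions are typically declared to return Iterable, so the cast only succeeds when the runtime type happens to be a List. A defensive alternative (a hedged sketch, not taken from the original source) is to copy the elements explicitly:

```java
import java.util.ArrayList;
import java.util.List;

public class IterableToList {

    // Copies any Iterable into a new ArrayList, avoiding the unchecked cast
    static <T> List<T> toList(Iterable<T> items) {
        List<T> result = new ArrayList<>();
        items.forEach(result::add);
        return result;
    }

    public static void main(String[] args) {
        List<String> out = toList(List.of("a", "b", "c"));
        System.out.println(out); // prints [a, b, c]
    }
}
```

With such a helper, getAll() could simply return toList(viewRepository.findAll()) without any cast.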

Here are the app configuration settings in the Spring Boot application.yml file. The app creates the database schema on startup and uses environment variables to establish a connection with the target database.

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://${DATABASE_HOST}:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASS}
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        format_sql: true
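
To make the placeholder resolution concrete, here is a minimal sketch (plain Java, not part of the app) of the datasource URL that Spring Boot assembles at startup, using the example values set in the devfile:

```java
import java.util.Map;

public class DatasourceUrlDemo {
    public static void main(String[] args) {
        // Example values matching the devfile env entries (DATABASE_HOST, DATABASE_NAME)
        Map<String, String> env = Map.of(
                "DATABASE_HOST", "localhost",
                "DATABASE_NAME", "sampledb");
        // Mirrors the ${DATABASE_HOST} and ${DATABASE_NAME} placeholders in application.yml
        String url = "jdbc:postgresql://" + env.get("DATABASE_HOST")
                + ":5432/" + env.get("DATABASE_NAME");
        System.out.println(url); // prints jdbc:postgresql://localhost:5432/sampledb
    }
}
```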

Now, our goal is to build the app container image and then run it on the target environment for further development. In our case, that is a Kubernetes or OpenShift cluster. We need a tool that doesn't require a deep understanding of Kubernetes or even Podman. Here comes odo.

Create and Manage Devfiles

In the first step, we need to configure our project. Odo is based on Devfile, an open standard for defining containerized development environments. Once we execute the odo init command inside the app root directory and choose the app type, it generates the devfile.yaml automatically. Of course, we can also keep the devfile.yaml in the Git repo. Thanks to that, we don't need to initialize the odo configuration after cloning the repo. That's the case in my repository.

After generating the devfile, we need to change it a little bit. First of all, I switched to the Java 21 base image instead of the Java 17 image generated by the odo init command. We will also add the environment variables used by the Spring Boot app to establish a connection with the Postgres database. Here's the updated fragment of the devfile.yaml responsible for running the app container.

components:
- container:
    command:
    - tail
    - -f
    - /dev/null
    endpoints:
    - name: http-springboot
      targetPort: 8080
    - exposure: none
      name: debug
      targetPort: 5858
    env:
    - name: DEBUG_PORT
      value: "5858"
    - name: DATABASE_HOST
      value: localhost
    - name: DATABASE_USER
      value: springboot
    - name: DATABASE_PASS
      value: springboot123
    - name: DATABASE_NAME
      value: sampledb
    image: registry.access.redhat.com/ubi9/openjdk-21:latest

I also included an additional container with the Postgres database. Thanks to that, odo will not only build and run the app container but also the container with the database required by the app. We use the registry.redhat.io/rhel9/postgresql-15 image from the official Red Hat registry. We can set a default username, password, and database using the POSTGRESQL_* environment variables supported by the Red Hat Postgres image.

- name: postgresql
  container:
    image: registry.redhat.io/rhel9/postgresql-15
    env:
      - name: POSTGRESQL_USER
        value: springboot
      - name: POSTGRESQL_PASSWORD
        value: springboot123
      - name: POSTGRESQL_DATABASE
        value: sampledb
    endpoints:
      - name: postgresql
        exposure: internal
        targetPort: 5432
        attributes:
          discoverable: 'true'
    memoryLimit: 512Mi
    mountSources: true
    volumeMounts:
      - name: postgresql-storage
        path: /var/lib/postgresql/data
- name: postgresql-storage
  volume:
    size: 256Mi

Here’s the whole devfile.yaml after our customizations. Of course, you can find it inside the GitHub repository.

commands:
- exec:
    commandLine: mvn clean -Dmaven.repo.local=/home/user/.m2/repository package -Dmaven.test.skip=true
    component: tools
    group:
      isDefault: true
      kind: build
    workingDir: ${PROJECT_SOURCE}
  id: build
- exec:
    commandLine: mvn -Dmaven.repo.local=/home/user/.m2/repository spring-boot:run
    component: tools
    group:
      isDefault: true
      kind: run
    workingDir: ${PROJECT_SOURCE}
  id: run
- exec:
    commandLine: java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar target/*.jar
    component: tools
    group:
      isDefault: true
      kind: debug
    workingDir: ${PROJECT_SOURCE}
  id: debug
components:
- container:
    command:
    - tail
    - -f
    - /dev/null
    endpoints:
    - name: http-springboot
      targetPort: 8080
    - exposure: none
      name: debug
      targetPort: 5858
    env:
    - name: DEBUG_PORT
      value: "5858"
    - name: DATABASE_HOST
      value: localhost
    - name: DATABASE_USER
      value: springboot
    - name: DATABASE_PASS
      value: springboot123
    - name: DATABASE_NAME
      value: sampledb
    image: registry.access.redhat.com/ubi9/openjdk-21:latest
    memoryLimit: 768Mi
    mountSources: true
    volumeMounts:
    - name: m2
      path: /home/user/.m2
  name: tools
- name: postgresql
  container:
# uncomment for Kubernetes
#    image: postgres:15
#    env:
#      - name: POSTGRES_USER
#        value: springboot
#      - name: POSTGRES_PASSWORD
#        value: springboot123
#      - name: POSTGRES_DB
#        value: sampledb
    image: registry.redhat.io/rhel9/postgresql-15
    env:
      - name: POSTGRESQL_USER
        value: springboot
      - name: POSTGRESQL_PASSWORD
        value: springboot123
      - name: POSTGRESQL_DATABASE
        value: sampledb
    endpoints:
      - name: postgresql
        exposure: internal
        targetPort: 5432
        attributes:
          discoverable: 'true'
    memoryLimit: 512Mi
    mountSources: true
    volumeMounts:
      - name: postgresql-storage
        path: /var/lib/postgresql/data
- name: postgresql-storage
  volume:
    size: 256Mi
- name: m2
  volume:
    size: 3Gi
metadata:
  description: Java application using Spring Boot® and OpenJDK 21
  displayName: Spring Boot®
  globalMemoryLimit: 2674Mi
  icon: https://raw.githubusercontent.com/devfile-samples/devfile-stack-icons/main/spring.svg
  language: Java
  name: person-service
  projectType: springboot
  tags:
  - Java
  - Spring
  version: 1.3.0
schemaVersion: 2.1.0

Prepare Development Environment with Podman

We have everything ready on the application side. Now, it is time to prepare a dev environment. Let's run Podman Desktop. Podman Desktop comes with several useful features that simplify interaction with Kubernetes or OpenShift clusters for developers. It provides plugins that allow us to install OpenShift Local, Minikube, or Kind on the laptop. We can also leverage a remote instance of OpenShift on the Developer Sandbox portal. It is active for 30 days and may be renewed.

[Image: odo-podman-kubernetes-desktop]


Podman has one important advantage over Docker: it supports the pod concept. Suppose, for example, that we have a container that requires a Postgres container. With Podman we do not need to bind that database to a routable network. Inside a pod, the database just binds to the localhost address, and all containers in that pod can connect to it because they share a network namespace. Thanks to that, switching between Podman and Kubernetes is very simple with odo. In fact, we don't need to change anything in the configuration.

For running the Kubernetes cluster on the local machine, we will use Minikube. You can install and run Minikube by yourself or create it using Podman Desktop. With Podman Desktop, we need to go to the Settings -> Resources section. Then find the Minikube tile as shown below and click the "Create new …" button.

[Image: odo-podman-kubernetes-minikube]

Podman Desktop then displays the creation form. Choose your preferred settings and click Create.

In order to create an OpenShift cluster, we will use the managed service available on the Red Hat Developer Sandbox website. There, choose "Start your Sandbox for free".

You need to create an account on Red Hat Developer or sign in if you already have one. After that, you will be redirected to the Red Hat Hybrid Cloud Console, where you should choose the Launch option on the "Red Hat OpenShift" tile.

In the OpenShift Console click on your username in the top-right corner and choose “Copy login command”.

[Image: odo-podman-kubernetes-ocp]

Then, we can go back to Podman Desktop and paste the copied command in the Settings -> Resources -> Developer Sandbox section.

Deploy App with Database on Podman

Finally, let’s move from theory to practice. Assuming we have Podman running on our laptop, we can deploy our app with odo there by executing the following command:

$ odo dev --platform podman

The odo dev command deploys the app on the target environment and then waits for changes in the source files. Once a change occurs, it redeploys the app.

The odo command exposes both the app HTTP port and the Postgres port outside Podman with port forwarding. In my case, the Spring Boot app is available on local port 20002. You can open the Swagger UI and call some endpoints to test the functionality.

[Image: odo-podman-kubernetes-swagger]

Our app container can connect to the database over localhost, because both containers are running inside a single pod.

[Image: odo-podman-kubernetes-pods]

Deploy App with a Database on Kubernetes

Next, let's run our app on a Kubernetes cluster. In this exercise, we will use Minikube. We can easily create and run a Minikube instance using the Podman Desktop plugin.

Once we create such an instance, the Kubernetes context is switched automatically. We can verify it in Podman Desktop.

Let's deploy our app on Minikube. If we don't pass the --platform option, odo deploys to the default cluster context.

$ odo create namespace springboot
$ odo dev

Here's the command result for my Minikube instance. Thanks to automatic port forwarding, we can access the app exactly the same way as before with Podman.

We can display a list of running pods with the following command:

kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
person-service-app-b69bf8f7d-hk555   2/2     Running   0          2m50s

As you can see, the pod contains two containers: our app and the Postgres database. All the required YAML is generated automatically based on the devfile (including, for example, the environment variables). The only thing I changed for Kubernetes is the Postgres image used by odo. Instead of the image from the Red Hat registry, we are just using the official Postgres image from Docker Hub.

- name: postgresql
  container:
    image: postgres:15
    env:
      - name: POSTGRES_USER
        value: springboot
      - name: POSTGRES_PASSWORD
        value: springboot123
      - name: POSTGRES_DB
        value: sampledb

Deploy App with a Database on OpenShift

We can deploy the sample Spring Boot app on OpenShift exactly the same way as on Kubernetes. The only thing that changes is the Postgres image. This time we go back to the configuration used when deploying to Podman. Instead of the image from Docker Hub, we will use the registry.redhat.io/rhel9/postgresql-15 image.

As I mentioned before, we will use the remote OpenShift cluster on the Developer Sandbox. Podman Desktop provides a plugin for the Developer Sandbox. With that plugin, we can map the OpenShift context to a specific name like dev-sandbox-context.

[Image: odo-podman-kubernetes-developer-sandbox]

Then we can switch to the Kubernetes context related to the Developer Sandbox using Podman Desktop.

Finally, let’s run the app on the cluster with the following command:

$ odo dev

Here’s the output after running the odo dev command:

[Image: odo-podman-kubernetes-dev]

We can verify that a pod is running on the OpenShift cluster. Just go to the Workloads -> Pods section in the OpenShift Console.

[Image: odo-podman-kubernetes-openshift-console]

Thanks to automatic port forwarding, we can access the app on a local port. However, we can also expose the service outside OpenShift with a Route object. First, let's display a list of Kubernetes services using Podman Desktop.

Then, we need to create the Route object with the following command:

$ oc expose svc/person-service-app 

Here's our Route visible in the OpenShift Console. To access the app, we open the route's address in the web browser.

[Image: odo-podman-kubernetes-openshift-route]

Finally, let's access the app's Swagger UI using the exposed URL.

Final Thoughts

With Podman and the odo CLI, we can configure our development workspace and easily run apps across different containerized and Kubernetes-native environments. Thanks to the Devfile standard, odo runs the app the same way on Podman, Kubernetes, and OpenShift. And we can control the whole process using Podman Desktop.

Java Development on OpenShift with odo

Piotr's TechBlog, 5 February 2021
OpenShift Do (odo) is a CLI tool for running applications on OpenShift. Unlike the oc client, it is a tool aimed at developers. It automates all the things required to deploy your application on OpenShift. Thanks to odo you can focus on the most important aspect – code. In order to start, you just need to download the latest release from GitHub and add it to your path.

Source Code

If you would like to try it out yourself, you can take a look at my source code: just clone my GitHub repository. It contains the simple Spring Boot application we are going to deploy on OpenShift. Then you can follow my instructions.

Before we begin with OpenShift odo

It is important to understand some key concepts related to odo before we begin. If you want to deploy a new application, you should create a component. Each component can be run and deployed separately. There are two types of components: Devfile and S2I. The S2I component is based on the Source-To-Image process, so your application is built on the server side with the S2I builder. With the Devfile component, you run the application with the mvn spring-boot:run command. Here's the list of available components. We will use the java component.

[Image: openshift-odo-component-list]

You can also check out the list of services by executing the odo catalog list services command. A service is a piece of software that your component links to or depends on. However, I won't focus on that feature here. Currently, the Service Catalog is deprecated in OpenShift. Instead, you can use operators with odo the same way you would use templates.

The Spring Boot application

Let's take a moment to discuss our sample Spring Boot application. It is a REST-based application, which connects to MongoDB. We will use JDK 11 for compilation.

<groupId>pl.piomin.samples</groupId>
<artifactId>sample-spring-boot-on-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <java.version>11</java.version>
</properties>

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb</artifactId>
   </dependency>
   ...
</dependencies>

The MongoDB connection settings are set inside application.yml. We use environment variables to provide the credentials and database name. We will set these values in the local configuration using odo.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}
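
For illustration, here is a small sketch (plain Java, not part of the app) of the final connection string Spring Boot builds from the ${…} placeholders, using the example values we will configure with odo:

```java
public class MongoUriDemo {
    public static void main(String[] args) {
        // Example values matching the odo config entries shown later in this article
        String user = "userJ5Q";          // MONGO_USERNAME
        String pass = "UrfgtUKohNOFVqbQ"; // MONGO_PASSWORD
        String db = "sampledb";           // MONGO_DATABASE
        // Mirrors the uri template in application.yml
        String uri = String.format("mongodb://%s:%s@mongodb/%s", user, pass, db);
        System.out.println(uri); // prints mongodb://userJ5Q:UrfgtUKohNOFVqbQ@mongodb/sampledb
    }
}
```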

Creating configuration with odo

OK, let's begin by creating a new component. We need to choose the java S2I type and then set a name for our component.

$ odo create java sample-spring-boot

After that, odo creates a config.yaml file inside the .odo directory in the project root. It's pretty simple.

kind: LocalConfig
apiversion: odo.dev/v1alpha1
ComponentSettings:
  Type: java
  SourceLocation: ./
  SourceType: local
  Ports:
  - 8443/TCP
  - 8778/TCP
  - 8080/TCP
  Application: app
  Project: pminkows-workshop
  Name: sample-spring-boot

Then, we will add some additional configuration properties. First, we need to expose our application outside the cluster. The command visible below creates a Route to the Service on OpenShift on port 8080.

$ odo url create --port 8080

In the next step, we add the environment variables responsible for establishing the MongoDB connection to the odo configuration: MONGO_USERNAME, MONGO_PASSWORD, and MONGO_DATABASE. To simplify things, I created a standalone instance of MongoDB on OpenShift using a template. The values come from the mongodb secret.

Let's set the environment variables using the odo config command. Of course, all these settings are still stored only in the local configuration.

$ odo config set --env MONGO_USERNAME=userJ5Q
$ odo config set --env MONGO_PASSWORD=UrfgtUKohNOFVqbQ
$ odo config set --env MONGO_DATABASE=sampledb 

Finally, we are ready to deploy our application on OpenShift. To do that, we need to execute the odo push command. If you would like to see the logs from the build, add the --show-log option.

[Image: openshift-odo-push]

Instead of using environment variables to provide MongoDB connection settings, you may take advantage of the odo link command. This feature connects an odo component to a service or another component. However, to use it you first need to install the Service Binding Operator on OpenShift. Then you may install a database like MongoDB with an operator and link it to your application component.

Verify S2I build on OpenShift

After running odo push, let's switch to the oc client. Our application is running on OpenShift as shown below. The same applies to MongoDB.

We can also navigate to the OpenShift Management Console. The environment variables set in the odo configuration have been applied to the DeploymentConfig.

Now, let's say we need to make some changes in our code. Fortunately, we can execute a command that watches for changes in the current component's directory. After detecting such a change, the new version of the application is immediately deployed on OpenShift.

$ odo watch
Waiting for something to change in /Users/pminkows/IdeaProjects/sample-spring-boot-on-kubernetes

Using OpenShift odo in Devfile mode

As opposed to the S2I approach, we may use the Devfile mode. In order to do that, we will choose the java-springboot component.

As a result, OpenShift odo creates devfile.yaml in the project root directory.

schemaVersion: 2.0.0
metadata:
  name: java-springboot
  version: 1.1.0
starterProjects:
  - name: springbootproject
    git:
      remotes:
        origin: "https://github.com/odo-devfiles/springboot-ex.git"
components:
  - name: tools
    container:
      image: quay.io/eclipse/che-java11-maven:nightly
      memoryLimit: 768Mi
      mountSources: true
      endpoints:
      - name: '8080-tcp'
        targetPort: 8080
      volumeMounts:
        - name: m2
          path: /home/user/.m2
  - name: m2
    volume:
      size: 3Gi
commands:
  - id: build
    exec:
      component: tools
      commandLine: "mvn clean -Dmaven.repo.local=/home/user/.m2/repository package -Dmaven.test.skip=true"
      group:
        kind: build
        isDefault: true
  - id: run
    exec:
      component: tools
      commandLine: "mvn -Dmaven.repo.local=/home/user/.m2/repository spring-boot:run"
      group:
        kind: run
        isDefault: true
  - id: debug
    exec:
      component: tools
      commandLine: "java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar target/*.jar"
      group:
        kind: debug
        isDefault: true

Of course, all the next steps are the same as for the S2I component.

Install the OpenShift IntelliJ Plugin

If you do not like command-line tools, you may install the IntelliJ OpenShift plugin as well. It uses odo for building and deploying. The good news is that the latest version of this plugin supports odo version 2.0.3.

[Image: openshift-odo-intellij]

Thanks to that plugin you can, for example, easily create a component with OpenShift odo.

Conclusion

With odo you can deploy your application on OpenShift in a few seconds. You may also continuously watch for changes in the code and immediately deploy a new version of the application. Moreover, you don't need to create any Kubernetes YAML manifests.

OpenShift odo is a tool similar to Skaffold. If you would like to compare these two tools for running a Spring Boot application, you may read my article about Skaffold.
