Spring Boot Built-in API Versioning – Piotr's TechBlog
https://piotrminkowski.com/2025/12/01/spring-boot-built-in-api-versioning/
Mon, 01 Dec 2025

This article explains how to use Spring Boot's built-in API versioning feature to expose different versions of REST endpoints. It is one of the most interesting updates introduced with Spring Boot 4. API versioning can also be implemented using Spring Web's standard REST API capabilities; if you're interested in that approach, check out my somewhat outdated article on the subject here.

Interestingly, the Micronaut framework also provides built-in API versioning. You can read more about it in the framework’s documentation here.

Source Code

Feel free to use my source code if you'd like to try it out yourself. To do that, clone my sample GitHub repository, then simply follow my instructions.

Introduction

The Spring Boot example application discussed in this article features two versions of the data model returned by the API. Below is the basic structure of the Person object, which is shared across all API versions.

public abstract class Person {

	private Long id;
	private String name;
	private Gender gender;

	public Person() {

	}
	
	public Person(Long id, String name, Gender gender) {
		this.id = id;
		this.name = name;
		this.gender = gender;
	}

	public Long getId() {
		return id;
	}

	public void setId(Long id) {
		this.id = id;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	public Gender getGender() {
		return gender;
	}

	public void setGender(Gender gender) {
		this.gender = gender;
	}

}

Our scenario assumes that the API returns a person's age in two different ways. This is a somewhat contrived setup, but it is exactly what we want to examine. In the first representation, the returned JSON contains the birth date; in the second, it contains an age field. Below is the PersonOld object implementing the first approach.

@Schema(name = "Person")
public class PersonOld extends Person {

	@JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd")
	private LocalDate birthDate;

	public PersonOld() {

	}	
	
	public PersonOld(Long id, String name, Gender gender, LocalDate birthDate) {
		super(id, name, gender);
		this.birthDate = birthDate;
	}

	public LocalDate getBirthDate() {
		return birthDate;
	}

	public void setBirthDate(LocalDate birthDate) {
		this.birthDate = birthDate;
	}

}

Here, we see the PersonCurrent object, which contains the age field instead of the previously used birthDate.

@Schema(name = "Person")
public class PersonCurrent extends Person {

	private int age;

	public PersonCurrent() {

	}

	public PersonCurrent(Long id, String name, Gender gender, int age) {
		super(id, name, gender);
		this.age = age;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

}
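Converting between the two representations is just a matter of deriving age from birthDate. In the sample repository this is the job of the PersonMapper class; the helper below is a hypothetical, simplified sketch of that conversion using java.time.Period (the class and method names are mine, not the repository's):

```java
import java.time.LocalDate;
import java.time.Period;

// Hypothetical, simplified equivalent of the repository's PersonMapper:
// derives the v1.2 "age" field from the v1.0/v1.1 "birthDate" field.
public class PersonAgeMapper {

    // Whole years between birthDate and the given reference date.
    static int toAge(LocalDate birthDate, LocalDate today) {
        return Period.between(birthDate, today).getYears();
    }

    public static void main(String[] args) {
        // John Smith from the curl examples below: born 1977-01-20.
        System.out.println(toAge(LocalDate.of(1977, 1, 20), LocalDate.of(2025, 12, 1)));
    }
}
```

Passing LocalDate.now() as the reference date gives the person's current age.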

Design API for Versioning with Spring Boot

API Methods

Now we can design an API that supports different versions of the data model and, at the same time, two distinct versioning methods. The first method uses an HTTP header, the second the request path. For clarity, below is a table of REST API methods for HTTP header-based versioning.

Method type | Method path   | Description
POST        | /persons      | Add a new person, v1.2 for PersonCurrent, v1.[0-1] for PersonOld
PUT         | /persons/{id} | Update a person, v1.2 for PersonCurrent, v1.1 for PersonOld
DELETE      | /persons/{id} | Delete a person
GET         | /persons/{id} | Find a person by ID, v1.2 for PersonCurrent

Here, in turn, is a table for versioning based on the request path.

Method type | Method path                          | Description
POST        | /persons/v1.0, /persons/v1.1         | Add a new person (PersonOld)
POST        | /persons/v1.2                        | Add a new person (PersonCurrent)
PUT         | /persons/v1.0                        | Update a person, v1.0 deprecated
PUT         | /persons/v1.1/{id}                   | Update a person with ID (PersonOld)
PUT         | /persons/v1.2/{id}                   | Update a person with ID (PersonCurrent)
DELETE      | /persons/v1.0, /persons/v1.1, …      | Delete a person
GET         | /persons/v1.0/{id}, /persons/v1.1/{id} | Find a person by ID, v1.[0-1] for PersonOld
GET         | /persons/v1.2/{id}                   | Find a person by ID, v1.2 for PersonCurrent

Spring Boot Implementation

To enable the built-in API versioning mechanism in Spring Web MVC, use the spring.mvc.apiversion.* properties. The following configuration enables both of the API versioning methods mentioned above. For the header-based method, we set the header name; for testing purposes it is api-version. For request path versioning, we must set the index of the path segment that carries the version. In our case it is 1, because the version is read from the segment after the 0th element of the path, which is persons. Please note that both versioning methods are activated here only for demonstration purposes; typically, you should select and use a single API versioning method.

spring:
  mvc:
    apiversion:
      default: v1.0
      use:
        header: api-version
        path-segment: 1
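To see what path segment index 1 means, split an example request path into segments: for /persons/v1.1/1, segment 0 is persons and segment 1 is the version. A quick illustration (this is my own demo, not Spring's internal code):

```java
// Illustration of the "path-segment: 1" setting: the version is read
// from the second segment of the request path (index 1).
public class PathSegmentDemo {

    static String segment(String path, int index) {
        // Drop the leading slash, then split the path into its segments.
        String[] segments = path.replaceFirst("^/", "").split("/");
        return segments[index];
    }

    public static void main(String[] args) {
        System.out.println(segment("/persons/v1.1/1", 1));
    }
}
```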

Let's continue by implementing the API controllers. We use a separate @RestController for each versioning method. In each annotation that specifies an HTTP method, we can now include the version attribute. The mechanism maps the api-version header to the version attribute in the annotation. We can use syntax like v1.0+ to match version v1.0 or higher.

@RestController
@RequestMapping("/persons-via-headers")
public class PersonControllerWithHeaders {

	@Autowired
	PersonMapper mapper;
	@Autowired
	PersonRepository repository;

	@PostMapping(version = "v1.0+")
	public PersonOld add(@RequestBody PersonOld person) {
		return (PersonOld) repository.add(person);
	}

	@PostMapping(version = "v1.2")
	public PersonCurrent add(@RequestBody PersonCurrent person) {
		return (PersonCurrent) repository.add(person);
	}
	
	@PutMapping(version = "v1.0")
	@Deprecated
	public PersonOld update(@RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}
	
	@PutMapping(value = "/{id}", version = "v1.1")
	public PersonOld update(@PathVariable("id") Long id, @RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}
	
	@PutMapping(value = "/{id}", version = "v1.2")
	public PersonCurrent update(@PathVariable("id") Long id, @RequestBody PersonCurrent person) {
		return mapper.map((PersonOld) repository.update(person));
	}
	
	@GetMapping(value = "/{id}", version = "v1.0+")
	public PersonOld findByIdOld(@PathVariable("id") Long id) {
		return (PersonOld) repository.findById(id);
	}

	@GetMapping(value = "/{id}", version = "v1.2")
	public PersonCurrent findById(@PathVariable("id") Long id) {
		return mapper.map((PersonOld) repository.findById(id));
	}
	
	@DeleteMapping("/{id}")
	public void delete(@PathVariable("id") Long id) {
		repository.delete(id);
	}
	
}
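The v1.0+ notation used in the mappings above matches request version v1.0 or anything higher, with the most specific mapping winning when several match. As a rough illustration of the comparison semantics (this is my own sketch, not Spring's actual resolution code):

```java
// Rough illustration of how "v1.0+" style matching could be evaluated:
// parse "vMAJOR.MINOR" and compare numerically.
public class VersionMatchDemo {

    static int[] parse(String version) {
        String[] parts = version.replaceFirst("^v", "").split("\\.");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    // Does the request version satisfy a "baseline+" mapping?
    static boolean matchesPlus(String baseline, String requestVersion) {
        int[] b = parse(baseline);
        int[] r = parse(requestVersion);
        return r[0] > b[0] || (r[0] == b[0] && r[1] >= b[1]);
    }

    public static void main(String[] args) {
        System.out.println(matchesPlus("v1.0", "v1.1")); // v1.1 satisfies v1.0+
        System.out.println(matchesPlus("v1.2", "v1.1")); // v1.1 does not satisfy v1.2+
    }
}
```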

Then, we can implement a similar approach, but this time based on the request path. Here’s our @RestController.

@RestController
@RequestMapping("/persons")
public class PersonController {

	@Autowired
	PersonMapper mapper;
	@Autowired
	PersonRepository repository;

	@PostMapping(value = "/{version}", version = "v1.0+")
	public PersonOld add(@RequestBody PersonOld person) {
		return (PersonOld) repository.add(person);
	}

	@PostMapping(value = "/{version}", version = "v1.2")
	public PersonCurrent add(@RequestBody PersonCurrent person) {
		return (PersonCurrent) repository.add(person);
	}
	
	@PutMapping(value = "/{version}", version = "v1.0")
	@Deprecated
	public PersonOld update(@RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}
	
	@PutMapping(value = "/{version}/{id}", version = "v1.1")
	public PersonOld update(@PathVariable("id") Long id, @RequestBody PersonOld person) {
		return (PersonOld) repository.update(person);
	}
	
	@PutMapping(value = "/{version}/{id}", version = "v1.2")
	public PersonCurrent update(@PathVariable("id") Long id, @RequestBody PersonCurrent person) {
		return mapper.map((PersonOld) repository.update(person));
	}
	
	@GetMapping(value = "/{version}/{id}", version = "v1.0+")
	public PersonOld findByIdOld(@PathVariable("id") Long id) {
		return (PersonOld) repository.findById(id);
	}
	
	@GetMapping(value = "/{version}/{id}", version = "v1.2")
	public PersonCurrent findById(@PathVariable("id") Long id) {
		return mapper.map((PersonOld) repository.findById(id));
	}
	
	@DeleteMapping(value = "/{version}/{id}", version = "v1.0+")
	public void delete(@PathVariable("id") Long id) {
		repository.delete(id);
	}
	
}

Let’s start our application using the command below.

mvn spring-boot:run

We can test the REST endpoints of the path-based controller using the following curl commands. Below are the calls and the expected results.

$ curl http://localhost:8080/persons/v1.1/1
{"id":1,"name":"John Smith","gender":"MALE","birthDate":"1977-01-20"}

$ curl http://localhost:8080/persons/v1.2/1
{"id":1,"name":"John Smith","gender":"MALE","age":48}

$ curl -X POST http://localhost:8080/persons/v1.0 -d "{\"id\":1,\"name\":\"John Smith\",\"gender\":\"MALE\",\"birthDate\":\"1977-01-20\"}" -H "Content-Type: application/json"
{"id":6,"name":"John Smith","gender":"MALE","birthDate":"1977-01-20"}

$ curl -X POST http://localhost:8080/persons/v1.2 -d "{\"name\":\"John Smith\",\"gender\":\"MALE\",\"age\":40}" -H "Content-Type: application/json"
{"id":7,"name":"John Smith","gender":"MALE","age":40}

Testing API versioning with Spring Boot REST client

Importantly, Spring also offers support for versioning on the HTTP client side. This applies to both RestClient and WebClient, as well as their testing counterparts. I don't know if you've had a chance to use RestTestClient in your tests yet. After initializing the client instance, set the versioning method using apiVersionInserter. Then, when calling a given HTTP method, you can set the version by calling apiVersion(...) with the version number as an argument. Below is a class that tests versioning using an HTTP header.

@SpringBootTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerWithHeadersTests {

    private WebApplicationContext context;
    private RestTestClient restTestClient;

    @BeforeEach
    public void setup(WebApplicationContext context) {
        restTestClient = RestTestClient.bindToApplicationContext(context)
                .baseUrl("/persons-via-headers")
                .apiVersionInserter(ApiVersionInserter.useHeader("api-version"))
                .build();
    }

    @Test
    @Order(1)
    void addV0() {
        restTestClient.post()
                .body(Instancio.create(PersonOld.class))
                .apiVersion("v1.0")
                .exchange()
                .expectStatus().is2xxSuccessful()
                .expectBody(PersonOld.class)
                .value(personOld -> assertNotNull(personOld.getId()));
    }

    @Test
    @Order(2)
    void addV2() {
        restTestClient.post()
                .body(Instancio.create(PersonCurrent.class))
                .apiVersion("v1.2")
                .exchange()
                .expectStatus().is2xxSuccessful()
                .expectBody(PersonCurrent.class)
                .value(personCurrent -> assertNotNull(personCurrent.getId()))
                .value(personCurrent -> assertTrue(personCurrent.getAge() > 0));
    }

    @Test
    @Order(3)
    void findByIdV0() {
        restTestClient.get()
                .uri("/{id}", 1)
                .apiVersion("v1.0")
                .exchange()
                .expectStatus().is2xxSuccessful()
                .expectBody(PersonOld.class)
                .value(personOld -> assertNotNull(personOld.getId()));
    }

    @Test
    @Order(3)
    void findByIdV2() {
        restTestClient.get()
                .uri("/{id}", 2)
                .apiVersion("v1.2")
                .exchange()
                .expectStatus().is2xxSuccessful()
                .expectBody(PersonCurrent.class)
                .value(personCurrent -> assertNotNull(personCurrent.getId()))
                .value(personCurrent -> assertTrue(personCurrent.getAge() > 0));
    }

    @Test
    @Order(3)
    void findByIdV2ToV1Compatibility() {
        restTestClient.get()
                .uri("/{id}", 1)
                .apiVersion("v1.2")
                .exchange()
                .expectStatus().is2xxSuccessful()
                .expectBody(PersonCurrent.class)
                .value(personCurrent -> assertNotNull(personCurrent.getId()))
                .value(personCurrent -> assertTrue(personCurrent.getAge() > 0));
    }
}

And here are similar tests, but this time for versioning based on the request path.

@SpringBootTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

    private WebApplicationContext context;
    private RestTestClient restTestClient;

    @BeforeEach
    public void setup(WebApplicationContext context) {
        restTestClient = RestTestClient.bindToApplicationContext(context)
                .baseUrl("/persons")
                .apiVersionInserter(ApiVersionInserter.usePathSegment(1))
                .build();
    }

    @Test
    @Order(1)
    void addV0() {
        restTestClient.post()
                .apiVersion("v1.1")
                .body(Instancio.create(PersonOld.class))
                .exchange()
                .expectBody(PersonOld.class)
                .value(personOld -> assertNotNull(personOld.getId()));
    }

    @Test
    @Order(2)
    void addV2() {
        restTestClient.post()
                .apiVersion("v1.2")
                .body(Instancio.create(PersonCurrent.class))
                .exchange()
                .expectBody(PersonCurrent.class)
                .value(personCurrent -> assertNotNull(personCurrent.getId()))
                .value(personCurrent -> assertTrue(personCurrent.getAge() > 0));
    }

    @Test
    @Order(3)
    void findByIdV0() {
        restTestClient.get().uri("/{id}", 1)
                .apiVersion("v1.0")
                .exchange()
                .expectBody(PersonOld.class)
                .value(personOld -> assertNotNull(personOld.getId()));
    }

    @Test
    @Order(3)
    void findByIdV2() {
        restTestClient.get().uri("/{id}", 2)
                .apiVersion("v1.2")
                .exchange()
                .expectBody(PersonCurrent.class)
                .value(personCurrent -> assertNotNull(personCurrent.getId()))
                .value(personCurrent -> assertTrue(personCurrent.getAge() > 0));
    }

    @Test
    @Order(3)
    void findByIdV2ToV1Compatibility() {
        restTestClient.get().uri("/{id}", 1)
                .apiVersion("v1.2")
                .exchange()
                .expectBody(PersonCurrent.class)
                .value(personCurrent -> assertNotNull(personCurrent.getId()))
                .value(personCurrent -> assertTrue(personCurrent.getAge() > 0));
    }

    @Test
    @Order(4)
    void delete() {
        restTestClient.delete().uri("/{id}", 5)
                .apiVersion("v1.2")
                .exchange()
                .expectStatus().is2xxSuccessful();
    }
}

Here are my test results.

[Screenshot: spring-boot-api-versioning test results]

OpenAPI for Spring Boot API versioning

I also checked what API versioning support looks like on the Springdoc side. This project provides an OpenAPI implementation for Spring MVC. For Spring Boot 4, we must use at least version 3.0.0 of Springdoc.

<dependency>
  <groupId>org.springdoc</groupId>
  <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
  <version>3.0.0</version>
</dependency>

My goal was to divide the API into groups based on the version for the path segment approach. Unfortunately, attempting this type of implementation results in an HTTP 400 response for both the /v3/api-docs and /swagger-ui.html URLs. That's why I created an issue in the Springdoc GitHub repository here. Once the problem is fixed, or the maintainers explain what I should improve in my implementation, I'll update the article.

	@Bean
	public GroupedOpenApi personApiViaHeaders() {
		return GroupedOpenApi.builder()
				.group("person-via-headers")
				.pathsToMatch("/persons-via-headers/**")
				.build();
	}

	@Bean
	public GroupedOpenApi personApi10() {
		return GroupedOpenApi.builder()
				.group("person-api-1.0")
				.pathsToMatch("/persons/v1.0/**")
				.build();
	}

	@Bean
	public GroupedOpenApi personApi11() {
		return GroupedOpenApi.builder()
				.group("person-api-1.1")
				.pathsToMatch("/persons/v1.1/**")
				.build();
	}

	@Bean
	public GroupedOpenApi personApi12() {
		return GroupedOpenApi.builder()
				.group("person-api-1.2")
				.pathsToMatch("/persons/v1.2/**")
				.build();
	}

Conclusion

Built-in API versioning support is one of the main features in Spring Boot 4, and it works very smoothly. Importantly, API versioning is supported on both the server and client sides. We can also easily integrate it into JUnit tests with RestTestClient and WebTestClient. This article demonstrates the Spring MVC implementation, but you can also use built-in API versioning in Spring Boot applications based on the reactive WebFlux stack.

Micro Frontend with React – Piotr's TechBlog
https://piotrminkowski.com/2022/10/11/micro-frontend-with-react/
Tue, 11 Oct 2022
In this article, you will learn how to build micro-frontend apps using React. It is quite an uncommon article for my blog, since I usually write about Java, Spring Boot, or Kubernetes. However, sometimes you may want to build a nice-looking frontend for your backend written e.g. in Spring Boot. In this article, you will find a recipe for that. Our app will perform some basic CRUD operations and communicate with the Spring Boot backend over a REST API. I'll focus on simplifying your experience with React by showing you which libraries to choose and how to use them. Let's begin.

If you are also interested in Spring Boot and microservices, you can read my article about best practices for building microservices using the Spring Boot framework.

Source Code

If you would like to try this exercise yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time there are two apps, since we have a backend and a frontend. If you would like to run the Spring Boot app directly from the code, also clone the following repository. After that, just follow my instructions.

Prerequisites

Before we begin, we need to install some tools. Of course, you need npm to build and run our React app. I used npm version 8.19.2. In order to run the Spring Boot backend app locally, you should have Docker, or Maven with a JDK.

Assuming you have Maven and JDK and you want to run it directly from the code just execute the following command:

$ mvn spring-boot:run

With Docker just run the app using the latest image from my registry:

$ docker run -d --name sample-spring-boot -p 8080:8080 \
  piomin/sample-spring-kotlin-microservice:latest

After running the Spring Boot app you can display the list of available REST endpoints by opening the Swagger UI page http://localhost:8080/swagger-ui.html.

Micro Frontend with React – Architecture

Here's our architecture. First, we run the Spring Boot app and expose it on local port 8080. Then we run the React app, which listens on port 3000 and communicates with the backend over the REST API.

[Diagram: micro-frontend-react architecture]

We will use the following React libraries:

  • MUI (Material UI for React) – React UI components, which implement Google’s Material Design 
  • React Redux – an implementation of Redux JS for React to centralize the state of apps using the store component 
  • Redux Saga – an intuitive Redux side effect manager that allows us to dispatch an action asynchronously and connect to the Redux store
  • Axios – the promise-based HTTP client for the browser and node.js
  • React Router – declarative, client-side routing for React

Later, I will show you how those libraries help to organize the project. For now, let's just take a look at the structure of our source code. There are three components: Home displays the list of all persons, AddPerson allows adding a new person, and GetPerson displays the details of a selected person.

[Screenshot: micro-frontend-react project structure]

Let’s take a look at our package.json file.

{
  "name": "react",
  "version": "1.0.0",
  "description": "React Micro Frontend",
  "keywords": [
    "react",
    "starter"
  ],
  "main": "src/index.js",
  "dependencies": {
    "@emotion/react": "11.10.4",
    "@emotion/styled": "11.10.4",
    "@mui/material": "5.10.8",
    "@mui/x-data-grid": "latest",
    "axios": "1.1.2",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "react-redux": "8.0.2",
    "react-router-dom": "6.4.2",
    "react-scripts": "5.0.1",
    "redux": "4.2.0",
    "redux-saga": "1.2.1"
  },
  "devDependencies": {
    "@babel/runtime": "7.13.8",
    "typescript": "4.1.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  },
  "browserslist": [
    ">0.2%",
    "not dead",
    "not ie <= 11",
    "not op_mini all"
  ]
}

We can create React apps using two different approaches: the first is based on functional components, while the second is based on class components. I won't compare them, since these are React basics and you can read more about them in tutorials. I'll choose the first approach, based on functions.

Communicate with the Backend over REST API

Let's start unusually, with the REST client implementation. We use the Axios library for communication over HTTP and redux-saga for watching and propagating events (actions). For each type of action, there are two functions. The "watch" function waits for a dispatched action and then calls another function that performs the HTTP call. All the "watch" functions are our sagas (in fact, they implement the popular saga pattern), so we need to export them outside the module. The Axios client is pretty intuitive: we can, for example, call a GET endpoint without any parameters or POST a JSON payload. Here's the implementation available in the sagas/index.js file.

import { call, put, takeEvery, all } from "redux-saga/effects";
import axios from "axios";
import { 
  ADD_PERSON, 
  ADD_PERSON_FAILURE, 
  ADD_PERSON_SUCCESS, 
  GET_ALL_PERSONS, 
  GET_ALL_PERSONS_FAILURE, 
  GET_ALL_PERSONS_SUCCESS, 
  GET_PERSON_BY_ID, 
  GET_PERSON_BY_ID_FAILURE, 
  GET_PERSON_BY_ID_SUCCESS } from "../actions/types";

const apiUrl = "http://localhost:8080/persons";

function* getPersonById(action) {
  try {
    const person = yield call(axios, apiUrl + "/" + action.payload.id);
    yield put({ type: GET_PERSON_BY_ID_SUCCESS, payload: person });
  } catch (e) {
    yield put({ type: GET_PERSON_BY_ID_FAILURE, message: e.message });
  }
}

function* getAllPersons(action) {
  try {
    const persons = yield call(axios, apiUrl);
    yield put({ type: GET_ALL_PERSONS_SUCCESS, payload: persons });
  } catch (e) {
    yield put({ type: GET_ALL_PERSONS_FAILURE, message: e.message });
  }
}

function* addPerson(action) {
  try {
    const person = yield call(axios, {
      method: "POST",
      url: apiUrl,
      data: action.payload
    });
    yield put({ type: ADD_PERSON_SUCCESS, payload: person });
  } catch (e) {
    yield put({ type: ADD_PERSON_FAILURE, message: e.message });
  }
}

function* watchGetPerson() {
  yield takeEvery(GET_PERSON_BY_ID, getPersonById);
}

function* watchGetAllPersons() {
  yield takeEvery(GET_ALL_PERSONS, getAllPersons);
}

function* watchAddPerson() {
  yield takeEvery(ADD_PERSON, addPerson);
}

export default function* rootSaga() {
  yield all([watchGetPerson(), watchGetAllPersons(), watchAddPerson()]);
}

Redux Saga works asynchronously. It listens for an action and propagates a new event after receiving a response from the backend. There are three actions handled by the module shown above: GET /persons, GET /persons/{id}, and POST /persons. Depending on the result, they emit *_SUCCESS or *_FAILURE events. Here's a dictionary, in the actions/types.js file, with all the events handled or emitted by our app:

export const GET_ALL_PERSONS = "GET_ALL_PERSONS";
export const GET_ALL_PERSONS_SUCCESS = "GET_ALL_PERSONS_SUCCESS";
export const GET_ALL_PERSONS_FAILURE = "GET_ALL_PERSONS_FAILURE";

export const GET_PERSON_BY_ID = "GET_PERSON_BY_ID";
export const GET_PERSON_BY_ID_SUCCESS = "GET_PERSON_BY_ID_SUCCESS";
export const GET_PERSON_BY_ID_FAILURE = "GET_PERSON_BY_ID_FAILURE";

export const ADD_PERSON = "ADD_PERSON";
export const ADD_PERSON_SUCCESS = "ADD_PERSON_SUCCESS";
export const ADD_PERSON_FAILURE = "ADD_PERSON_FAILURE";

Also, let's take a look at the actions/index.js file. It contains three functions for dispatching actions. Those functions are then used by the React components. Each action has a type field and a payload. The payload may, for example, contain a body that is sent as JSON to the backend (1).

import { 
    ADD_PERSON, 
    GET_PERSON_BY_ID, 
    GET_ALL_PERSONS } from "./types";

export function getPersonById(payload) {
  return { type: GET_PERSON_BY_ID, payload };
}

export function getAllPersons(payload) {
  return { type: GET_ALL_PERSONS, payload };
}

export function addPerson(payload) { // (1)
  return { type: ADD_PERSON, payload };
}

Configure React Redux and Redux Saga

To make everything work properly, we need to prepare some configuration. In the previous step, we created the sagas responsible for handling asynchronous actions dispatched by the React components. Now, we need to configure the Redux Saga library to handle those actions. In the same step, we also create a Redux store to hold the current global state of the React app. The configuration is available in the store/index.js file.

import { createStore, applyMiddleware, compose } from "redux";
import createSagaMiddleware from "redux-saga";
import rootReducer from "../reducers/index";
import rootSaga from "../sagas/index";

const sagaMiddleware = createSagaMiddleware(); // (1)
const composeEnhancers = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose; // (2)

const store = createStore(
  rootReducer,
  composeEnhancers(applyMiddleware(sagaMiddleware))
); // (3)

sagaMiddleware.run(rootSaga); // (4)

export default store;

The only way to change the global state of the app is to dispatch an action. In order to handle actions, we create the sagaMiddleware component (1). Then we need to register the sagas (4) and connect the store to the redux-saga middleware (3). We also enable Redux Dev Tools for the Saga middleware (2), which is helpful during development. The store requires reducers, a very important part of the Redux concept. In Redux nomenclature, a "reducer" is a function that takes the current state value and an action object describing "what happened", and returns a new state value.

Here’s our reducer implementation provided in the reducers/index.js file:

import { 
  ADD_PERSON,
  ADD_PERSON_SUCCESS, 
  GET_ALL_PERSONS_SUCCESS, 
  GET_PERSON_BY_ID_SUCCESS } from "../actions/types";

const initialState = {
  persons: [],
  person: {},
  newPersonId: null,
}; // (1)

function rootReducer(state = initialState, action) {
  switch(action.type) {
    case GET_ALL_PERSONS_SUCCESS: // (2)
      return {
        ...state,
        persons: action.payload.data
      };
    case ADD_PERSON:
      return {
        ...state,
        person: action.payload.data,
        newPersonId: null
      };
    case ADD_PERSON_SUCCESS: // (3)
      return {
        ...state,
        person: {
          name: "",
          gender: "",
          age: 0
        },
        newPersonId: action.payload.data.id
      };
    case GET_PERSON_BY_ID_SUCCESS: // (4)
      return {
        ...state,
        person: action.payload.data
      };
    default:
      return state;
  }
}

export default rootReducer;

Let's analyze what happens here. We define the initial state of the store for our micro frontend React app (1). It contains the list of all persons retrieved from the backend (persons), the currently displayed or newly added person (person), and the id of a new person (newPersonId). For the GET_ALL_PERSONS result, the reducer puts the elements received from the backend API into the persons array (2). For the ADD_PERSON result, it resets the state of the person object and sets the id of the new person in the newPersonId field (3). Finally, we set the current person details in the person object for the GET_PERSON_BY_ID result (4).

Create React Components

We have already created all the modules responsible for handling actions, the state store, and communication with the backend. It's time to create our first React component. We will start with the Home component, responsible for fetching and displaying the list of all persons. Here's the full code of the component, available in components/Home.js. Let's analyze step by step what happens here; the numbered annotations below follow the logical order.

import { connect } from "react-redux";
import React, { useEffect } from "react";
import { useNavigate } from "react-router-dom"; // (9)

import { Button, Stack } from "@mui/material";
import { DataGrid } from '@mui/x-data-grid';

import { getAllPersons } from "../actions/index"; // (4)

// (7)
const columns = [
  { field: 'id', headerName: 'ID', width: 70 },
  { field: 'name', headerName: 'Name', width: 130, editable: true },
  { field: 'age', headerName: 'Age', type: 'number', width: 90, editable: true },
  { field: 'gender', headerName: 'Gender', width: 100 },
];

function Home({ getAllPersons, persons }) { // (5)

  let navigate = useNavigate(); // (10)

  // (8)
  useEffect(() => {
    getAllPersons()
  }, []);

  function handleClick() { // (11)
    navigate("/add");
  }

  function handleSelection(p, e) { // (13)
    navigate("/details/" + p.id);
  }

  return(
    <Stack spacing={2}>
      <Stack direction="row">
        <Button variant="outlined" onClick={handleClick}>Add person</Button>
      </Stack>
      <div style={{ height: 400, width: '100%' }}> // (6)
        <DataGrid
          rows={persons}
          columns={columns}
          pageSize={5}
          onRowDoubleClick={handleSelection} // (12)
        />
      </div>
    </Stack>
  );
}

function mapStateToProps(state) { // (2)
  return {
    persons: state.persons,
  };
}

function mapDispatchToProps(dispatch) { // (3)
  return {
    getAllPersons: () => dispatch(getAllPersons({})),
  };
}

export default connect(mapStateToProps, mapDispatchToProps)(Home); // (1)

(1) – we need to connect our component to the Redux store. The react-redux connect method takes two input arguments mapStateToProps and mapDispatchToProps

(2) – the mapStateToProps is used for selecting the part of the data from the store that the connected component needs. It’s frequently referred to as just mapState for short. The Home component requires the persons array from the global state store

(3) – as the second argument passed into connect, mapDispatchToProps is used for dispatching actions to the store – dispatch is a function of the Redux store. You can call store.dispatch to dispatch an action. This is the only way to trigger a state change. Since we just need to dispatch the GET_ALL_PERSONS action in the Home component, we define a single action there

(4) – we need to import the action definition

(5) – the actions and state fields mapped by the connect method need to be declared as the component props

(6) – we use the Material DataGrid component to display the table with persons. It takes the persons prop as the input argument. We also need to define a list of table columns (7).

(7) – the definition of columns contained by the DataGrid component. It displays the id, name, age and gender fields of each person on the list.

(8) – with the React useEffect method we dispatch the GET_ALL_PERSONS action on load. In fact, we are just calling the getAllPersons() function defined within the actions, which creates and fires events asynchronously

(9) – from the Home component we can navigate to the other app pages represented by two other components AddPerson and GetPerson. In order to do that we first need to import the useNavigate method provided by React Router.

(10) – let’s call the useNavigate method declared in the previous step to get a handle to the navigate function

(11) – there is a Material Button on the page that redirects us to the /add path handled by the AddPerson component

(12) – firstly let’s add the onRowDoubleClick listener to our DataGrid. It fires after you double-click on the selected row from the table

(13) – then we get the id field of the row and navigate to the /details/:id context.
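Note (8) above refers to the asynchronous action creators defined in actions/index.js earlier in the article. Purely as a thunk-style illustration of what such a creator does — the /persons URL and the injectable fetch function below are my own placeholders, not the article’s actual implementation:

```javascript
// Hypothetical thunk-style action creator (the article's actions/index.js
// may use a different middleware); the fetch function is injected so the
// creator can be exercised without a running backend.
function getAllPersons(payload, fetchFn = (url) => fetch(url).then((r) => r.json())) {
  return async (dispatch) => {
    // call the backend API, then dispatch the result to the Redux store
    const persons = await fetchFn("/persons");
    dispatch({ type: "GET_ALL_PERSONS", payload: persons });
  };
}
```

Dispatching getAllPersons({}) therefore triggers the HTTP call first, and the reducer only runs once the response arrives.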

Configure React App and Routing

Configuring the application entry point could have been our first step. However, now that all the components and definitions are in place, we can analyze it as the final part of our configuration. We need to import the Redux store definition (1) and our React components (2). We also need to configure routing for our three components (3) using the React Router library. The last path is especially interesting: it uses a dynamic parameter based on the person id field. Finally, let’s set the store and router providers (4).

import React from "react";
import { createRoot } from "react-dom/client";
import { Provider } from "react-redux";
import {
  createBrowserRouter,
  RouterProvider
} from "react-router-dom";
import store from "./store/index"; // (1)
import Home from "./components/Home"; // (2)
import AddPerson from "./components/AddPerson";
import GetPerson from "./components/GetPerson";

const root = document.getElementById("root");
const rootReact = createRoot(root);

const router = createBrowserRouter([
  {
    path: "/",
    element: <Home />,
  },
  {
    path: "/add",
    element: <AddPerson />,
  },
  {
    path: "/details/:id",
    element: <GetPerson />,
  },
]); // (3)

rootReact.render(
  <Provider store={store}>
    <RouterProvider router={router} />
  </Provider>
); // (4)

Let’s install the app dependencies by executing the following command:

$ npm install

Now, we can run our micro frontend React app with the following command:

$ npm start

Here’s our app home page:

micro-frontend-react-main-page

Add and Get Data in React Micro Frontend

There are two other components responsible for adding (AddPerson) and getting (GetPerson) data. Let’s start with the AddPerson component. The logic of that component is pretty similar to the previously described Home component. We need to import the addPerson method from actions (1). We also use the person and newPersonId fields from the state store (2). The ADD_PERSON action is dispatched when the “Save” button is clicked (3). After adding a new person, we display a message with the id generated by the backend app (4).

import { connect } from "react-redux";
import { Form } from "react-router-dom";
import { TextField, Button, MenuItem, Alert, Grid } from "@mui/material"

import { addPerson } from "../actions/index"; // (1)

function AddPerson({ addPerson, person, newPersonId }) { // (2)

  function handleChangeName(e) {
    person.name = e.target.value;
  }

  function handleChangeAge(e) {
    person.age = e.target.value;
  }

  function handleChangeGender(e) {
    person.gender = e.target.value;
  }

  function handleClick(e) {
    addPerson(person); // (3)
  }

  return(
    <Form method="post">
      <Grid container spacing={2} direction="column">
        <Grid item xs={6}> {/* (4) */}
          {newPersonId != null ?
          <Alert variant="filled" severity="success">New person added: {newPersonId}</Alert> : ""
          }
        </Grid>
        <Grid item xs={3}>
          <TextField id="name" label="Name" variant="outlined" onChange={handleChangeName} value={person?.name} />
        </Grid>
        <Grid item xs={3}>
          <TextField id="gender" select label="Gender" onChange={handleChangeGender} value={person?.gender} >
            <MenuItem value={'MALE'}>Male</MenuItem>
            <MenuItem value={'FEMALE'}>Female</MenuItem>
          </TextField>
        </Grid>
        <Grid item xs={3}>
          <TextField id="age" label="Age" inputProps={{ inputMode: 'numeric' }} onChange={handleChangeAge} value={person?.age} />
        </Grid>
        <Grid item xs={3}>
          <Button variant="outlined" onClick={handleClick}>Save</Button>
        </Grid>
      </Grid>
    </Form>
  );
}

function mapStateToProps(state) {
    return {
      person: state.person,
      newPersonId: state.newPersonId,
    };
  }
  
function mapDispatchToProps(dispatch) {
  return {
    addPerson: (payload) => dispatch(addPerson(payload)),
  };
}

export default connect(mapStateToProps, mapDispatchToProps)(AddPerson);

Here’s our page for adding a new person:

Just click the “SAVE” button. After a successful operation you will see the following message on the same page:

We can go back to the list. As you can see, our new person is there:

Now we double-click on the selected row. I would probably need to work on the look of that component 🙂 But it works fine – it displays the details of the person with the id equal to 4.

Let’s take a look at the code of the component responsible for displaying those details. We need to import the getPersonById method from actions (1). The component dispatches the GET_PERSON_BY_ID action on the page load (2). It takes the id parameter from the route context path /details/:id with the React Router useParams method (3). Then it just displays all the current person fields (4).

import { connect } from "react-redux";
import React, { useEffect } from "react";
import { useParams } from "react-router-dom";
import { Paper, Avatar, Grid } from "@mui/material"

import { getPersonById } from "../actions/index"; // (1)

function GetPerson({ getPersonById, person }) {

  let { id } = useParams(); // (3)

  // (2)
  useEffect(() => {
    getPersonById({id: id})
  }, []);

  // (4)
  return(
    <Grid container spacing={2} direction="column">
      <Grid item direction="row">
        <Grid item><Avatar>U</Avatar></Grid> 
        <Grid item>USER DETAILS</Grid>
      </Grid>
      <Grid item xs={3}>
        <Paper>Name: <b>{person?.name}</b></Paper>
      </Grid>
      <Grid item xs={3}>
        <Paper>Gender: <b>{person?.gender}</b></Paper>
      </Grid>
      <Grid item xs={3}>
        <Paper>Age: <b>{person?.age}</b></Paper>
      </Grid>
    </Grid>
    
  )
}

function mapStateToProps(state) {
    return {
      person: state.person,
    };
  }
  
function mapDispatchToProps(dispatch) {
  return {
    getPersonById: (payload) => dispatch(getPersonById(payload)),
  };
}

export default connect(mapStateToProps, mapDispatchToProps)(GetPerson);

Final Thoughts

I have read some tutorials about React, but I didn’t find any that provides detailed, step-by-step instructions on how to build a micro frontend that communicates with the backend over a REST API. Some of them were too complicated, some were too basic or outdated. My goal is to give you an up-to-date recipe for building a micro frontend using the most interesting and useful libraries that help you organize your project well.

The post Micro Frontend with React appeared first on Piotr's TechBlog.

Microprofile Java Microservices on WildFly https://piotrminkowski.com/2020/12/14/microprofile-java-microservices-on-wildfly/ https://piotrminkowski.com/2020/12/14/microprofile-java-microservices-on-wildfly/#respond Mon, 14 Dec 2020 14:26:31 +0000 https://piotrminkowski.com/?p=9200 In this guide, you will learn how to implement the most popular Java microservices patterns with the MicroProfile project. We’ll look at how to create a RESTful application using JAX-RS and CDI. Then, we will run our microservices on WildFly as bootable JARs. Finally, we will deploy them on OpenShift in order to use its […]

In this guide, you will learn how to implement the most popular Java microservices patterns with the MicroProfile project. We’ll look at how to create a RESTful application using JAX-RS and CDI. Then, we will run our microservices on WildFly as bootable JARs. Finally, we will deploy them on OpenShift in order to use its service discovery and config maps.

The MicroProfile project breathes new life into Java EE. Since the rise of microservices, Java EE has lost its dominant position in the JVM enterprise area. As a result, application servers and EJBs have been replaced by lightweight frameworks like Spring Boot. MicroProfile is an answer to that. It defines Java EE standards for building microservices. Therefore, it can be treated as a base for building more advanced frameworks like Quarkus or KumuluzEE.

If you are interested in frameworks built on top of MicroProfile, Quarkus is a good example: Quick Guide to Microservices with Quarkus on OpenShift. You can also implement custom service discovery for your MicroProfile microservices, for example with Consul: Quarkus Microservices with Consul Discovery.

Source code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-microprofile-microservices. Then go to the employee-service and department-service directories and just follow my instructions 🙂

1. Running on WildFly

A few weeks ago, WildFly introduced the “Fat JAR” packaging feature. This feature is fully supported since WildFly 21. We can apply it during a Maven build by including wildfly-jar-maven-plugin in the pom.xml file. Importantly, we don’t have to redesign an application to run it inside a bootable JAR.

In order to use the “Fat JAR” packaging feature, we need to add the package execution goal. Then we should declare two layers inside the configuration section. The first of them, jaxrs-server, allows us to build a typical REST application. The second, microprofile-platform, enables MicroProfile on the WildFly server.

<profile>
   <id>bootable-jar</id>
   <activation>
      <activeByDefault>true</activeByDefault>
   </activation>
   <build>
      <finalName>${project.artifactId}</finalName>
      <plugins>
         <plugin>
            <groupId>org.wildfly.plugins</groupId>
            <artifactId>wildfly-jar-maven-plugin</artifactId>
            <version>2.0.2.Final</version>
            <executions>
               <execution>
                  <goals>
                     <goal>package</goal>
                  </goals>
               </execution>
            </executions>
            <configuration>
               <feature-pack-location>
                  wildfly@maven(org.jboss.universe:community-universe)#${version.wildfly}
               </feature-pack-location>
               <layers>
                  <layer>jaxrs-server</layer>
                  <layer>microprofile-platform</layer>
               </layers>
            </configuration>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, we just need to execute the following command to build and run our “Fat JAR” application on WildFly.

$ mvn package wildfly-jar:run

If we run multiple applications on the same machine, we have to override the default HTTP and management ports. To do that, we need to add the jvmArguments section inside configuration. We may insert any number of JVM arguments there. In this case, the required arguments are jboss.http.port and jboss.management.http.port.

<configuration>
   ...
   <jvmArguments>
      <jvmArgument>-Djboss.http.port=8090</jvmArgument>
      <jvmArgument>-Djboss.management.http.port=9090</jvmArgument>
   </jvmArguments>
</configuration>

2. Creating JAX-RS applications

In the first step, we will create simple REST applications with JAX-RS. WildFly provides all the required libraries, but we need to include both these artifacts for the compilation phase.

<dependency>
   <groupId>org.jboss.spec.javax.ws.rs</groupId>
   <artifactId>jboss-jaxrs-api_2.1_spec</artifactId>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>jakarta.enterprise</groupId>
   <artifactId>jakarta.enterprise.cdi-api</artifactId>
   <scope>provided</scope>
</dependency>

Then, we should set the dependencyManagement section. We will use the BOMs provided by WildFly for both MicroProfile and Jakarta EE.

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.wildfly.bom</groupId>
         <artifactId>wildfly-jakartaee8-with-tools</artifactId>
         <version>${version.wildfly}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
      <dependency>
         <groupId>org.wildfly.bom</groupId>
         <artifactId>wildfly-microprofile</artifactId>
         <version>${version.wildfly}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Here’s the JAX-RS controller inside employee-service. It uses an in-memory repository bean. It also injects a random delay into all exposed HTTP endpoints with the @Delay annotation. To clarify, I’m setting it up for future use, in order to present the metrics and fault tolerance features.

@Path("/employees")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Delay
public class EmployeeController {

   @Inject
   EmployeeRepository repository;

   @POST
   public Employee add(Employee employee) {
      return repository.add(employee);
   }

   @GET
   @Path("/{id}")
   public Employee findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }

   @GET
   public List<Employee> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/department/{departmentId}")
   public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      return repository.findByDepartment(departmentId);
   }

   @GET
   @Path("/organization/{organizationId}")
   public List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
      return repository.findByOrganization(organizationId);
   }

}

Here’s the definition of the delay interceptor class. It is annotated with the base @Interceptor annotation and the custom @Delay annotation. It injects a random delay between 0 and 1000 milliseconds into each method invocation.

@Interceptor
@Delay
public class AddDelayInterceptor {

   Random r = new Random();

   @AroundInvoke
   public Object call(InvocationContext invocationContext) throws Exception {
      Thread.sleep(r.nextInt(1000));
      System.out.println("Intercept");
      return invocationContext.proceed();
   }

}

Finally, let’s just take a look at the custom @Delay annotation.

@InterceptorBinding
@Target({METHOD, TYPE})
@Retention(RUNTIME)
public @interface Delay {
}

3. Enable metrics for MicroProfile microservices

Metrics is one of the core MicroProfile modules. Data is exposed via REST over HTTP under the /metrics base path in two different data formats for GET requests: JSON and OpenMetrics. The OpenMetrics text format is supported by Prometheus. In order to enable MicroProfile metrics, we need to include the following dependency in the Maven pom.xml.

<dependency>
   <groupId>org.eclipse.microprofile.metrics</groupId>
   <artifactId>microprofile-metrics-api</artifactId>
   <scope>provided</scope>
</dependency>

To enable the basic metrics we just need to annotate the controller class with @Timed.

@Path("/employees")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Delay
@Timed
public class EmployeeController {
   ...
}

The /metrics endpoint is available on the management port. Firstly, let’s send some test requests, for example to the GET /employees endpoint. The employee-service application is available at http://localhost:8080/. Then let’s call the endpoint http://localhost:9990/metrics. Here’s a full list of metrics generated for the findAll method. Similar metrics would be generated for all other HTTP endpoints.

4. Generate OpenAPI specification

The REST API specification is another essential thing for all microservices. So, it is no surprise that the OpenAPI module is a part of the MicroProfile core. The API specification is automatically generated after including the microprofile-openapi-api module, which is a part of the microprofile-platform layer defined for wildfly-jar-maven-plugin.

After starting the application, we may access the OpenAPI documentation by calling the http://localhost:8080/openapi endpoint. Then, we can copy the result to the Swagger editor. The graphical representation of the employee-service API is visible below.

microprofile-java-microservices-openapi

5. Microservices inter-communication with MicroProfile REST client

The department-service calls endpoint GET /employees/department/{departmentId} from the employee-service. Then it returns a department with a list of all assigned employees.

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
public class Department {
   private Long id;
   private String name;
   private Long organizationId;
   private List<Employee> employees = new ArrayList<>();
}

Of course, we need to include the REST client module in the Maven dependencies.

<dependency>
   <groupId>org.eclipse.microprofile.rest.client</groupId>
   <artifactId>microprofile-rest-client-api</artifactId>
   <scope>provided</scope>
</dependency>

The MicroProfile REST Client module allows us to define a client declaratively. We should annotate the client interface with @RegisterRestClient. The rest of the implementation is rather self-explanatory.

@Path("/employees")
@RegisterRestClient(baseUri = "http://employee-service:8080")
public interface EmployeeClient {

   @GET
   @Path("/department/{departmentId}")
   @Produces(MediaType.APPLICATION_JSON)
   List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);
}

Finally, we just need to inject the EmployeeClient bean into the controller class.

@Path("/departments")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Timed
public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;

   @POST
   public Department add(Department department) {
      return repository.add(department);
   }

   @GET
   @Path("/{id}")
   public Department findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }

   @GET
   public List<Department> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/organization/{organizationId}")
   public List<Department> findByOrganization(@PathParam("organizationId") Long organizationId) {
      return repository.findByOrganization(organizationId);
   }

   @GET
   @Path("/organization/{organizationId}/with-employees")
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

The MicroProfile project does not implement the service discovery pattern. There are some frameworks built on top of MicroProfile that provide such an implementation, for example KumuluzEE. If you do not deploy the applications on OpenShift, you may add the following entry to your /etc/hosts file to test them locally.

127.0.0.1 employee-service

Finally, let’s call endpoint GET /departments/organization/{organizationId}/with-employees. The result is visible in the picture below.

6. Java microservices fault tolerance with MicroProfile

To be honest, fault tolerance handling is my favorite feature of MicroProfile. We may configure fault tolerance policies on the controller methods using annotations. We can choose between @Timeout, @Retry, @Fallback and @CircuitBreaker. It is also possible to mix those annotations on a single method. As you probably remember, we injected a random delay between 0 and 1000 milliseconds into all the endpoints exposed by employee-service. Now, let’s consider the method inside department-service that calls the endpoint GET /employees/department/{departmentId} from employee-service. Firstly, we will annotate that method with @Timeout as shown below. The current timeout is 500 ms.

public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;

   ...

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Timeout(500)
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

Before calling the method, let’s create an exception mapper. If TimeoutException occurs, the department-service endpoint will return status HTTP 504 - Gateway Timeout.

@Provider
public class TimeoutExceptionMapper implements 
      ExceptionMapper<TimeoutException> {

   public Response toResponse(TimeoutException e) {
      return Response.status(Response.Status.GATEWAY_TIMEOUT).build();
   }

}

Then, we may proceed to call our test endpoint. Since the injected delay is uniform between 0 and 1000 milliseconds and the timeout is 500 ms, each delayed downstream call exceeds the timeout about half the time, so roughly 50% of requests will finish with the result visible below.

On the other hand, we may enable a retry mechanism for such an endpoint. After that, the chance of receiving status HTTP 200 OK becomes much higher than before: if a single attempt times out with a probability of about 0.5, then with the default of three retries (four attempts in total) the overall failure probability drops to roughly 0.5^4, i.e. about 6%.

@GET
@Path("/organization/{organizationId}/with-employees")
@Timeout(500)
@Retry(retryOn = TimeoutException.class)
public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
   List<Department> departments = repository.findByOrganization(organizationId);
   departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
   return departments;
}

7. Deploy MicroProfile microservices on OpenShift

We can easily deploy MicroProfile Java microservices on OpenShift using the JKube plugin, the successor of the deprecated Fabric8 Maven Plugin. Eclipse JKube is a collection of plugins and libraries for building container images using the Docker, JIB or S2I build strategies. It also generates and deploys Kubernetes and OpenShift manifests at compile time. So, let’s add openshift-maven-plugin to the pom.xml file.

The configuration visible below sets 2 replicas for the deployment and enforces the use of health checks. In addition, openshift-maven-plugin generates the rest of the deployment config based on the Maven pom.xml structure. For example, it generates employee-service-deploymentconfig.yml, employee-service-route.yml, and employee-service-service.yml for the employee-service application.

<plugin>
   <groupId>org.eclipse.jkube</groupId>
   <artifactId>openshift-maven-plugin</artifactId>
   <version>1.0.2</version>
   <executions>
      <execution>
         <id>jkube</id>
         <goals>
            <goal>resource</goal>
            <goal>build</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <resources>
         <replicas>2</replicas>
      </resources>
      <enricher>
         <config>
            <jkube-healthcheck-wildfly-jar>
               <enforceProbes>true</enforceProbes>
            </jkube-healthcheck-wildfly-jar>
         </config>
      </enricher>
   </configuration>
</plugin>

In order to deploy the application on OpenShift we need to run the following command.

$ mvn oc:deploy -P bootable-jar-openshift

Since the enforceProbes property has been enabled, openshift-maven-plugin adds liveness and readiness probes to the DeploymentConfig. Therefore, we need to implement both endpoints in our MicroProfile applications. MicroProfile provides a smart mechanism for creating liveness and readiness health checks: we just need to annotate a class with @Liveness or @Readiness and implement the HealthCheck interface. Here’s the example implementation of the liveness endpoint.

@Liveness
@ApplicationScoped
public class LivenessEndpoint implements HealthCheck {
   @Override
   public HealthCheckResponse call() {
      return HealthCheckResponse.up("Server up");
   }
}

On the other hand, the implementation of the readiness probe also verifies the status of the repository bean. Of course, it is just a simple example.

@Readiness
@ApplicationScoped
public class ReadinessEndpoint implements HealthCheck {
   @Inject
   DepartmentRepository repository;

   @Override
   public HealthCheckResponse call() {
      HealthCheckResponseBuilder responseBuilder = HealthCheckResponse
         .named("Repository up");
      List<Department> departments = repository.findAll();
      if (departments != null && !departments.isEmpty())
         responseBuilder.up();
      else
         responseBuilder.down();
      return responseBuilder.build();
   }
}

After deploying both the employee-service and department-service applications, we may verify the list of DeploymentConfigs.

We can also navigate to the OpenShift console. Let’s take a look at a list of running pods. There are two instances of the employee-service and a single instance of department-service.

microprofile-java-microservices-openshift-pods

8. MicroProfile OpenTracing with Jaeger

Tracing is another important pattern in microservices architecture. The OpenTracing module is a part of MicroProfile specification. Besides the microprofile-opentracing-api library we also need to include the opentracing-api module.

<dependency>
   <groupId>org.eclipse.microprofile.opentracing</groupId>
   <artifactId>microprofile-opentracing-api</artifactId>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>io.opentracing</groupId>
   <artifactId>opentracing-api</artifactId>
   <version>0.31.0</version>
</dependency>

By default, MicroProfile OpenTracing integrates the application with Jaeger. If you are testing the sample microservices on OpenShift, you may install Jaeger using an operator. Otherwise, we may just start it in a Docker container. The Jaeger UI is available at http://localhost:16686.

$ docker run -d --name jaeger \
-p 6831:6831/udp \
-p 16686:16686 \
jaegertracing/all-in-one:1.16.0

We don’t have to do anything more than add the required dependencies to enable tracing. However, it is worth overriding the names of the recorded operations. We may do that by annotating a particular method with @Traced and setting its operationName parameter. The implementation of the findByOrganizationWithEmployees method in the department-service is visible below.

public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;

   ...

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Timeout(500)
   @Retry(retryOn = TimeoutException.class)
   @Traced(operationName = "findByOrganizationWithEmployees")
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }
   
}

We can also take a look at the fragment of implementation of EmployeeController.

public class EmployeeController {

   @Inject
   EmployeeRepository repository;

   ...
   
   @GET
   @Traced(operationName = "findAll")
   public List<Employee> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/department/{departmentId}")
   @Traced(operationName = "findByDepartment")
   public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      return repository.findByDepartment(departmentId);
   }
   
}

Before running the applications we should at least set the JAEGER_SERVICE_NAME environment variable. It configures the name of the application visible in Jaeger. For example, before starting the employee-service application we should set JAEGER_SERVICE_NAME=employee-service. Finally, let’s send some test requests to the department-service endpoint GET /departments/organization/{organizationId}/with-employees.

$ curl http://localhost:8090/departments/organization/1/with-employees
$ curl http://localhost:8090/departments/organization/2/with-employees

After sending some test requests we may go to the Jaeger UI. The picture visible below shows the history of requests processed by the method findByOrganizationWithEmployees inside department-service.

As you probably remember, this method calls a method from the employee-service, and configures timeout and retries in case of failure. The picture below shows the details about a single request processed by the method findByOrganizationWithEmployees. To clarify, it has been retried once.

microprofile-java-microservices-jeager-details

Conclusion

This article guides you through the most important steps of building Java microservices with MicroProfile. You may learn how to implement tracing, health checks, OpenAPI, and inter-service communication with a REST client. After reading, you should be able to run your MicroProfile Java microservices locally on WildFly and, moreover, deploy them on OpenShift using a single Maven command. Enjoy 🙂

The post Microprofile Java Microservices on WildFly appeared first on Piotr's TechBlog.

Quarkus OAuth2 and security with Keycloak https://piotrminkowski.com/2020/09/16/quarkus-oauth2-and-security-with-keycloak/ https://piotrminkowski.com/2020/09/16/quarkus-oauth2-and-security-with-keycloak/#respond Wed, 16 Sep 2020 07:27:40 +0000 https://piotrminkowski.com/?p=8811 Quarkus OAuth2 support is based on the WildFly Elytron Security project. In this article, you will learn how to integrate your Quarkus application with the OAuth2 authorization server like Keycloak. Before starting with Quarkus security it is worth to find out how to build microservices in Quick guide to microservices with Quarkus on OpenShift, and […]

The post Quarkus OAuth2 and security with Keycloak appeared first on Piotr's TechBlog.

]]>
Quarkus OAuth2 support is based on the WildFly Elytron Security project. In this article, you will learn how to integrate your Quarkus application with an OAuth2 authorization server like Keycloak.

Before starting with Quarkus security, it is worth finding out how to build microservices in Quick guide to microservices with Quarkus on OpenShift, and how to easily deploy your application on Kubernetes in Guide to Quarkus on Kubernetes.

Source code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-quarkus-applications. Then go to the employee-secure-service directory and just follow my instructions 🙂 It is a good idea to read the article Guide to Quarkus with Kotlin before you move on.

Using Quarkus OAuth2 for securing endpoints

In the first step, we need to include the Quarkus modules for REST and OAuth2. Of course, our application uses some other modules, but those two are required.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-elytron-security-oauth2</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>

Let’s discuss a typical implementation of a REST controller with Quarkus. Quarkus OAuth2 provides a set of annotations for setting permissions. We can allow any user to call an endpoint with the @PermitAll annotation. The @DenyAll annotation indicates that the given endpoint cannot be accessed by anyone. We can also define a list of roles allowed to call a given endpoint with @RolesAllowed.

The controller contains different types of CRUD methods. I defined three roles: viewer, manager, and admin. The viewer role allows calling only GET methods. The manager role allows calling GET and POST methods. Finally, the admin role allows calling all the methods. You can see the final implementation of the controller class below.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
class EmployeeResource(val repository: EmployeeRepository) {

    @POST
    @Transactional
    @RolesAllowed(value = ["manager", "admin"])
    fun add(employee: Employee): Response {
        repository.persist(employee)
        return Response.ok(employee).status(201).build()
    }

    @DELETE
    @Path("/{id}")
    @Transactional
    @RolesAllowed("admin")
    fun delete(@PathParam id: Long) {
        repository.deleteById(id)
    }

    @GET
    @PermitAll
    fun findAll(): List<Employee> = repository.listAll()

    @GET
    @Path("/{id}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findById(@PathParam id: Long): Employee?
            = repository.findById(id)

    @GET
    @Path("/first-name/{firstName}/last-name/{lastName}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findByFirstNameAndLastName(@PathParam firstName: String,
                          @PathParam lastName: String): List<Employee>
            = repository.findByFirstNameAndLastName(firstName, lastName)

    @GET
    @Path("/salary/{salary}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findBySalary(@PathParam salary: Int): List<Employee>
            = repository.findBySalary(salary)

    @GET
    @Path("/salary-greater-than/{salary}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findBySalaryGreaterThan(@PathParam salary: Int): List<Employee>
            = repository.findBySalaryGreaterThan(salary)

}

Running Keycloak

We are running Keycloak in a Docker container. By default, Keycloak exposes its API and web console on port 8080. However, that port number must be different from the Quarkus application port, so we are overriding it with 8888. We also need to set a username and password for the admin console.

$ docker run -d --name keycloak -p 8888:8080 -e KEYCLOAK_USER=quarkus -e KEYCLOAK_PASSWORD=quarkus123 jboss/keycloak

Create client on Keycloak

First, we need to create a client with a given name. Let’s say this name is quarkus. The client credentials are used during the authorization process. It is important to choose confidential in the “Access Type” section and enable the “Direct Access Grants” option.

quarkus-oauth2-keycloak-client

Then we may switch to the “Credentials” tab, and copy the client secret.

Configure Quarkus OAuth2 connection to Keycloak

In the next steps, we will use two HTTP endpoints exposed by Keycloak. The first of them, token_endpoint, allows you to generate new access tokens. The second endpoint, introspection_endpoint, is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token.

The Quarkus OAuth2 module expects three configuration properties: the client’s name, the client’s secret, and the address of the introspection endpoint. The last property, quarkus.oauth2.role-claim, is responsible for setting the name of the claim used to load the roles. The list of roles is part of the response returned by the introspection endpoint. Let’s take a look at the final list of configuration properties for integration with my local instance of Keycloak.

quarkus.oauth2.client-id=quarkus
quarkus.oauth2.client-secret=7dd4d516-e06d-4d81-b5e7-3a15debacebf
quarkus.oauth2.introspection-url=http://localhost:8888/auth/realms/master/protocol/openid-connect/token/introspect
quarkus.oauth2.role-claim=roles

Create users and roles on Keycloak

Our application uses three roles: viewer, manager, and admin. Therefore, we will create three test users on Keycloak, each with a single role assigned. The manager role is a composite role that contains the viewer role. The same goes for admin, which contains both manager and viewer. Here’s the full list of test users.

quarkus-oauth2-keycloak-users

Of course, we also need to define roles. In the picture below, I highlighted the roles used by our application.

Before proceeding to the tests, we need to do one more thing. We have to edit the client scope responsible for displaying the list of roles. To do that, go to the “Client Scopes” section and find the roles scope. After editing it, switch to the “Mappers” tab. Finally, find and edit the “realm roles” entry. The value of the “Token Claim Name” field should be the same as the value set in the quarkus.oauth2.role-claim property. I highlighted it in the picture below. In the next section, I’ll show you how Quarkus OAuth2 retrieves roles from the introspection endpoint.

quarkus-oauth2-keycloak-clientclaim

Analyzing Quarkus OAuth2 authorization process

In the first step, we call the Keycloak token endpoint to obtain a valid access token. We may choose between five supported grant types. Because I want to authorize with a user password, I set the grant_type parameter to password. We also need to set client_id, client_secret, and of course the user credentials. The test user in the request visible below is test_viewer. It has the viewer role assigned.

$ curl -X POST http://localhost:8888/auth/realms/master/protocol/openid-connect/token \
-d "grant_type=password" \
-d "client_id=quarkus" \
-d "client_secret=7dd4d516-e06d-4d81-b5e7-3a15debacebf" \
-d "username=test_viewer" \
-d "password=123456"

{
    "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX...",
    "expires_in": 1800,
    "refresh_expires_in": 1800,
    "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2...",
    "token_type": "bearer",
    "not-before-policy": 1600100798,
    "session_state": "cf9862b0-f97a-43a7-abbb-a267fff5e71e",
    "scope": "email profile"
}

Once we have successfully generated an access token, we may use it to authorize requests sent to the Quarkus application. But before that, we can verify our token with the Keycloak introspection endpoint. It is an additional step. However, it shows you what type of information is returned by the introspection endpoint, which is then used by the Quarkus OAuth2 module. You can see the request and response for the token value generated in the previous step. Pay close attention to how it returns the list of the user’s roles.

$ curl -X POST http://localhost:8888/auth/realms/master/protocol/openid-connect/token/introspect \
-d "token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX..." \
-H "Authorization: Basic cXVhcmt1czo3ZGQ0ZDUxNi1lMDZkLTRkODEtYjVlNy0zYTE1ZGViYWNlYmY="

{
    "exp": 1600200132,
    "iat": 1600198332,
    "jti": "af160b82-ad41-45d3-8c7d-28096beb2509",
    "iss": "http://localhost:8888/auth/realms/master",
    "sub": "f41828f6-d597-41cb-9081-46c2d7a4d76b",
    "typ": "Bearer",
    "azp": "quarkus",
    "session_state": "0fdbbd83-35f9-4f4f-912a-c17979c2a87b",
    "preferred_username": "test_viewer",
    "email": "test_viewer@example.com",
    "email_verified": true,
    "acr": "1",
    "scope": "email profile",
    "roles": [
        "viewer"
    ],
    "client_id": "quarkus",
    "username": "test_viewer",
    "active": true
}
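
As a side note, the Basic credentials sent to the introspection endpoint above are nothing more than the Base64-encoded client_id:client_secret pair. Here is a minimal Java sketch (the class name is mine, not part of the sample project) that reproduces the header value used in the request:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Build the value of the Authorization header used by the introspection call:
    // "Basic " + base64(client_id + ":" + client_secret)
    public static String of(String clientId, String clientSecret) {
        String credentials = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Prints the same value as in the curl command above
        System.out.println(of("quarkus", "7dd4d516-e06d-4d81-b5e7-3a15debacebf"));
    }
}
```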

The generated access token is valid. So now the only thing we need to do is set it in the Authorization header of the request. The viewer role is allowed for the endpoint GET /employees/{id}, so the HTTP response status is 200 OK or 204 No Content.

$ curl -v http://localhost:8080/employees/1 -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX..."
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /employees/1 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX...
>
< HTTP/1.1 204 No Content
<
* Connection #0 to host localhost left intact

Now, let’s try to call an endpoint that is disallowed for the viewer role. In the request visible below, we try to call the endpoint DELETE /employees/{id}. In line with expectations, the HTTP response status is 403 Forbidden.

$ curl -v -X DELETE http://localhost:8080/employees/1 -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX..."
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> DELETE /employees/1 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX...
>
< HTTP/1.1 403 Forbidden
< Content-Length: 0
<
* Connection #0 to host localhost left intact

Conclusion

It is relatively easy to configure and implement OAuth2 support with Quarkus. However, you may spend a lot of time on Keycloak configuration. That's why I explained step by step how to set up OAuth2 authorization there. Enjoy 🙂

The post Quarkus OAuth2 and security with Keycloak appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/09/16/quarkus-oauth2-and-security-with-keycloak/feed/ 0 8811
Timeouts and Retries In Spring Cloud Gateway https://piotrminkowski.com/2020/02/23/timeouts-and-retries-in-spring-cloud-gateway/ https://piotrminkowski.com/2020/02/23/timeouts-and-retries-in-spring-cloud-gateway/#comments Sun, 23 Feb 2020 22:47:09 +0000 http://piotrminkowski.com/?p=7772 In this article I’m going to describe two features of Spring Cloud Gateway: retrying based on GatewayFilter pattern and timeout handling based on a global configuration. In some previous articles in this series I have described rate limiting based on Redis, and a circuit breaker pattern built with Resilience4J. For more details about those two […]

The post Timeouts and Retries In Spring Cloud Gateway appeared first on Piotr's TechBlog.

]]>
In this article, I’m going to describe two features of Spring Cloud Gateway: retrying based on the GatewayFilter pattern and timeout handling based on a global configuration. In previous articles in this series, I described rate limiting based on Redis and a circuit breaker pattern built with Resilience4J. For more details about those two features, you may refer to the following blog posts:

Example

We use the same repository as for the two previous articles about Spring Cloud Gateway. The address of the repository is https://github.com/piomin/sample-spring-cloud-gateway.git. The test class dedicated to the current article is GatewayRetryTest.

Implementation and testing

As you probably know, most of the operations in Spring Cloud Gateway are realized using the filter pattern, implemented by Spring Framework’s GatewayFilter. Here, we can modify incoming requests and outgoing responses before or after sending the downstream request.
As in the examples described in my two previous articles about Spring Cloud Gateway, we will build a JUnit test class. It leverages the Testcontainers MockServer module for running a mock that exposes REST endpoints.
Before running the test, we need to prepare a sample route containing the Retry filter. When defining this type of GatewayFilter, we may set multiple parameters. Typically, you will use the following three of them:

  • retries – the number of retries that should be attempted for a single incoming request. The default value of this property is 3.
  • statuses – the list of HTTP status codes that should be retried, represented using the org.springframework.http.HttpStatus enum names.
  • backoff – the backoff policy used for calculating the delay between subsequent retry attempts. By default, this property is disabled.
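
For reference, this type of filter can also be declared in application.yml instead of system properties. The following is a hedged sketch with an assumed route id and downstream URI (the parameter values match those used later in this article):

```yaml
spring:
  cloud:
    gateway:
      routes:
      - id: account-service
        uri: http://localhost:8091   # placeholder address of the downstream service
        predicates:
        - Path=/account/**
        filters:
        - name: Retry
          args:
            retries: 10
            statuses: INTERNAL_SERVER_ERROR
            backoff:
              firstBackoff: 50ms
              maxBackoff: 500ms
              factor: 2
              basedOnPreviousValue: true
```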

Let’s start with the simplest scenario – using the default parameter values. In that case, we just need to set the name of the GatewayFilter for a route – Retry.

@ClassRule
public static MockServerContainer mockServer = new MockServerContainer();

@BeforeClass
public static void init() {
   System.setProperty("spring.cloud.gateway.routes[0].id", "account-service");
   System.setProperty("spring.cloud.gateway.routes[0].uri", "http://192.168.99.100:" + mockServer.getServerPort());
   System.setProperty("spring.cloud.gateway.routes[0].predicates[0]", "Path=/account/**");
   System.setProperty("spring.cloud.gateway.routes[0].filters[0]", "RewritePath=/account/(?<path>.*), /$\\{path}");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].name", "Retry");
   MockServerClient client = new MockServerClient(mockServer.getContainerIpAddress(), mockServer.getServerPort());
   client.when(HttpRequest.request()
      .withPath("/1"), Times.exactly(3))
      .respond(response()
         .withStatusCode(500)
         .withBody("{\"errorCode\":\"5.01\"}")
         .withHeader("Content-Type", "application/json"));
   client.when(HttpRequest.request()
      .withPath("/1"))
      .respond(response()
         .withBody("{\"id\":1,\"number\":\"1234567891\"}")
         .withHeader("Content-Type", "application/json"));
   // OTHER RULES
}

Our test method is very simple. It just uses Spring’s TestRestTemplate to perform a single call to the test endpoint.

@Autowired
TestRestTemplate template;

@Test
public void testAccountService() {
   LOGGER.info("Sending /1...");
   ResponseEntity r = template.exchange("/account/{id}", HttpMethod.GET, null, Account.class, 1);
   LOGGER.info("Received: status->{}, payload->{}", r.getStatusCodeValue(), r.getBody());
   Assert.assertEquals(200, r.getStatusCodeValue());
}

Before running the test, we will change the logging level for Spring Cloud Gateway to see additional information about the retrying process.


logging.level.org.springframework.cloud.gateway.filter.factory: TRACE

Since we didn’t set any backoff policy, the subsequent attempts were performed without any delay. As you can see in the picture below, the default number of retries is 3, and the filter retries all HTTP 5XX codes (SERVER_ERROR).

timeouts-and-retries-in-spring-cloud-gateway-defaults

Now, let’s provide a slightly more advanced configuration. We can change the number of retries and set an exact HTTP status code for retrying instead of a series of codes. In our case, the retried status code is HTTP 500, since it is returned by our mock endpoint. We can also enable a backoff retrying policy starting from 50ms up to a maximum of 500ms. The factor is 2, which means that the backoff is calculated using the formula prevBackoff * factor. The formula becomes slightly different when you set the property basedOnPreviousValue to false – firstBackoff * (factor ^ n). Here’s the appropriate configuration for our current test.

@ClassRule
public static MockServerContainer mockServer = new MockServerContainer();

@BeforeClass
public static void init() {
   System.setProperty("spring.cloud.gateway.routes[0].id", "account-service");
   System.setProperty("spring.cloud.gateway.routes[0].uri", "http://192.168.99.100:" + mockServer.getServerPort());
   System.setProperty("spring.cloud.gateway.routes[0].predicates[0]", "Path=/account/**");
   System.setProperty("spring.cloud.gateway.routes[0].filters[0]", "RewritePath=/account/(?<path>.*), /$\\{path}");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].name", "Retry");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.retries", "10");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.statuses", "INTERNAL_SERVER_ERROR");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.firstBackoff", "50ms");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.maxBackoff", "500ms");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.factor", "2");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.basedOnPreviousValue", "true");
   MockServerClient client = new MockServerClient(mockServer.getContainerIpAddress(), mockServer.getServerPort());
   client.when(HttpRequest.request()
      .withPath("/1"), Times.exactly(3))
      .respond(response()
         .withStatusCode(500)
         .withBody("{\"errorCode\":\"5.01\"}")
         .withHeader("Content-Type", "application/json"));
   client.when(HttpRequest.request()
      .withPath("/1"))
      .respond(response()
         .withBody("{\"id\":1,\"number\":\"1234567891\"}")
         .withHeader("Content-Type", "application/json"));
   // OTHER RULES
}

If you run the same test one more time with the new configuration, the logs look a little different. I have highlighted the most important differences in the picture below. As you can see, the current number of retries is 10, applied only to the HTTP 500 status. After setting a backoff policy, the first retry attempt is performed after 50ms, the second after 100ms, the third after 200ms, etc.

timeouts-and-retries-in-spring-cloud-gateway-backoff
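To make the backoff arithmetic concrete, here is a small, self-contained Java sketch (my own illustration, not Spring Cloud Gateway source code) that computes the delay sequence for basedOnPreviousValue=true, i.e. prevBackoff * factor capped at maxBackoff:

```java
import java.util.Arrays;

public class BackoffSequence {

    // Returns the delays (in ms) before each retry attempt for the
    // basedOnPreviousValue=true policy: next = min(prev * factor, max)
    public static long[] delays(long firstBackoff, long maxBackoff, long factor, int retries) {
        long[] result = new long[retries];
        long backoff = firstBackoff;
        for (int i = 0; i < retries; i++) {
            result[i] = Math.min(backoff, maxBackoff);
            backoff = result[i] * factor;
        }
        return result;
    }

    public static void main(String[] args) {
        // firstBackoff=50ms, maxBackoff=500ms, factor=2, as in the test configuration
        System.out.println(Arrays.toString(delays(50, 500, 2, 6)));
        // → [50, 100, 200, 400, 500, 500]
    }
}
```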

We have already analyzed the retry mechanism in Spring Cloud Gateway. Timeout handling is another important aspect of request routing. With Spring Cloud Gateway, we may easily set a global read and connect timeout. Alternatively, we may also define them for each route separately. Let’s add the following property to our test route definition. It sets a global response timeout of 100ms. Now, our test route contains the Retry filter together with the newly added global read timeout of 100ms.

System.setProperty("spring.cloud.gateway.httpclient.response-timeout", "100ms");

Alternatively, we may set the timeout per single route. If we preferred such a solution, here is the line we would add to our sample test.

System.setProperty("spring.cloud.gateway.routes[1].metadata.response-timeout", "100");

Then we define another test endpoint, available under the context path /2, with a 200ms delay. Our current test method is pretty similar to the previous one, except that we expect HTTP 504 as the result.

@Test
public void testAccountServiceFail() {
   LOGGER.info("Sending /2...");
   ResponseEntity<Account> r = template.exchange("/account/{id}", HttpMethod.GET, null, Account.class, 2);
   LOGGER.info("Received: status->{}, payload->{}", r.getStatusCodeValue(), r.getBody());
   Assert.assertEquals(504, r.getStatusCodeValue());
}

Let’s run our test. The result is visible in the picture below. I have also highlighted the most important parts of the logs. After several failed retry attempts, the delay between subsequent attempts has been set to the maximum backoff time – 500ms. Since each downstream call times out after 100ms, the visible interval between retry attempts is around 600ms. Moreover, the Retry filter by default handles IOException and TimeoutException, which is visible in the logs (the exceptions parameter).

timeouts-and-retries-in-spring-cloud-gateway-logs

Summary

The current article is the last in the series about traffic management in Spring Cloud Gateway. I have already described the following patterns: rate limiting, circuit breaker, fallback, failure retries, and timeout handling. That is only a part of the Spring Cloud Gateway feature set. I hope that my articles help you build an API gateway for your microservices in an optimal way.

The post Timeouts and Retries In Spring Cloud Gateway appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/02/23/timeouts-and-retries-in-spring-cloud-gateway/feed/ 2 7772
Microservices API Documentation with Springdoc OpenAPI https://piotrminkowski.com/2020/02/20/microservices-api-documentation-with-springdoc-openapi/ https://piotrminkowski.com/2020/02/20/microservices-api-documentation-with-springdoc-openapi/#comments Thu, 20 Feb 2020 12:52:40 +0000 http://piotrminkowski.com/?p=7756 I have already written about documentation for microservices more than two years ago in my article Microservices API Documentation with Swagger2. In that case, I used project SpringFox for auto-generating Swagger documentation for Spring Boot applications. Since that time the SpringFox library has not been actively developed by the maintainers – the latest version has […]

The post Microservices API Documentation with Springdoc OpenAPI appeared first on Piotr's TechBlog.

]]>
I wrote about documentation for microservices more than two years ago in my article Microservices API Documentation with Swagger2. In that case, I used the SpringFox project for auto-generating Swagger documentation for Spring Boot applications. Since that time, the SpringFox library has not been actively developed by its maintainers – the latest version was released in June 2018. Currently, the most important problems with this library are the lack of support for OpenAPI 3 and for Spring reactive APIs built using WebFlux. All these features are implemented by the Springdoc OpenAPI library. Therefore, it may be considered a replacement for SpringFox as a Swagger and OpenAPI 3 generation tool for Spring Boot applications.

Example

As the code example in this article, we will use a typical microservices architecture built with Spring Cloud. It consists of a Spring Cloud Config Server, Eureka discovery, and Spring Cloud Gateway as the API gateway. We also have three microservices, which expose a REST API and are hidden behind the gateway from external clients. Each of them exposes OpenAPI documentation that may be accessed on the gateway using Swagger UI. The repository with the source code is available on GitHub: https://github.com/piomin/sample-spring-microservices-new.git. This repository has been used as an example in another article, so it contains code not only for the Springdoc library demo. The following picture shows the architecture of our system.

microservices-api-documentation-springdoc-openapi.png

Implementing microservices with Springdoc OpenAPI

The first piece of good news about the Springdoc OpenAPI library is that it can coexist with the SpringFox library without any conflicts. This may simplify your migration to the new tool if anybody is using your Swagger documentation, for example for code generation of contract tests. To enable Springdoc for a standard Spring MVC based application, you need to include the following dependency in your Maven pom.xml.

<dependency>
   <groupId>org.springdoc</groupId>
   <artifactId>springdoc-openapi-webmvc-core</artifactId>
   <version>1.2.32</version>
</dependency>

Each of our Spring Boot microservices is built on top of Spring MVC and provides endpoints for standard synchronous REST communication. However, the API gateway, built on top of Spring Cloud Gateway, uses Netty as an embedded server and is based on reactive Spring WebFlux. It also provides the Swagger UI for accessing the documentation exposed by all the microservices, so it must include a library that enables the UI. The following two libraries must be included to enable Springdoc support for a reactive application based on Spring WebFlux.

<dependency>
   <groupId>org.springdoc</groupId>
   <artifactId>springdoc-openapi-webflux-core</artifactId>
   <version>1.2.31</version>
</dependency>
<dependency>
   <groupId>org.springdoc</groupId>
   <artifactId>springdoc-openapi-webflux-ui</artifactId>
   <version>1.2.31</version>
</dependency>

We can customize the default behavior of this library by setting properties in the Spring Boot configuration file or by using beans. For example, we don’t want to generate OpenAPI manifests for all HTTP endpoints exposed by the application, such as Spring-specific endpoints, so we may define a base package property for scanning, as shown below. In our source code example, each application’s YAML configuration file is located inside the config-service module.


springdoc:
  packagesToScan: pl.piomin.services.department

Here’s the main class of employee-service. We use the @OpenAPIDefinition annotation to define a description for the application displayed on the Swagger site. As you can see, we can still have SpringFox enabled with @EnableSwagger2.

@SpringBootApplication
@EnableDiscoveryClient
@EnableSwagger2
@OpenAPIDefinition(info =
   @Info(title = "Employee API", version = "1.0", description = "Documentation Employee API v1.0")
)
public class EmployeeApplication {

   public static void main(String[] args) {
      SpringApplication.run(EmployeeApplication.class, args);
   }

}

OpenAPI on Spring Cloud Gateway

Once you start each microservice, it will expose the endpoint /v3/api-docs. We can customize that context path by using the springdoc.api-docs.path property in the Spring configuration file. Since that is not required, we may proceed to the implementation on the Spring Cloud Gateway. Springdoc doesn’t provide a class similar to SpringFox’s SwaggerResource, which was used for exposing multiple APIs from different microservices in the previous article. Fortunately, there is a grouping mechanism that allows splitting OpenAPI definitions into different groups with a given name. To use it, we need to declare a list of GroupedOpenApi beans.
Here’s the fragment of code inside gateway-service responsible for creating the list of OpenAPI resources handled by the gateway. First, we get all the routes defined for the services using the RouteDefinitionLocator bean. Then we fetch the id of each route and set it as the group name. As a result, we have multiple OpenAPI resources under the path /v3/api-docs/{SERVICE_NAME}, for example /v3/api-docs/employee.

@Autowired
RouteDefinitionLocator locator;

@Bean
public List<GroupedOpenApi> apis() {
   List<GroupedOpenApi> groups = new ArrayList<>();
   List<RouteDefinition> definitions = locator.getRouteDefinitions().collectList().block();
   definitions.stream().filter(routeDefinition -> routeDefinition.getId().matches(".*-service")).forEach(routeDefinition -> {
      String name = routeDefinition.getId().replaceAll("-service", "");
      groups.add(GroupedOpenApi.builder().pathsToMatch("/" + name + "/**").setGroup(name).build());
   });
   return groups;
}

An API path like /v3/api-docs/{SERVICE_NAME} is not exactly what we want to achieve, because our routing to the downstream services is based on the service name fetched from discovery. So if you call an address like http://localhost:8060/employee/**, it is automatically load balanced between all registered instances of employee-service. Here’s the routes definition in the gateway-service configuration.

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
      - id: employee-service
        uri: lb://employee-service
        predicates:
        - Path=/employee/**
        filters:
        - RewritePath=/employee/(?<path>.*), /$\{path}
      - id: department-service
        uri: lb://department-service
        predicates:
        - Path=/department/**
        filters:
        - RewritePath=/department/(?<path>.*), /$\{path}
      - id: organization-service
        uri: lb://organization-service
        predicates:
        - Path=/organization/**
        filters:
        - RewritePath=/organization/(?<path>.*), /$\{path}

Since Springdoc doesn’t allow us to customize the default behavior of the grouping mechanism to change the generated paths, we need to provide a workaround. My proposition is just to add a new route definition inside the gateway configuration dedicated to OpenAPI path handling. It rewrites the path /v3/api-docs/{SERVICE_NAME} into /{SERVICE_NAME}/v3/api-docs, which is handled by the other routes responsible for interacting with Eureka discovery.

      - id: openapi
        uri: http://localhost:${server.port}
        predicates:
        - Path=/v3/api-docs/**
        filters:
        - RewritePath=/v3/api-docs/(?<path>.*), /$\{path}/v3/api-docs
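
The effect of that RewritePath rule can be illustrated with a plain Java regex replacement (a sketch of my own, not gateway internals; $\{path} in the route definition is just the escaped form of the ${path} group reference):

```java
public class OpenApiPathRewrite {

    // Apply the same regex and replacement as the RewritePath filter above
    public static String rewrite(String path) {
        return path.replaceAll("/v3/api-docs/(?<path>.*)", "/${path}/v3/api-docs");
    }

    public static void main(String[] args) {
        // The gateway turns the grouped OpenAPI path into the service-prefixed one
        System.out.println(rewrite("/v3/api-docs/employee")); // → /employee/v3/api-docs
    }
}
```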

Testing

To test our sample system, we need to run all the microservices, the config server, discovery, and the gateway. While the microservices are available under dynamically generated ports, the config server is available under 8888, discovery under 8061, and the gateway under 8060. We can access each microservice by calling http://localhost:8060/{SERVICE_PATH}/**, for example http://localhost:8060/employee/**. The Swagger UI is available under the address http://localhost:8060/swagger-ui.html. But first, let’s take a look at Eureka after running all the required Spring Boot applications.

microservice-api-documentation-with-springdoc-openapi

After accessing Swagger UI exposed on the gateway you may see that we can choose between all three microservices registered in the discovery. This is exactly what we wanted to achieve.

microservice-api-documentation-with-springdoc-openapi-ui

Conclusion

Springdoc OpenAPI is compatible with OpenAPI 3 and supports Spring WebFlux, while SpringFox does not. Therefore, the choice seems obvious, especially if you are using reactive APIs or Spring Cloud Gateway. In this article, I demonstrated how to use Springdoc in a microservices architecture with a gateway pattern.

The post Microservices API Documentation with Springdoc OpenAPI appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/02/20/microservices-api-documentation-with-springdoc-openapi/feed/ 41 7756
Versioning REST API with Spring Boot and Swagger https://piotrminkowski.com/2018/02/19/versioning-rest-api-with-spring-boot-and-swagger/ https://piotrminkowski.com/2018/02/19/versioning-rest-api-with-spring-boot-and-swagger/#respond Mon, 19 Feb 2018 14:26:04 +0000 https://piotrminkowski.wordpress.com/?p=6330 One thing’s for sure. If you don’t have to version your API, do not try to do that. However, sometimes you have to. A large part of the most popular services like Twitter, Facebook, Netflix, or PayPal is versioning their REST APIs. The advantages and disadvantages of that approach are obvious. On the one hand, […]

The post Versioning REST API with Spring Boot and Swagger appeared first on Piotr's TechBlog.

]]>
One thing’s for sure: if you don’t have to version your API, don’t do it. However, sometimes you have to. Many of the most popular services like Twitter, Facebook, Netflix, or PayPal version their REST APIs. The advantages and disadvantages of that approach are obvious. On the one hand, you don’t have to worry about making changes to your API even if many external clients and applications consume it. But on the other hand, you have to maintain different versions of the API implementation in your code, which sometimes may be troublesome.

In this article, I’m going to show you how to maintain several versions of a REST API in your application in the most comfortable way. We will base it on a sample application written on top of the Spring Boot framework, exposing API documentation using the Swagger and SpringFox libraries.

Spring Boot does not provide any dedicated solution for versioning APIs. The situation is different for the SpringFox Swagger2 library, which since version 2.8.0 provides a grouping mechanism that is perfect for generating documentation of a versioned REST API.

I have already introduced Swagger2 together with a Spring Boot application in one of my previous posts. In the article Microservices API Documentation with Swagger2 you may read how to use Swagger2 for generating API documentation for all the independent microservices and publishing it in one place – on the API gateway.

Different approaches to API versioning

There are several different ways to provide API versioning in your application. The most popular are:

  1. Through the URI path – you include the version number in the URL path of the endpoint, for example /api/v1/persons
  2. Through query parameters – you pass the version number as a query parameter with a specified name, for example /api/persons?version=1
  3. Through custom HTTP headers – you define a new header that contains the version number in the request
  4. Through content negotiation – the version number is included in the “Accept” header together with the accepted content type. A request with cURL would look like the following: curl -H "Accept: application/vnd.piomin.v1+json" http://localhost:8080/api/persons

Which of these approaches to implement in your application is up to you. We could discuss the advantages and disadvantages of every single approach, but that is not the main purpose of this article. The main purpose is to show you how to implement versioning in a Spring Boot application and then publish the API documentation automatically using Swagger2. The sample application source code is available on GitHub (https://github.com/piomin/sample-api-versioning.git). I have implemented two of the approaches described above – points 1 and 4.
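For completeness, approach 3 (a custom HTTP header) usually boils down to resolving the effective version from a request header value, with a fallback default when the client sends none. Below is a minimal, framework-free sketch of such a resolver; the header name X-API-Version and the default value are my assumptions, not part of the sample project.

```java
import java.util.Optional;

// Hypothetical resolver for approach 3: the client sends an X-API-Version
// header, and we fall back to a default when it is missing or malformed.
class ApiVersionResolver {

    private static final String DEFAULT_VERSION = "1.0";

    static String resolve(Optional<String> headerValue) {
        return headerValue
                .filter(v -> v.matches("\\d+\\.\\d+"))
                .orElse(DEFAULT_VERSION);
    }
}
```

In a real controller you would read the header with @RequestHeader and dispatch accordingly, or match handlers with the headers attribute of @RequestMapping.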

Enabling Swagger for Spring Boot

Swagger2 can be enabled in a Spring Boot application by including the SpringFox library. In fact, this is a suite of Java libraries used for automating the generation of machine- and human-readable specifications for JSON APIs written using the Spring Framework. It supports formats such as Swagger, RAML and jsonapi. To enable it for your application, include the following Maven dependencies in the project: io.springfox:springfox-swagger-ui, io.springfox:springfox-swagger2, io.springfox:springfox-spring-web. Then you will have to annotate the main class with @EnableSwagger2 and define a Docket object. Docket is SpringFox’s primary configuration mechanism for Swagger 2.0. We will discuss the details in the next section along with a sample for each way of versioning the API.

Sample API

Our sample API is very simple. It exposes basic CRUD methods for the Person entity. There are three versions of the API available for external clients: 1.0, 1.1 and 1.2. In version 1.1 I have changed the method for updating the Person entity: in version 1.0 it was available under the /person path, while now it is available under /person/{id}. This is the only difference between versions 1.0 and 1.1. There is also only one difference between versions 1.1 and 1.2: instead of the birthDate field, the API returns age as an integer. This change affects all the endpoints except DELETE /person/{id}. Now, let’s proceed to the implementation.
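The only model change between 1.1 and 1.2 is that birthDate is replaced by a derived age. The conversion a mapper between the two representations has to perform could look like the sketch below; the class and method names are illustrative, not the actual PersonMapper from the repository.

```java
import java.time.LocalDate;
import java.time.Period;

// Illustrative conversion behind a hypothetical PersonOld -> PersonCurrent mapping:
// the v1.2 "age" field is derived from the v1.0/v1.1 "birthDate" field.
class PersonAgeMapper {

    static int toAge(LocalDate birthDate, LocalDate today) {
        return Period.between(birthDate, today).getYears();
    }
}
```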

Versioning using URI path

Here’s the full implementation of URI path versioning inside Spring @RestController.

@RestController
@RequestMapping("/person")
public class PersonController {

   @Autowired
   PersonMapper mapper;
   @Autowired
   PersonRepository repository;

   @PostMapping({"/v1.0", "/v1.1"})
   public PersonOld add(@RequestBody PersonOld person) {
      return (PersonOld) repository.add(person);
   }

   @PostMapping("/v1.2")
   public PersonCurrent add(@RequestBody PersonCurrent person) {
      return mapper.map((PersonOld) repository.add(person));
   }

   @PutMapping("/v1.0")
   @Deprecated
   public PersonOld update(@RequestBody PersonOld person) {
      return (PersonOld) repository.update(person);
   }

   @PutMapping("/v1.1/{id}")
   public PersonOld update(@PathVariable("id") Long id, @RequestBody PersonOld person) {
      return (PersonOld) repository.update(person);
   }

   @PutMapping("/v1.2/{id}")
   public PersonCurrent update(@PathVariable("id") Long id, @RequestBody PersonCurrent person) {
      return mapper.map((PersonOld) repository.update(person));
   }

   @GetMapping({"/v1.0/{id}", "/v1.1/{id}"})
   public PersonOld findByIdOld(@PathVariable("id") Long id) {
      return (PersonOld) repository.findById(id);
   }

   @GetMapping("/v1.2/{id}")
   public PersonCurrent findById(@PathVariable("id") Long id) {
      return mapper.map((PersonOld) repository.findById(id));
   }

   @DeleteMapping({"/v1.0/{id}", "/v1.1/{id}", "/v1.2/{id}"})
   public void delete(@PathVariable("id") Long id) {
      repository.delete(id);
   }

}

If you would like to have all three versions available in the single generated API specification, you should declare three Docket @Beans – one per version. In this case, the Swagger group concept, already introduced by SpringFox, comes in handy. The reason this concept was introduced is the need to support applications that require more than one Swagger resource listing. Usually, you need more than one resource listing in order to provide different versions of the same API. We can assign a group to every Docket just by invoking the groupName DSL method on it. Because different versions of the API methods are implemented within the same controller, we have to distinguish them by declaring a path regex matching the selected version. All other settings are standard.

@Bean
public Docket swaggerPersonApi10() {
   return new Docket(DocumentationType.SWAGGER_2)
      .groupName("person-api-1.0")
      .select()
      .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
      .paths(regex("/person/v1.0.*"))
      .build()
      .apiInfo(new ApiInfoBuilder().version("1.0").title("Person API").description("Documentation Person API v1.0").build());
}

@Bean
public Docket swaggerPersonApi11() {
   return new Docket(DocumentationType.SWAGGER_2)
      .groupName("person-api-1.1")
      .select()
      .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
      .paths(regex("/person/v1.1.*"))
      .build()
      .apiInfo(new ApiInfoBuilder().version("1.1").title("Person API").description("Documentation Person API v1.1").build());
}

@Bean
public Docket swaggerPersonApi12() {
   return new Docket(DocumentationType.SWAGGER_2)
      .groupName("person-api-1.2")
      .select()
      .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.versioning.controller"))
      .paths(regex("/person/v1.2.*"))
      .build()
      .apiInfo(new ApiInfoBuilder().version("1.2").title("Person API").description("Documentation Person API v1.2").build());
}
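As a quick sanity check (not part of the sample project), we can verify that each group’s path regex selects only the endpoints of its version:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Verifies which controller paths a Docket group's regex would select.
class GroupRegexCheck {

    static List<String> select(String regex, List<String> paths) {
        Pattern pattern = Pattern.compile(regex);
        return paths.stream()
                .filter(path -> pattern.matcher(path).matches())
                .collect(Collectors.toList());
    }
}
```

For example, the regex "/person/v1.1.*" matches /person/v1.1/{id} but neither /person/v1.0 nor /person/v1.2/{id}, so each group’s listing stays version-specific.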

Now, we may display the Swagger UI for our API just by opening the /swagger-ui.html path in a web browser. You can switch between all available versions of the API, as you can see in the picture below.

api-1
Switching between available versions of API

The specification is generated per API version. Here’s the documentation for version 1.0. Because the method PUT /person is annotated with @Deprecated, it is crossed out on the generated HTML documentation page.

api-2
Person API 1.0 specification

If you switch to group person-api-1.1 you will see all the methods that contain v1.1 in the path. Among them you may recognize the current version of the PUT method with the {id} field in the path.

api-3
Person API 1.1 specification

When using documentation generated by Swagger, you may easily call every method after expanding it. Here’s a sample call of the method PUT /person/{id} implemented for version 1.2.

api-5
Updating Person entity by calling method PUT from 1.2 version

Versioning using Accept header

To access the implementation of versioning with the ‘Accept’ header, you should switch to the header branch (https://github.com/piomin/sample-api-versioning/tree/header). Here’s the full implementation of content negotiation using ‘Accept’ header versioning inside a Spring @RestController.

@RestController
@RequestMapping("/person")
public class PersonController {

   @Autowired
   PersonMapper mapper;
   @Autowired
   PersonRepository repository;

   @PostMapping(produces = {"application/vnd.piomin.app-v1.0+json", "application/vnd.piomin.app-v1.1+json"})
   public PersonOld add(@RequestBody PersonOld person) {
      return (PersonOld) repository.add(person);
   }

   @PostMapping(produces = "application/vnd.piomin.app-v1.2+json")
   public PersonCurrent add(@RequestBody PersonCurrent person) {
      return mapper.map((PersonOld) repository.add(person));
   }

   @PutMapping(produces = "application/vnd.piomin.app-v1.0+json")
   @Deprecated
   public PersonOld update(@RequestBody PersonOld person) {
      return (PersonOld) repository.update(person);
   }

   @PutMapping(value = "/{id}", produces = "application/vnd.piomin.app-v1.1+json")
   public PersonOld update(@PathVariable("id") Long id, @RequestBody PersonOld person) {
      return (PersonOld) repository.update(person);
   }

   @PutMapping(value = "/{id}", produces = "application/vnd.piomin.app-v1.2+json")
   public PersonCurrent update(@PathVariable("id") Long id, @RequestBody PersonCurrent person) {
      return mapper.map((PersonOld) repository.update(person));
   }

   @GetMapping(name = "findByIdOld", value = "/{idOld}", produces = {"application/vnd.piomin.app-v1.0+json", "application/vnd.piomin.app-v1.1+json"})
   @Deprecated
   public PersonOld findByIdOld(@PathVariable("idOld") Long id) {
      return (PersonOld) repository.findById(id);
   }

   @GetMapping(name = "findById", value = "/{id}", produces = "application/vnd.piomin.app-v1.2+json")
   public PersonCurrent findById(@PathVariable("id") Long id) {
      return mapper.map((PersonOld) repository.findById(id));
   }

   @DeleteMapping(value = "/{id}", produces = {"application/vnd.piomin.app-v1.0+json", "application/vnd.piomin.app-v1.1+json", "application/vnd.piomin.app-v1.2+json"})
   public void delete(@PathVariable("id") Long id) {
      repository.delete(id);
   }

}

We still have to define three Docket @Beans, but the filtering criteria are slightly different. Simple filtering by path is not an option here. We have to create a Predicate for the RequestHandler object and pass it to the apis DSL method. The predicate implementation should filter every method in order to find only those whose produces field contains the required version number. Here’s a sample Docket implementation for version 1.2.

@Bean
public Docket swaggerPersonApi12() {
   return new Docket(DocumentationType.SWAGGER_2)
      .groupName("person-api-1.2")
      .select()
      .apis(p -> {
         if (p.produces() != null) {
            for (MediaType mt : p.produces()) {
               if (mt.toString().equals("application/vnd.piomin.app-v1.2+json")) {
                  return true;
               }
            }
         }
         return false;
      })
      .build()
      .produces(Collections.singleton("application/vnd.piomin.app-v1.2+json"))
      .apiInfo(new ApiInfoBuilder().version("1.2").title("Person API").description("Documentation Person API v1.2").build());
}

As you can see in the picture below the generated methods do not have the version number in the path.

api-6
Person API 1.2 specification for a content negotiation approach

When calling a method for the selected API version, the only difference is in the response’s required content type.

api-7
Updating person and setting response content type
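On the server side, Spring matches the Accept header against the produces attributes for us; but if you ever need to read the version back out of such a vendor media type (for logging or metrics, say), it is just string parsing. A small sketch assuming the application/vnd.piomin.app-vX.Y+json naming scheme used above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the version number from a vendor media type such as
// "application/vnd.piomin.app-v1.2+json"; returns null if absent.
class MediaTypeVersion {

    private static final Pattern VERSION =
            Pattern.compile("application/vnd\\.piomin\\.app-v(\\d+\\.\\d+)\\+json");

    static String extract(String accept) {
        Matcher m = VERSION.matcher(accept);
        return m.find() ? m.group(1) : null;
    }
}
```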

Summary

Versioning is one of the most important concepts in HTTP API design. No matter which versioning approach you choose, you should do everything to describe your API well. This seems especially important in the era of microservices, where your interface may be called by many other independent applications. In this case, creating documentation in isolation from the source code could be troublesome. Swagger solves all of the described problems: it may be easily integrated with your application and supports versioning. Thanks to the SpringFox project it can also be easily customized in your Spring Boot application to meet more advanced demands.

The post Versioning REST API with Spring Boot and Swagger appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2018/02/19/versioning-rest-api-with-spring-boot-and-swagger/feed/ 0 6330
Exposing Microservices over REST Protocol Buffers https://piotrminkowski.com/2017/06/05/exposing-microservices-over-rest-protocol-buffers/ https://piotrminkowski.com/2017/06/05/exposing-microservices-over-rest-protocol-buffers/#comments Mon, 05 Jun 2017 20:42:49 +0000 https://piotrminkowski.wordpress.com/?p=3623 In this article, you will learn how to expose Spring Boot microservices over REST Protocol Buffers. Today exposing RESTful API with JSON protocol is the most common standard. We can find many articles describing the advantages and disadvantages of JSON versus XML. Both these protocols exchange messages in text format. If an important aspect affecting […]

The post Exposing Microservices over REST Protocol Buffers appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to expose Spring Boot microservices over REST with Protocol Buffers. Today, exposing a RESTful API with JSON is the most common standard. We can find many articles describing the advantages and disadvantages of JSON versus XML. Both of these formats exchange messages as text. If performance is an important aspect affecting the choice of communication protocol in your systems, you should definitely pay attention to Protocol Buffers. It is a binary format created by Google as:

A language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more.

Protocol Buffers, sometimes referred to as Protobuf, is not only a message format but also a set of language rules that define the structure of messages. It is extremely useful in service-to-service communication, which has been very well described in the article Beating JSON performance with Protobuf. In that example, Protobuf was about 5 times faster than JSON in tests based on the Spring Boot framework.
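Part of that speed comes from the compact wire format: integers are serialized as base-128 varints instead of text. The sketch below (not from the sample project) shows the varint scheme for non-negative values: seven payload bits per byte, with the high bit set on every byte except the last.

```java
import java.io.ByteArrayOutputStream;

// Minimal base-128 varint encoder, as used by the Protobuf wire format
// for non-negative integers.
class Varint {

    static byte[] encode(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80); // low 7 bits, continuation bit set
            value >>>= 7;
        }
        out.write(value); // final byte, continuation bit clear
        return out.toByteArray();
    }
}
```

For example, 300 encodes to the two bytes 0xAC 0x02, with no field names or quoting as in a JSON document.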

An introduction to Protocol Buffers can be found here. My sample is similar to previous samples from my weblog – it is based on two microservices, account and customer, where customer calls one of account’s endpoints. Let’s begin with the message type definitions provided inside a .proto file. Place your .proto file in the src/main/proto directory. Here’s account.proto defined in the account service. We set java_package and java_outer_classname to define the package and name of the generated Java class. The message definition syntax is pretty intuitive. The Account object generated from that file has three properties: id, customerId and number. There is also the Accounts object, which wraps a list of Account objects.

syntax = "proto3";
package model;
option java_package = "pl.piomin.services.protobuf.account.model";
option java_outer_classname = "AccountProto";

message Accounts {
   repeated Account account = 1;
}

message Account {
   int32 id = 1;
   string number = 2;
   int32 customer_id = 3;
}


Here’s the .proto file definition from the customer service. It is a little more complicated than the previous one from the account service. In addition to its own definitions, it contains the definitions of the account service messages, because they are used by the Feign client.

syntax = "proto3";
package model;
option java_package = "pl.piomin.services.protobuf.customer.model";
option java_outer_classname = "CustomerProto";

message Accounts {
   repeated Account account = 1;
}

message Account {
   int32 id = 1;
   string number = 2;
   int32 customer_id = 3;
}

message Customers {
   repeated Customer customers = 1;
}

message Customer {
   int32 id = 1;
   string pesel = 2;
   string name = 3;
   CustomerType type = 4;
   repeated Account accounts = 5;
   enum CustomerType {
      INDIVIDUAL = 0;
      COMPANY = 1;
   }
}


We generate source code from the message definitions above using the protoc-jar-maven-plugin Maven plugin. The plugin needs to have the protocExecutable file location set. The executable can be downloaded from Google’s Protocol Buffers download site.

<plugin>
  <groupId>com.github.os72</groupId>
  <artifactId>protoc-jar-maven-plugin</artifactId>
  <version>3.11.4</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <addProtoSources>all</addProtoSources>
        <includeMavenTypes>direct</includeMavenTypes>
        <outputDirectory>src/main/generated</outputDirectory>
        <inputDirectories>
          <include>src/main/proto</include>
        </inputDirectories>
      </configuration>
    </execution>
  </executions>
</plugin>


The Protobuf classes are generated into the src/main/generated output directory. Let’s add that source directory to the Maven sources with the build-helper-maven-plugin.

<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>build-helper-maven-plugin</artifactId>
   <executions>
      <execution>
         <id>add-source</id>
         <phase>generate-sources</phase>
         <goals>
            <goal>add-source</goal>
         </goals>
         <configuration>
            <sources>
            <source>src/main/generated</source>
            </sources>
         </configuration>
      </execution>
   </executions>
</plugin>


The sample application source code is available on GitHub. Before proceeding to the next steps, build the application using the mvn clean install command. The generated classes are then available in the src/main/generated directory and our microservices are ready to run. Now, let me describe some implementation details. We need two dependencies in the Maven pom.xml to use Protobuf.

<dependency>
   <groupId>com.google.protobuf</groupId>
   <artifactId>protobuf-java</artifactId>
   <version>3.24.2</version>
</dependency>
<dependency>
   <groupId>com.googlecode.protobuf-java-format</groupId>
   <artifactId>protobuf-java-format</artifactId>
   <version>1.4</version>
</dependency>


Then, we need to declare a ProtobufHttpMessageConverter @Bean and inject it into the RestTemplate @Bean.

@Bean
@Primary
ProtobufHttpMessageConverter protobufHttpMessageConverter() {
   return new ProtobufHttpMessageConverter();
}
@Bean
RestTemplate restTemplate(ProtobufHttpMessageConverter hmc) {
   return new RestTemplate(Arrays.asList(hmc));
}


Here’s the @RestController code. Account and Accounts from the generated AccountProto class are returned as the response body in all three API methods visible below. All objects generated from .proto files have a newBuilder method used for creating new object instances. I also set application/x-protobuf as the response content type.

@RestController
public class AccountController {
   @Autowired
   AccountRepository repository;

   protected Logger logger = Logger.getLogger(AccountController.class.getName());

   @RequestMapping(value = "/accounts/{number}", produces = "application/x-protobuf")
   public Account findByNumber(@PathVariable("number") String number) {
      logger.info(String.format("Account.findByNumber(%s)", number));
      return repository.findByNumber(number);
   }

   @RequestMapping(value = "/accounts/customer/{customer}", produces = "application/x-protobuf")
   public Accounts findByCustomer(@PathVariable("customer") Integer customerId) {
      logger.info(String.format("Account.findByCustomer(%s)", customerId));
      return Accounts.newBuilder().addAllAccount(repository.findByCustomer(customerId)).build();
   }

   @RequestMapping(value = "/accounts", produces = "application/x-protobuf")
   public Accounts findAll() {
      logger.info("Account.findAll()");
      return Accounts.newBuilder().addAllAccount(repository.findAll()).build();
   }
}


The method GET /accounts/customer/{customer} is called from the customer service using a Feign client.

@FeignClient(value = "account-service")
public interface AccountClient {
   @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
   Accounts getAccounts(@PathVariable("customerId") Integer customerId);
}


We can easily test the configuration using the JUnit test class visible below.

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
public class AccountApplicationTest {

   protected Logger logger = Logger.getLogger(AccountApplicationTest.class.getName());
   @Autowired
   TestRestTemplate template;

   @Test
   public void testFindByNumber() {
      Account a = this.template.getForObject("/accounts/{id}", Account.class, "111111");
      logger.info("Account[\n" + a + "]");
   }

   @Test
   public void testFindByCustomer() {
      Accounts a = this.template.getForObject("/accounts/customer/{customer}", Accounts.class, "2");
      logger.info("Accounts[\n" + a + "]");
   }
   @Test
   public void testFindAll() {
      Accounts a = this.template.getForObject("/accounts", Accounts.class);
      logger.info("Accounts[\n" + a + "]");
   }

   @TestConfiguration
   static class Config {
      @Bean
      public RestTemplateBuilder restTemplateBuilder() {
         return new RestTemplateBuilder().additionalMessageConverters(new ProtobufHttpMessageConverter());
      }
   }
}


Conclusion

This article shows how to enable Protocol Buffers for a microservices project based on Spring Boot. Protocol Buffers is an alternative to text-based formats like XML or JSON and surpasses them in terms of performance. Adopting this protocol in a Spring Boot application is pretty simple. For microservices, we can still use Spring Cloud components like Feign or Ribbon in combination with Protocol Buffers, the same as with REST over JSON or XML.

The post Exposing Microservices over REST Protocol Buffers appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/06/05/exposing-microservices-over-rest-protocol-buffers/feed/ 2 3623
Microservices API Documentation with Swagger2 https://piotrminkowski.com/2017/04/14/microservices-api-documentation-with-swagger2/ https://piotrminkowski.com/2017/04/14/microservices-api-documentation-with-swagger2/#comments Fri, 14 Apr 2017 07:53:49 +0000 https://piotrminkowski.wordpress.com/?p=2400 Swagger is the most popular tool for designing, building and documenting RESTful APIs. It has nice integration with Spring Boot. To use it in conjunction with Spring we need to add the following two dependencies to Maven pom.xml. Swagger configuration for a single Spring Boot service is pretty simple. The level of complexity is greater […]

The post Microservices API Documentation with Swagger2 appeared first on Piotr's TechBlog.

]]>
Swagger is the most popular tool for designing, building and documenting RESTful APIs. It has nice integration with Spring Boot. To use it in conjunction with Spring we need to add the following two dependencies to Maven pom.xml.

<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>2.6.1</version>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>2.6.1</version>
</dependency>

Swagger configuration for a single Spring Boot service is pretty simple. The level of complexity is greater if you want to create a single set of documentation for several separate microservices. Such documentation should be available on the API gateway. In the picture below you can see the architecture of our sample solution.

swagger

First, we should configure Swagger on every microservice. To enable it, we have to declare @EnableSwagger2 on the main class. The API documentation will be automatically generated from the source code by the Swagger library during application startup. The process is controlled by a Docket @Bean, which is also declared in the main class. The API version is read from the pom.xml file using MavenXpp3Reader. We also set some other properties like title, author and description using the apiInfo method. By default, Swagger generates documentation for all REST services, including those created by Spring Boot. We would like to limit the documentation to our @RestController located inside the pl.piomin.microservices.advanced.account.api package.

@Bean
public Docket api() throws IOException, XmlPullParserException {
   MavenXpp3Reader reader = new MavenXpp3Reader();
   Model model = reader.read(new FileReader("pom.xml"));
   return new Docket(DocumentationType.SWAGGER_2)
      .select()
      .apis(RequestHandlerSelectors.basePackage("pl.piomin.microservices.advanced.account.api"))
      .paths(PathSelectors.any())
      .build().apiInfo(new ApiInfo("Account Service Api Documentation", "Documentation automatically generated", model.getParent().getVersion(), null, new Contact("Piotr Mińkowski", "piotrminkowski.wordpress.com", "piotr.minkowski@gmail.com"), null, null));
}

Here’s our API RESTful controller.

@RestController
public class AccountController {

   @Autowired
   AccountRepository repository;

   protected Logger logger = Logger.getLogger(AccountController.class.getName());

   @RequestMapping(value = "/accounts/{number}", method = RequestMethod.GET)
   public Account findByNumber(@PathVariable("number") String number) {
      logger.info(String.format("Account.findByNumber(%s)", number));
      return repository.findByNumber(number);
   }

   @RequestMapping(value = "/accounts/customer/{customer}", method = RequestMethod.GET)
   public List findByCustomer(@PathVariable("customer") String customerId) {
      logger.info(String.format("Account.findByCustomer(%s)", customerId));
      return repository.findByCustomerId(customerId);
   }

   @RequestMapping(value = "/accounts", method = RequestMethod.GET)
   public List findAll() {
      logger.info("Account.findAll()");
      return repository.findAll();
   }

   @RequestMapping(value = "/accounts", method = RequestMethod.POST)
   public Account add(@RequestBody Account account) {
      logger.info(String.format("Account.add(%s)", account));
      return repository.save(account);
   }

   @RequestMapping(value = "/accounts", method = RequestMethod.PUT)
   public Account update(@RequestBody Account account) {
      logger.info(String.format("Account.update(%s)", account));
      return repository.save(account);
   }

}

A similar Swagger configuration exists on every microservice. The API documentation UI is available under /swagger-ui.html. Now, we would like to enable one documentation site embedded on the gateway for all microservices. Here’s a Spring @Component implementing the SwaggerResourcesProvider interface, which overrides the default provider configuration in the Spring context.

@Component
@Primary
@EnableAutoConfiguration
public class DocumentationController implements SwaggerResourcesProvider {

   @Override
   public List get() {
      List resources = new ArrayList<>();
      resources.add(swaggerResource("account-service", "/api/account/v2/api-docs", "2.0"));
      resources.add(swaggerResource("customer-service", "/api/customer/v2/api-docs", "2.0"));
      resources.add(swaggerResource("product-service", "/api/product/v2/api-docs", "2.0"));
      resources.add(swaggerResource("transfer-service", "/api/transfer/v2/api-docs", "2.0"));
      return resources;
   }

   private SwaggerResource swaggerResource(String name, String location, String version) {
      SwaggerResource swaggerResource = new SwaggerResource();
      swaggerResource.setName(name);
      swaggerResource.setLocation(location);
      swaggerResource.setSwaggerVersion(version);
      return swaggerResource;
   }

}

All microservices’ api-docs are added as Swagger resources. The location address is proxied via the Zuul gateway. Here’s the gateway route configuration.

zuul:
  prefix: /api
  routes:
    account:
      path: /account/**
      serviceId: account-service
    customer:
      path: /customer/**
      serviceId: customer-service
    product:
      path: /product/**
      serviceId: product-service
    transfer:
      path: /transfer/**
      serviceId: transfer-service
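Given the route convention above (service X-service exposed behind the /api/X/** prefix), the resource locations registered in DocumentationController can be derived mechanically. A hypothetical helper illustrating the mapping:

```java
// Derives the proxied api-docs location for a service named "<name>-service"
// routed behind the /api/<name>/** Zuul route shown above.
class SwaggerLocation {

    static String apiDocsPath(String serviceId) {
        String name = serviceId.replaceAll("-service$", "");
        return "/api/" + name + "/v2/api-docs";
    }
}
```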

Now, the API documentation is available under the gateway address http://localhost:8765/swagger-ui.html. You can see how it looks for the account-service in the picture below. We can select the source service in the combo box placed inside the title panel.

swagger-1

The documentation’s appearance can be easily customized by providing a UiConfiguration @Bean. In the code below I changed the default operations expansion level by setting “list” as the second constructor parameter – docExpansion.

@Bean
UiConfiguration uiConfig() {
   return new UiConfiguration("validatorUrl", "list", "alpha", "schema",
UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, false, true, 60000L);
}

You can expand every operation to see the details. Every operation can be tested by providing the required parameters and clicking the Try it out! button.

swagger-2

swagger-3

Sample application source code is available on GitHub.

The post Microservices API Documentation with Swagger2 appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/04/14/microservices-api-documentation-with-swagger2/feed/ 42 2400